Oct 11 03:35:57 np0005481065 kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 11 03:35:57 np0005481065 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 11 03:35:57 np0005481065 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:35:57 np0005481065 kernel: BIOS-provided physical RAM map:
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 11 03:35:57 np0005481065 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 11 03:35:57 np0005481065 kernel: NX (Execute Disable) protection: active
Oct 11 03:35:57 np0005481065 kernel: APIC: Static calls initialized
Oct 11 03:35:57 np0005481065 kernel: SMBIOS 2.8 present.
Oct 11 03:35:57 np0005481065 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 11 03:35:57 np0005481065 kernel: Hypervisor detected: KVM
Oct 11 03:35:57 np0005481065 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 11 03:35:57 np0005481065 kernel: kvm-clock: using sched offset of 3969803700 cycles
Oct 11 03:35:57 np0005481065 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 11 03:35:57 np0005481065 kernel: tsc: Detected 2800.000 MHz processor
Oct 11 03:35:57 np0005481065 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 11 03:35:57 np0005481065 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 11 03:35:57 np0005481065 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 11 03:35:57 np0005481065 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 11 03:35:57 np0005481065 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 11 03:35:57 np0005481065 kernel: Using GB pages for direct mapping
Oct 11 03:35:57 np0005481065 kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 11 03:35:57 np0005481065 kernel: ACPI: Early table checksum verification disabled
Oct 11 03:35:57 np0005481065 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 11 03:35:57 np0005481065 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:35:57 np0005481065 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:35:57 np0005481065 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:35:57 np0005481065 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 11 03:35:57 np0005481065 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:35:57 np0005481065 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 03:35:57 np0005481065 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 11 03:35:57 np0005481065 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 11 03:35:57 np0005481065 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 11 03:35:57 np0005481065 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 11 03:35:57 np0005481065 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 11 03:35:57 np0005481065 kernel: No NUMA configuration found
Oct 11 03:35:57 np0005481065 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 11 03:35:57 np0005481065 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 11 03:35:57 np0005481065 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 11 03:35:57 np0005481065 kernel: Zone ranges:
Oct 11 03:35:57 np0005481065 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 11 03:35:57 np0005481065 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 11 03:35:57 np0005481065 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 03:35:57 np0005481065 kernel:  Device   empty
Oct 11 03:35:57 np0005481065 kernel: Movable zone start for each node
Oct 11 03:35:57 np0005481065 kernel: Early memory node ranges
Oct 11 03:35:57 np0005481065 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 11 03:35:57 np0005481065 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 11 03:35:57 np0005481065 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 03:35:57 np0005481065 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 11 03:35:57 np0005481065 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 11 03:35:57 np0005481065 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 11 03:35:57 np0005481065 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 11 03:35:57 np0005481065 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 11 03:35:57 np0005481065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 11 03:35:57 np0005481065 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 11 03:35:57 np0005481065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 11 03:35:57 np0005481065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 11 03:35:57 np0005481065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 11 03:35:57 np0005481065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 11 03:35:57 np0005481065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 11 03:35:57 np0005481065 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 11 03:35:57 np0005481065 kernel: TSC deadline timer available
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Max. logical packages:   8
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Max. logical dies:       8
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Max. dies per package:   1
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Max. threads per core:   1
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Num. cores per package:     1
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Num. threads per package:   1
Oct 11 03:35:57 np0005481065 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 11 03:35:57 np0005481065 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 11 03:35:57 np0005481065 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 11 03:35:57 np0005481065 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 11 03:35:57 np0005481065 kernel: Booting paravirtualized kernel on KVM
Oct 11 03:35:57 np0005481065 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 11 03:35:57 np0005481065 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 11 03:35:57 np0005481065 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 11 03:35:57 np0005481065 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 11 03:35:57 np0005481065 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:35:57 np0005481065 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 11 03:35:57 np0005481065 kernel: random: crng init done
Oct 11 03:35:57 np0005481065 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: Fallback order for Node 0: 0 
Oct 11 03:35:57 np0005481065 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 11 03:35:57 np0005481065 kernel: Policy zone: Normal
Oct 11 03:35:57 np0005481065 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 11 03:35:57 np0005481065 kernel: software IO TLB: area num 8.
Oct 11 03:35:57 np0005481065 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 11 03:35:57 np0005481065 kernel: ftrace: allocating 49162 entries in 193 pages
Oct 11 03:35:57 np0005481065 kernel: ftrace: allocated 193 pages with 3 groups
Oct 11 03:35:57 np0005481065 kernel: Dynamic Preempt: voluntary
Oct 11 03:35:57 np0005481065 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 11 03:35:57 np0005481065 kernel: rcu: 	RCU event tracing is enabled.
Oct 11 03:35:57 np0005481065 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 11 03:35:57 np0005481065 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 11 03:35:57 np0005481065 kernel: 	Rude variant of Tasks RCU enabled.
Oct 11 03:35:57 np0005481065 kernel: 	Tracing variant of Tasks RCU enabled.
Oct 11 03:35:57 np0005481065 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 11 03:35:57 np0005481065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 11 03:35:57 np0005481065 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:35:57 np0005481065 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:35:57 np0005481065 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 03:35:57 np0005481065 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 11 03:35:57 np0005481065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 11 03:35:57 np0005481065 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 11 03:35:57 np0005481065 kernel: Console: colour VGA+ 80x25
Oct 11 03:35:57 np0005481065 kernel: printk: console [ttyS0] enabled
Oct 11 03:35:57 np0005481065 kernel: ACPI: Core revision 20230331
Oct 11 03:35:57 np0005481065 kernel: APIC: Switch to symmetric I/O mode setup
Oct 11 03:35:57 np0005481065 kernel: x2apic enabled
Oct 11 03:35:57 np0005481065 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 11 03:35:57 np0005481065 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 11 03:35:57 np0005481065 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 11 03:35:57 np0005481065 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 11 03:35:57 np0005481065 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 11 03:35:57 np0005481065 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 11 03:35:57 np0005481065 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 11 03:35:57 np0005481065 kernel: Spectre V2 : Mitigation: Retpolines
Oct 11 03:35:57 np0005481065 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 11 03:35:57 np0005481065 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 11 03:35:57 np0005481065 kernel: RETBleed: Mitigation: untrained return thunk
Oct 11 03:35:57 np0005481065 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 11 03:35:57 np0005481065 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 11 03:35:57 np0005481065 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 11 03:35:57 np0005481065 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 11 03:35:57 np0005481065 kernel: x86/bugs: return thunk changed
Oct 11 03:35:57 np0005481065 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 11 03:35:57 np0005481065 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 11 03:35:57 np0005481065 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 11 03:35:57 np0005481065 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 11 03:35:57 np0005481065 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 11 03:35:57 np0005481065 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 11 03:35:57 np0005481065 kernel: Freeing SMP alternatives memory: 40K
Oct 11 03:35:57 np0005481065 kernel: pid_max: default: 32768 minimum: 301
Oct 11 03:35:57 np0005481065 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 11 03:35:57 np0005481065 kernel: landlock: Up and running.
Oct 11 03:35:57 np0005481065 kernel: Yama: becoming mindful.
Oct 11 03:35:57 np0005481065 kernel: SELinux:  Initializing.
Oct 11 03:35:57 np0005481065 kernel: LSM support for eBPF active
Oct 11 03:35:57 np0005481065 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 11 03:35:57 np0005481065 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 11 03:35:57 np0005481065 kernel: ... version:                0
Oct 11 03:35:57 np0005481065 kernel: ... bit width:              48
Oct 11 03:35:57 np0005481065 kernel: ... generic registers:      6
Oct 11 03:35:57 np0005481065 kernel: ... value mask:             0000ffffffffffff
Oct 11 03:35:57 np0005481065 kernel: ... max period:             00007fffffffffff
Oct 11 03:35:57 np0005481065 kernel: ... fixed-purpose events:   0
Oct 11 03:35:57 np0005481065 kernel: ... event mask:             000000000000003f
Oct 11 03:35:57 np0005481065 kernel: signal: max sigframe size: 1776
Oct 11 03:35:57 np0005481065 kernel: rcu: Hierarchical SRCU implementation.
Oct 11 03:35:57 np0005481065 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 11 03:35:57 np0005481065 kernel: smp: Bringing up secondary CPUs ...
Oct 11 03:35:57 np0005481065 kernel: smpboot: x86: Booting SMP configuration:
Oct 11 03:35:57 np0005481065 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 11 03:35:57 np0005481065 kernel: smp: Brought up 1 node, 8 CPUs
Oct 11 03:35:57 np0005481065 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 11 03:35:57 np0005481065 kernel: node 0 deferred pages initialised in 9ms
Oct 11 03:35:57 np0005481065 kernel: Memory: 7765960K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616204K reserved, 0K cma-reserved)
Oct 11 03:35:57 np0005481065 kernel: devtmpfs: initialized
Oct 11 03:35:57 np0005481065 kernel: x86/mm: Memory block size: 128MB
Oct 11 03:35:57 np0005481065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 11 03:35:57 np0005481065 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: pinctrl core: initialized pinctrl subsystem
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 11 03:35:57 np0005481065 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 11 03:35:57 np0005481065 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 11 03:35:57 np0005481065 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 11 03:35:57 np0005481065 kernel: audit: initializing netlink subsys (disabled)
Oct 11 03:35:57 np0005481065 kernel: audit: type=2000 audit(1760168155.621:1): state=initialized audit_enabled=0 res=1
Oct 11 03:35:57 np0005481065 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 11 03:35:57 np0005481065 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 11 03:35:57 np0005481065 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 11 03:35:57 np0005481065 kernel: cpuidle: using governor menu
Oct 11 03:35:57 np0005481065 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 11 03:35:57 np0005481065 kernel: PCI: Using configuration type 1 for base access
Oct 11 03:35:57 np0005481065 kernel: PCI: Using configuration type 1 for extended access
Oct 11 03:35:57 np0005481065 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 11 03:35:57 np0005481065 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 11 03:35:57 np0005481065 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 11 03:35:57 np0005481065 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 11 03:35:57 np0005481065 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 11 03:35:57 np0005481065 kernel: Demotion targets for Node 0: null
Oct 11 03:35:57 np0005481065 kernel: cryptd: max_cpu_qlen set to 1000
Oct 11 03:35:57 np0005481065 kernel: ACPI: Added _OSI(Module Device)
Oct 11 03:35:57 np0005481065 kernel: ACPI: Added _OSI(Processor Device)
Oct 11 03:35:57 np0005481065 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 11 03:35:57 np0005481065 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 11 03:35:57 np0005481065 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 11 03:35:57 np0005481065 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 11 03:35:57 np0005481065 kernel: ACPI: Interpreter enabled
Oct 11 03:35:57 np0005481065 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 11 03:35:57 np0005481065 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 11 03:35:57 np0005481065 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 11 03:35:57 np0005481065 kernel: PCI: Using E820 reservations for host bridge windows
Oct 11 03:35:57 np0005481065 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 11 03:35:57 np0005481065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 11 03:35:57 np0005481065 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [3] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [4] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [5] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [6] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [7] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [8] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [9] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [10] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [11] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [12] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [13] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [14] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [15] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [16] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [17] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [18] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [19] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [20] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [21] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [22] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [23] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [24] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [25] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [26] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [27] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [28] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [29] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [30] registered
Oct 11 03:35:57 np0005481065 kernel: acpiphp: Slot [31] registered
Oct 11 03:35:57 np0005481065 kernel: PCI host bridge to bus 0000:00
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 11 03:35:57 np0005481065 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 11 03:35:57 np0005481065 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 11 03:35:57 np0005481065 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 11 03:35:57 np0005481065 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 11 03:35:57 np0005481065 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 11 03:35:57 np0005481065 kernel: iommu: Default domain type: Translated
Oct 11 03:35:57 np0005481065 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 11 03:35:57 np0005481065 kernel: SCSI subsystem initialized
Oct 11 03:35:57 np0005481065 kernel: ACPI: bus type USB registered
Oct 11 03:35:57 np0005481065 kernel: usbcore: registered new interface driver usbfs
Oct 11 03:35:57 np0005481065 kernel: usbcore: registered new interface driver hub
Oct 11 03:35:57 np0005481065 kernel: usbcore: registered new device driver usb
Oct 11 03:35:57 np0005481065 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 11 03:35:57 np0005481065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 11 03:35:57 np0005481065 kernel: PTP clock support registered
Oct 11 03:35:57 np0005481065 kernel: EDAC MC: Ver: 3.0.0
Oct 11 03:35:57 np0005481065 kernel: NetLabel: Initializing
Oct 11 03:35:57 np0005481065 kernel: NetLabel:  domain hash size = 128
Oct 11 03:35:57 np0005481065 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 11 03:35:57 np0005481065 kernel: NetLabel:  unlabeled traffic allowed by default
Oct 11 03:35:57 np0005481065 kernel: PCI: Using ACPI for IRQ routing
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 11 03:35:57 np0005481065 kernel: vgaarb: loaded
Oct 11 03:35:57 np0005481065 kernel: clocksource: Switched to clocksource kvm-clock
Oct 11 03:35:57 np0005481065 kernel: VFS: Disk quotas dquot_6.6.0
Oct 11 03:35:57 np0005481065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 11 03:35:57 np0005481065 kernel: pnp: PnP ACPI init
Oct 11 03:35:57 np0005481065 kernel: pnp: PnP ACPI: found 5 devices
Oct 11 03:35:57 np0005481065 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_INET protocol family
Oct 11 03:35:57 np0005481065 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 11 03:35:57 np0005481065 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_XDP protocol family
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 11 03:35:57 np0005481065 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 11 03:35:57 np0005481065 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 11 03:35:57 np0005481065 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 100214 usecs
Oct 11 03:35:57 np0005481065 kernel: PCI: CLS 0 bytes, default 64
Oct 11 03:35:57 np0005481065 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 11 03:35:57 np0005481065 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 11 03:35:57 np0005481065 kernel: ACPI: bus type thunderbolt registered
Oct 11 03:35:57 np0005481065 kernel: Trying to unpack rootfs image as initramfs...
Oct 11 03:35:57 np0005481065 kernel: Initialise system trusted keyrings
Oct 11 03:35:57 np0005481065 kernel: Key type blacklist registered
Oct 11 03:35:57 np0005481065 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 11 03:35:57 np0005481065 kernel: zbud: loaded
Oct 11 03:35:57 np0005481065 kernel: integrity: Platform Keyring initialized
Oct 11 03:35:57 np0005481065 kernel: integrity: Machine keyring initialized
Oct 11 03:35:57 np0005481065 kernel: Freeing initrd memory: 85808K
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_ALG protocol family
Oct 11 03:35:57 np0005481065 kernel: xor: automatically using best checksumming function   avx       
Oct 11 03:35:57 np0005481065 kernel: Key type asymmetric registered
Oct 11 03:35:57 np0005481065 kernel: Asymmetric key parser 'x509' registered
Oct 11 03:35:57 np0005481065 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 11 03:35:57 np0005481065 kernel: io scheduler mq-deadline registered
Oct 11 03:35:57 np0005481065 kernel: io scheduler kyber registered
Oct 11 03:35:57 np0005481065 kernel: io scheduler bfq registered
Oct 11 03:35:57 np0005481065 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 11 03:35:57 np0005481065 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 11 03:35:57 np0005481065 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 11 03:35:57 np0005481065 kernel: ACPI: button: Power Button [PWRF]
Oct 11 03:35:57 np0005481065 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 11 03:35:57 np0005481065 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 11 03:35:57 np0005481065 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 11 03:35:57 np0005481065 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 11 03:35:57 np0005481065 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 11 03:35:57 np0005481065 kernel: Non-volatile memory driver v1.3
Oct 11 03:35:57 np0005481065 kernel: rdac: device handler registered
Oct 11 03:35:57 np0005481065 kernel: hp_sw: device handler registered
Oct 11 03:35:57 np0005481065 kernel: emc: device handler registered
Oct 11 03:35:57 np0005481065 kernel: alua: device handler registered
Oct 11 03:35:57 np0005481065 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 11 03:35:57 np0005481065 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 11 03:35:57 np0005481065 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 11 03:35:57 np0005481065 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 11 03:35:57 np0005481065 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 11 03:35:57 np0005481065 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 11 03:35:57 np0005481065 kernel: usb usb1: Product: UHCI Host Controller
Oct 11 03:35:57 np0005481065 kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 11 03:35:57 np0005481065 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 11 03:35:57 np0005481065 kernel: hub 1-0:1.0: USB hub found
Oct 11 03:35:57 np0005481065 kernel: hub 1-0:1.0: 2 ports detected
Oct 11 03:35:57 np0005481065 kernel: usbcore: registered new interface driver usbserial_generic
Oct 11 03:35:57 np0005481065 kernel: usbserial: USB Serial support registered for generic
Oct 11 03:35:57 np0005481065 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 11 03:35:57 np0005481065 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 11 03:35:57 np0005481065 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 11 03:35:57 np0005481065 kernel: mousedev: PS/2 mouse device common for all mice
Oct 11 03:35:57 np0005481065 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 11 03:35:57 np0005481065 kernel: rtc_cmos 00:04: registered as rtc0
Oct 11 03:35:57 np0005481065 kernel: rtc_cmos 00:04: setting system clock to 2025-10-11T07:35:56 UTC (1760168156)
Oct 11 03:35:57 np0005481065 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 11 03:35:57 np0005481065 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 11 03:35:57 np0005481065 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 11 03:35:57 np0005481065 kernel: usbcore: registered new interface driver usbhid
Oct 11 03:35:57 np0005481065 kernel: usbhid: USB HID core driver
Oct 11 03:35:57 np0005481065 kernel: drop_monitor: Initializing network drop monitor service
Oct 11 03:35:57 np0005481065 kernel: Initializing XFRM netlink socket
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_INET6 protocol family
Oct 11 03:35:57 np0005481065 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 11 03:35:57 np0005481065 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 11 03:35:57 np0005481065 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 11 03:35:57 np0005481065 kernel: Segment Routing with IPv6
Oct 11 03:35:57 np0005481065 kernel: NET: Registered PF_PACKET protocol family
Oct 11 03:35:57 np0005481065 kernel: mpls_gso: MPLS GSO support
Oct 11 03:35:57 np0005481065 kernel: IPI shorthand broadcast: enabled
Oct 11 03:35:57 np0005481065 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 11 03:35:57 np0005481065 kernel: AES CTR mode by8 optimization enabled
Oct 11 03:35:57 np0005481065 kernel: sched_clock: Marking stable (1309001600, 139560770)->(1523055570, -74493200)
Oct 11 03:35:57 np0005481065 kernel: registered taskstats version 1
Oct 11 03:35:57 np0005481065 kernel: Loading compiled-in X.509 certificates
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 11 03:35:57 np0005481065 kernel: Demotion targets for Node 0: null
Oct 11 03:35:57 np0005481065 kernel: page_owner is disabled
Oct 11 03:35:57 np0005481065 kernel: Key type .fscrypt registered
Oct 11 03:35:57 np0005481065 kernel: Key type fscrypt-provisioning registered
Oct 11 03:35:57 np0005481065 kernel: Key type big_key registered
Oct 11 03:35:57 np0005481065 kernel: Key type encrypted registered
Oct 11 03:35:57 np0005481065 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 11 03:35:57 np0005481065 kernel: Loading compiled-in module X.509 certificates
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 03:35:57 np0005481065 kernel: ima: Allocated hash algorithm: sha256
Oct 11 03:35:57 np0005481065 kernel: ima: No architecture policies found
Oct 11 03:35:57 np0005481065 kernel: evm: Initialising EVM extended attributes:
Oct 11 03:35:57 np0005481065 kernel: evm: security.selinux
Oct 11 03:35:57 np0005481065 kernel: evm: security.SMACK64 (disabled)
Oct 11 03:35:57 np0005481065 kernel: evm: security.SMACK64EXEC (disabled)
Oct 11 03:35:57 np0005481065 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 11 03:35:57 np0005481065 kernel: evm: security.SMACK64MMAP (disabled)
Oct 11 03:35:57 np0005481065 kernel: evm: security.apparmor (disabled)
Oct 11 03:35:57 np0005481065 kernel: evm: security.ima
Oct 11 03:35:57 np0005481065 kernel: evm: security.capability
Oct 11 03:35:57 np0005481065 kernel: evm: HMAC attrs: 0x1
Oct 11 03:35:57 np0005481065 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 11 03:35:57 np0005481065 kernel: Running certificate verification RSA selftest
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 11 03:35:57 np0005481065 kernel: Running certificate verification ECDSA selftest
Oct 11 03:35:57 np0005481065 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 11 03:35:57 np0005481065 kernel: clk: Disabling unused clocks
Oct 11 03:35:57 np0005481065 kernel: Freeing unused decrypted memory: 2028K
Oct 11 03:35:57 np0005481065 kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 11 03:35:57 np0005481065 kernel: Write protecting the kernel read-only data: 30720k
Oct 11 03:35:57 np0005481065 kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 11 03:35:57 np0005481065 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 11 03:35:57 np0005481065 kernel: Run /init as init process
Oct 11 03:35:57 np0005481065 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 03:35:57 np0005481065 systemd: Detected virtualization kvm.
Oct 11 03:35:57 np0005481065 systemd: Detected architecture x86-64.
Oct 11 03:35:57 np0005481065 systemd: Running in initrd.
Oct 11 03:35:57 np0005481065 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 11 03:35:57 np0005481065 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 11 03:35:57 np0005481065 kernel: usb 1-1: Product: QEMU USB Tablet
Oct 11 03:35:57 np0005481065 kernel: usb 1-1: Manufacturer: QEMU
Oct 11 03:35:57 np0005481065 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 11 03:35:57 np0005481065 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 11 03:35:57 np0005481065 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 11 03:35:57 np0005481065 systemd: No hostname configured, using default hostname.
Oct 11 03:35:57 np0005481065 systemd: Hostname set to <localhost>.
Oct 11 03:35:57 np0005481065 systemd: Initializing machine ID from VM UUID.
Oct 11 03:35:57 np0005481065 systemd: Queued start job for default target Initrd Default Target.
Oct 11 03:35:57 np0005481065 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 03:35:57 np0005481065 systemd: Reached target Local Encrypted Volumes.
Oct 11 03:35:57 np0005481065 systemd: Reached target Initrd /usr File System.
Oct 11 03:35:57 np0005481065 systemd: Reached target Local File Systems.
Oct 11 03:35:57 np0005481065 systemd: Reached target Path Units.
Oct 11 03:35:57 np0005481065 systemd: Reached target Slice Units.
Oct 11 03:35:57 np0005481065 systemd: Reached target Swaps.
Oct 11 03:35:57 np0005481065 systemd: Reached target Timer Units.
Oct 11 03:35:57 np0005481065 systemd: Listening on D-Bus System Message Bus Socket.
Oct 11 03:35:57 np0005481065 systemd: Listening on Journal Socket (/dev/log).
Oct 11 03:35:57 np0005481065 systemd: Listening on Journal Socket.
Oct 11 03:35:57 np0005481065 systemd: Listening on udev Control Socket.
Oct 11 03:35:57 np0005481065 systemd: Listening on udev Kernel Socket.
Oct 11 03:35:57 np0005481065 systemd: Reached target Socket Units.
Oct 11 03:35:57 np0005481065 systemd: Starting Create List of Static Device Nodes...
Oct 11 03:35:57 np0005481065 systemd: Starting Journal Service...
Oct 11 03:35:57 np0005481065 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 03:35:57 np0005481065 systemd: Starting Apply Kernel Variables...
Oct 11 03:35:57 np0005481065 systemd: Starting Create System Users...
Oct 11 03:35:57 np0005481065 systemd: Starting Setup Virtual Console...
Oct 11 03:35:57 np0005481065 systemd: Finished Create List of Static Device Nodes.
Oct 11 03:35:57 np0005481065 systemd: Finished Apply Kernel Variables.
Oct 11 03:35:57 np0005481065 systemd: Finished Create System Users.
Oct 11 03:35:57 np0005481065 systemd-journald[306]: Journal started
Oct 11 03:35:57 np0005481065 systemd-journald[306]: Runtime Journal (/run/log/journal/441009aeb83147c3ab1edc607f9feb6d) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:35:57 np0005481065 systemd-sysusers[309]: Creating group 'users' with GID 100.
Oct 11 03:35:57 np0005481065 systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Oct 11 03:35:57 np0005481065 systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 11 03:35:57 np0005481065 systemd: Started Journal Service.
Oct 11 03:35:57 np0005481065 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 03:35:57 np0005481065 systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 03:35:57 np0005481065 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 03:35:57 np0005481065 systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 03:35:57 np0005481065 systemd[1]: Finished Setup Virtual Console.
Oct 11 03:35:57 np0005481065 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 11 03:35:57 np0005481065 systemd[1]: Starting dracut cmdline hook...
Oct 11 03:35:57 np0005481065 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 11 03:35:57 np0005481065 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 03:35:58 np0005481065 systemd[1]: Finished dracut cmdline hook.
Oct 11 03:35:58 np0005481065 systemd[1]: Starting dracut pre-udev hook...
Oct 11 03:35:58 np0005481065 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 11 03:35:58 np0005481065 kernel: device-mapper: uevent: version 1.0.3
Oct 11 03:35:58 np0005481065 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 11 03:35:58 np0005481065 kernel: RPC: Registered named UNIX socket transport module.
Oct 11 03:35:58 np0005481065 kernel: RPC: Registered udp transport module.
Oct 11 03:35:58 np0005481065 kernel: RPC: Registered tcp transport module.
Oct 11 03:35:58 np0005481065 kernel: RPC: Registered tcp-with-tls transport module.
Oct 11 03:35:58 np0005481065 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 11 03:35:58 np0005481065 rpc.statd[442]: Version 2.5.4 starting
Oct 11 03:35:58 np0005481065 rpc.statd[442]: Initializing NSM state
Oct 11 03:35:58 np0005481065 rpc.idmapd[447]: Setting log level to 0
Oct 11 03:35:58 np0005481065 systemd[1]: Finished dracut pre-udev hook.
Oct 11 03:35:58 np0005481065 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 03:35:58 np0005481065 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 03:35:58 np0005481065 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 03:35:58 np0005481065 systemd[1]: Starting dracut pre-trigger hook...
Oct 11 03:35:58 np0005481065 systemd[1]: Finished dracut pre-trigger hook.
Oct 11 03:35:58 np0005481065 systemd[1]: Starting Coldplug All udev Devices...
Oct 11 03:35:58 np0005481065 systemd[1]: Created slice Slice /system/modprobe.
Oct 11 03:35:58 np0005481065 systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:35:58 np0005481065 systemd[1]: Finished Coldplug All udev Devices.
Oct 11 03:35:58 np0005481065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:35:58 np0005481065 systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:35:58 np0005481065 systemd[1]: Mounting Kernel Configuration File System...
Oct 11 03:35:58 np0005481065 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 03:35:58 np0005481065 systemd[1]: Reached target Network.
Oct 11 03:35:58 np0005481065 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 03:35:58 np0005481065 systemd[1]: Starting dracut initqueue hook...
Oct 11 03:35:58 np0005481065 systemd[1]: Mounted Kernel Configuration File System.
Oct 11 03:35:58 np0005481065 systemd[1]: Reached target System Initialization.
Oct 11 03:35:58 np0005481065 systemd[1]: Reached target Basic System.
Oct 11 03:35:58 np0005481065 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 11 03:35:58 np0005481065 kernel: scsi host0: ata_piix
Oct 11 03:35:58 np0005481065 kernel: scsi host1: ata_piix
Oct 11 03:35:58 np0005481065 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 11 03:35:58 np0005481065 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 11 03:35:58 np0005481065 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 11 03:35:58 np0005481065 kernel: vda: vda1
Oct 11 03:35:58 np0005481065 kernel: ata1: found unknown device (class 0)
Oct 11 03:35:58 np0005481065 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 11 03:35:58 np0005481065 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 11 03:35:58 np0005481065 systemd-udevd[491]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:35:58 np0005481065 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 11 03:35:58 np0005481065 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 11 03:35:58 np0005481065 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 11 03:35:58 np0005481065 systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 03:35:58 np0005481065 systemd[1]: Reached target Initrd Root Device.
Oct 11 03:35:59 np0005481065 systemd[1]: Finished dracut initqueue hook.
Oct 11 03:35:59 np0005481065 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 03:35:59 np0005481065 systemd[1]: Reached target Remote Encrypted Volumes.
Oct 11 03:35:59 np0005481065 systemd[1]: Reached target Remote File Systems.
Oct 11 03:35:59 np0005481065 systemd[1]: Starting dracut pre-mount hook...
Oct 11 03:35:59 np0005481065 systemd[1]: Finished dracut pre-mount hook.
Oct 11 03:35:59 np0005481065 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 11 03:35:59 np0005481065 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Oct 11 03:35:59 np0005481065 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 03:35:59 np0005481065 systemd[1]: Mounting /sysroot...
Oct 11 03:35:59 np0005481065 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 11 03:35:59 np0005481065 kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 11 03:35:59 np0005481065 kernel: XFS (vda1): Ending clean mount
Oct 11 03:35:59 np0005481065 systemd[1]: Mounted /sysroot.
Oct 11 03:35:59 np0005481065 systemd[1]: Reached target Initrd Root File System.
Oct 11 03:35:59 np0005481065 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 11 03:35:59 np0005481065 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 11 03:35:59 np0005481065 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 11 03:35:59 np0005481065 systemd[1]: Reached target Initrd File Systems.
Oct 11 03:35:59 np0005481065 systemd[1]: Reached target Initrd Default Target.
Oct 11 03:35:59 np0005481065 systemd[1]: Starting dracut mount hook...
Oct 11 03:35:59 np0005481065 systemd[1]: Finished dracut mount hook.
Oct 11 03:35:59 np0005481065 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 11 03:35:59 np0005481065 rpc.idmapd[447]: exiting on signal 15
Oct 11 03:35:59 np0005481065 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 11 03:35:59 np0005481065 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 11 03:35:59 np0005481065 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Network.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Timer Units.
Oct 11 03:36:00 np0005481065 systemd[1]: dbus.socket: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Initrd Default Target.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Basic System.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Initrd Root Device.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Initrd /usr File System.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Path Units.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Remote File Systems.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Slice Units.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Socket Units.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target System Initialization.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Local File Systems.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Swaps.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut mount hook.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut pre-mount hook.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped target Local Encrypted Volumes.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut initqueue hook.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Apply Kernel Variables.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Create Volatile Files and Directories.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Coldplug All udev Devices.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut pre-trigger hook.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Setup Virtual Console.
Oct 11 03:36:00 np0005481065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Closed udev Control Socket.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Closed udev Kernel Socket.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut pre-udev hook.
Oct 11 03:36:00 np0005481065 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped dracut cmdline hook.
Oct 11 03:36:00 np0005481065 systemd[1]: Starting Cleanup udev Database...
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 11 03:36:00 np0005481065 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Create List of Static Device Nodes.
Oct 11 03:36:00 np0005481065 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Stopped Create System Users.
Oct 11 03:36:00 np0005481065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 11 03:36:00 np0005481065 systemd[1]: Finished Cleanup udev Database.
Oct 11 03:36:00 np0005481065 systemd[1]: Reached target Switch Root.
Oct 11 03:36:00 np0005481065 systemd[1]: Starting Switch Root...
Oct 11 03:36:00 np0005481065 systemd[1]: Switching root.
Oct 11 03:36:00 np0005481065 systemd-journald[306]: Journal stopped
Oct 11 03:36:01 np0005481065 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct 11 03:36:01 np0005481065 kernel: audit: type=1404 audit(1760168160.348:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:36:01 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:36:01 np0005481065 kernel: audit: type=1403 audit(1760168160.494:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 11 03:36:01 np0005481065 systemd: Successfully loaded SELinux policy in 151.300ms.
Oct 11 03:36:01 np0005481065 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.767ms.
Oct 11 03:36:01 np0005481065 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 03:36:01 np0005481065 systemd: Detected virtualization kvm.
Oct 11 03:36:01 np0005481065 systemd: Detected architecture x86-64.
Oct 11 03:36:01 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:36:01 np0005481065 systemd: initrd-switch-root.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd: Stopped Switch Root.
Oct 11 03:36:01 np0005481065 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 11 03:36:01 np0005481065 systemd: Created slice Slice /system/getty.
Oct 11 03:36:01 np0005481065 systemd: Created slice Slice /system/serial-getty.
Oct 11 03:36:01 np0005481065 systemd: Created slice Slice /system/sshd-keygen.
Oct 11 03:36:01 np0005481065 systemd: Created slice User and Session Slice.
Oct 11 03:36:01 np0005481065 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 03:36:01 np0005481065 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct 11 03:36:01 np0005481065 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 11 03:36:01 np0005481065 systemd: Reached target Local Encrypted Volumes.
Oct 11 03:36:01 np0005481065 systemd: Stopped target Switch Root.
Oct 11 03:36:01 np0005481065 systemd: Stopped target Initrd File Systems.
Oct 11 03:36:01 np0005481065 systemd: Stopped target Initrd Root File System.
Oct 11 03:36:01 np0005481065 systemd: Reached target Local Integrity Protected Volumes.
Oct 11 03:36:01 np0005481065 systemd: Reached target Path Units.
Oct 11 03:36:01 np0005481065 systemd: Reached target rpc_pipefs.target.
Oct 11 03:36:01 np0005481065 systemd: Reached target Slice Units.
Oct 11 03:36:01 np0005481065 systemd: Reached target Swaps.
Oct 11 03:36:01 np0005481065 systemd: Reached target Local Verity Protected Volumes.
Oct 11 03:36:01 np0005481065 systemd: Listening on RPCbind Server Activation Socket.
Oct 11 03:36:01 np0005481065 systemd: Reached target RPC Port Mapper.
Oct 11 03:36:01 np0005481065 systemd: Listening on Process Core Dump Socket.
Oct 11 03:36:01 np0005481065 systemd: Listening on initctl Compatibility Named Pipe.
Oct 11 03:36:01 np0005481065 systemd: Listening on udev Control Socket.
Oct 11 03:36:01 np0005481065 systemd: Listening on udev Kernel Socket.
Oct 11 03:36:01 np0005481065 systemd: Mounting Huge Pages File System...
Oct 11 03:36:01 np0005481065 systemd: Mounting POSIX Message Queue File System...
Oct 11 03:36:01 np0005481065 systemd: Mounting Kernel Debug File System...
Oct 11 03:36:01 np0005481065 systemd: Mounting Kernel Trace File System...
Oct 11 03:36:01 np0005481065 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 11 03:36:01 np0005481065 systemd: Starting Create List of Static Device Nodes...
Oct 11 03:36:01 np0005481065 systemd: Starting Load Kernel Module configfs...
Oct 11 03:36:01 np0005481065 systemd: Starting Load Kernel Module drm...
Oct 11 03:36:01 np0005481065 systemd: Starting Load Kernel Module efi_pstore...
Oct 11 03:36:01 np0005481065 systemd: Starting Load Kernel Module fuse...
Oct 11 03:36:01 np0005481065 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 11 03:36:01 np0005481065 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd: Stopped File System Check on Root Device.
Oct 11 03:36:01 np0005481065 systemd: Stopped Journal Service.
Oct 11 03:36:01 np0005481065 systemd: Starting Journal Service...
Oct 11 03:36:01 np0005481065 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 03:36:01 np0005481065 systemd: Starting Generate network units from Kernel command line...
Oct 11 03:36:01 np0005481065 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:36:01 np0005481065 systemd: Starting Remount Root and Kernel File Systems...
Oct 11 03:36:01 np0005481065 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 11 03:36:01 np0005481065 systemd: Starting Apply Kernel Variables...
Oct 11 03:36:01 np0005481065 kernel: fuse: init (API version 7.37)
Oct 11 03:36:01 np0005481065 systemd: Starting Coldplug All udev Devices...
Oct 11 03:36:01 np0005481065 systemd: Mounted Huge Pages File System.
Oct 11 03:36:01 np0005481065 systemd: Mounted POSIX Message Queue File System.
Oct 11 03:36:01 np0005481065 systemd: Mounted Kernel Debug File System.
Oct 11 03:36:01 np0005481065 systemd: Mounted Kernel Trace File System.
Oct 11 03:36:01 np0005481065 systemd: Finished Create List of Static Device Nodes.
Oct 11 03:36:01 np0005481065 systemd: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd: Finished Load Kernel Module configfs.
Oct 11 03:36:01 np0005481065 systemd-journald[675]: Journal started
Oct 11 03:36:01 np0005481065 systemd-journald[675]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:36:01 np0005481065 systemd[1]: Queued start job for default target Multi-User System.
Oct 11 03:36:01 np0005481065 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 11 03:36:01 np0005481065 systemd: Started Journal Service.
Oct 11 03:36:01 np0005481065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 11 03:36:01 np0005481065 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Load Kernel Module fuse.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Generate network units from Kernel command line.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Apply Kernel Variables.
Oct 11 03:36:01 np0005481065 kernel: ACPI: bus type drm_connector registered
Oct 11 03:36:01 np0005481065 systemd[1]: Mounting FUSE Control File System...
Oct 11 03:36:01 np0005481065 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Rebuild Hardware Database...
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 11 03:36:01 np0005481065 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Load/Save OS Random Seed...
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Create System Users...
Oct 11 03:36:01 np0005481065 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Load Kernel Module drm.
Oct 11 03:36:01 np0005481065 systemd[1]: Mounted FUSE Control File System.
Oct 11 03:36:01 np0005481065 systemd-journald[675]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 11 03:36:01 np0005481065 systemd-journald[675]: Received client request to flush runtime journal.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Load/Save OS Random Seed.
Oct 11 03:36:01 np0005481065 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Create System Users.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Coldplug All udev Devices.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 03:36:01 np0005481065 systemd[1]: Reached target Preparation for Local File Systems.
Oct 11 03:36:01 np0005481065 systemd[1]: Reached target Local File Systems.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 11 03:36:01 np0005481065 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 11 03:36:01 np0005481065 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 11 03:36:01 np0005481065 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Automatic Boot Loader Update...
Oct 11 03:36:01 np0005481065 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 03:36:01 np0005481065 bootctl[694]: Couldn't find EFI system partition, skipping.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Automatic Boot Loader Update.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Security Auditing Service...
Oct 11 03:36:01 np0005481065 systemd[1]: Starting RPC Bind...
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Rebuild Journal Catalog...
Oct 11 03:36:01 np0005481065 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 11 03:36:01 np0005481065 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Rebuild Journal Catalog.
Oct 11 03:36:01 np0005481065 systemd[1]: Started RPC Bind.
Oct 11 03:36:01 np0005481065 augenrules[705]: /sbin/augenrules: No change
Oct 11 03:36:01 np0005481065 augenrules[720]: No rules
Oct 11 03:36:01 np0005481065 augenrules[720]: enabled 1
Oct 11 03:36:01 np0005481065 augenrules[720]: failure 1
Oct 11 03:36:01 np0005481065 augenrules[720]: pid 700
Oct 11 03:36:01 np0005481065 augenrules[720]: rate_limit 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_limit 8192
Oct 11 03:36:01 np0005481065 augenrules[720]: lost 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog 3
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_wait_time 60000
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_wait_time_actual 0
Oct 11 03:36:01 np0005481065 augenrules[720]: enabled 1
Oct 11 03:36:01 np0005481065 augenrules[720]: failure 1
Oct 11 03:36:01 np0005481065 augenrules[720]: pid 700
Oct 11 03:36:01 np0005481065 augenrules[720]: rate_limit 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_limit 8192
Oct 11 03:36:01 np0005481065 augenrules[720]: lost 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_wait_time 60000
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_wait_time_actual 0
Oct 11 03:36:01 np0005481065 augenrules[720]: enabled 1
Oct 11 03:36:01 np0005481065 augenrules[720]: failure 1
Oct 11 03:36:01 np0005481065 augenrules[720]: pid 700
Oct 11 03:36:01 np0005481065 augenrules[720]: rate_limit 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_limit 8192
Oct 11 03:36:01 np0005481065 augenrules[720]: lost 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog 0
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_wait_time 60000
Oct 11 03:36:01 np0005481065 augenrules[720]: backlog_wait_time_actual 0
Oct 11 03:36:01 np0005481065 systemd[1]: Started Security Auditing Service.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Rebuild Hardware Database.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 03:36:01 np0005481065 systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 03:36:01 np0005481065 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Load Kernel Module configfs...
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 11 03:36:01 np0005481065 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 11 03:36:01 np0005481065 systemd[1]: Starting Update is Completed...
Oct 11 03:36:01 np0005481065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Load Kernel Module configfs.
Oct 11 03:36:01 np0005481065 systemd[1]: Finished Update is Completed.
Oct 11 03:36:01 np0005481065 systemd-udevd[756]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:36:01 np0005481065 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 11 03:36:01 np0005481065 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 11 03:36:01 np0005481065 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 11 03:36:01 np0005481065 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 11 03:36:02 np0005481065 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 11 03:36:02 np0005481065 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 11 03:36:02 np0005481065 kernel: Console: switching to colour dummy device 80x25
Oct 11 03:36:02 np0005481065 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 11 03:36:02 np0005481065 kernel: [drm] features: -context_init
Oct 11 03:36:02 np0005481065 kernel: [drm] number of scanouts: 1
Oct 11 03:36:02 np0005481065 kernel: [drm] number of cap sets: 0
Oct 11 03:36:02 np0005481065 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 11 03:36:02 np0005481065 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 11 03:36:02 np0005481065 kernel: Console: switching to colour frame buffer device 128x48
Oct 11 03:36:02 np0005481065 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 11 03:36:02 np0005481065 kernel: kvm_amd: TSC scaling supported
Oct 11 03:36:02 np0005481065 kernel: kvm_amd: Nested Virtualization enabled
Oct 11 03:36:02 np0005481065 kernel: kvm_amd: Nested Paging enabled
Oct 11 03:36:02 np0005481065 kernel: kvm_amd: LBR virtualization supported
Oct 11 03:36:02 np0005481065 systemd[1]: Reached target System Initialization.
Oct 11 03:36:02 np0005481065 systemd[1]: Started dnf makecache --timer.
Oct 11 03:36:02 np0005481065 systemd[1]: Started Daily rotation of log files.
Oct 11 03:36:02 np0005481065 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 11 03:36:02 np0005481065 systemd[1]: Reached target Timer Units.
Oct 11 03:36:02 np0005481065 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 11 03:36:02 np0005481065 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 11 03:36:02 np0005481065 systemd[1]: Reached target Socket Units.
Oct 11 03:36:02 np0005481065 systemd[1]: Starting D-Bus System Message Bus...
Oct 11 03:36:02 np0005481065 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:36:02 np0005481065 systemd[1]: Started D-Bus System Message Bus.
Oct 11 03:36:02 np0005481065 dbus-broker-lau[792]: Ready
Oct 11 03:36:02 np0005481065 systemd[1]: Reached target Basic System.
Oct 11 03:36:02 np0005481065 systemd[1]: Starting NTP client/server...
Oct 11 03:36:02 np0005481065 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 11 03:36:02 np0005481065 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 11 03:36:02 np0005481065 systemd[1]: Starting IPv4 firewall with iptables...
Oct 11 03:36:02 np0005481065 systemd[1]: Started irqbalance daemon.
Oct 11 03:36:02 np0005481065 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 11 03:36:02 np0005481065 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:36:02 np0005481065 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:36:02 np0005481065 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 03:36:02 np0005481065 systemd[1]: Reached target sshd-keygen.target.
Oct 11 03:36:02 np0005481065 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 11 03:36:02 np0005481065 systemd[1]: Reached target User and Group Name Lookups.
Oct 11 03:36:02 np0005481065 systemd[1]: Starting User Login Management...
Oct 11 03:36:02 np0005481065 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 11 03:36:02 np0005481065 chronyd[827]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 11 03:36:02 np0005481065 systemd-logind[819]: New seat seat0.
Oct 11 03:36:02 np0005481065 systemd-logind[819]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 03:36:02 np0005481065 systemd-logind[819]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 03:36:02 np0005481065 chronyd[827]: Loaded 0 symmetric keys
Oct 11 03:36:02 np0005481065 systemd[1]: Started User Login Management.
Oct 11 03:36:02 np0005481065 chronyd[827]: Using right/UTC timezone to obtain leap second data
Oct 11 03:36:02 np0005481065 chronyd[827]: Loaded seccomp filter (level 2)
Oct 11 03:36:02 np0005481065 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 11 03:36:02 np0005481065 systemd[1]: Started NTP client/server.
Oct 11 03:36:02 np0005481065 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 11 03:36:02 np0005481065 iptables.init[813]: iptables: Applying firewall rules: [  OK  ]
Oct 11 03:36:02 np0005481065 systemd[1]: Finished IPv4 firewall with iptables.
Oct 11 03:36:03 np0005481065 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 11 Oct 2025 07:36:02 +0000. Up 7.73 seconds.
Oct 11 03:36:03 np0005481065 systemd[1]: run-cloud\x2dinit-tmp-tmpfu2rqw6o.mount: Deactivated successfully.
Oct 11 03:36:03 np0005481065 systemd[1]: Starting Hostname Service...
Oct 11 03:36:03 np0005481065 systemd[1]: Started Hostname Service.
Oct 11 03:36:03 np0005481065 systemd-hostnamed[851]: Hostname set to <np0005481065.novalocal> (static)
Oct 11 03:36:03 np0005481065 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 11 03:36:03 np0005481065 systemd[1]: Reached target Preparation for Network.
Oct 11 03:36:03 np0005481065 systemd[1]: Starting Network Manager...
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6215] NetworkManager (version 1.54.1-1.el9) is starting... (boot:00f191b2-d1fe-48ff-9415-2594ae397b3d)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6219] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6429] manager[0x56368a97f080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6490] hostname: hostname: using hostnamed
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6491] hostname: static hostname changed from (none) to "np0005481065.novalocal"
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6496] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6599] manager[0x56368a97f080]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6599] manager[0x56368a97f080]: rfkill: WWAN hardware radio set enabled
Oct 11 03:36:03 np0005481065 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6694] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6695] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6696] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6696] manager: Networking is enabled by state file
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6698] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6742] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6770] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6796] dhcp: init: Using DHCP client 'internal'
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6800] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6816] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6829] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6837] device (lo): Activation: starting connection 'lo' (47a91fc4-8292-4f82-848b-b25d73b8c49f)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6847] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6851] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:36:03 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:36:03 np0005481065 systemd[1]: Started Network Manager.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6896] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6904] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:36:03 np0005481065 systemd[1]: Reached target Network.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6912] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6917] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6922] device (eth0): carrier: link connected
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6928] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6939] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6945] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6950] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6951] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6953] manager: NetworkManager state is now CONNECTING
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6955] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:36:03 np0005481065 systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6962] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.6965] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:36:03 np0005481065 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7030] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7037] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7061] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:36:03 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7173] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7175] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7181] device (lo): Activation: successful, device activated.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7203] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7205] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7210] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7214] device (eth0): Activation: successful, device activated.
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7222] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:36:03 np0005481065 NetworkManager[855]: <info>  [1760168163.7224] manager: startup complete
Oct 11 03:36:03 np0005481065 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 11 03:36:03 np0005481065 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 11 03:36:03 np0005481065 systemd[1]: Reached target NFS client services.
Oct 11 03:36:03 np0005481065 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 03:36:03 np0005481065 systemd[1]: Reached target Remote File Systems.
Oct 11 03:36:03 np0005481065 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 11 03:36:03 np0005481065 systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:36:03 np0005481065 systemd[1]: Starting Cloud-init: Network Stage...
Oct 11 03:36:04 np0005481065 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 11 Oct 2025 07:36:04 +0000. Up 8.83 seconds.
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.193         | 255.255.255.0 | global | fa:16:3e:54:37:ae |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe54:37ae/64 |       .       |  link  | fa:16:3e:54:37:ae |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 11 03:36:04 np0005481065 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 11 03:36:05 np0005481065 cloud-init[920]: Generating public/private rsa key pair.
Oct 11 03:36:05 np0005481065 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 11 03:36:05 np0005481065 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 11 03:36:05 np0005481065 cloud-init[920]: The key fingerprint is:
Oct 11 03:36:05 np0005481065 cloud-init[920]: SHA256:Bx+FILd/HqzuDWxcPk6TRJ1J72umBafpQFyTbL6eNHs root@np0005481065.novalocal
Oct 11 03:36:05 np0005481065 cloud-init[920]: The key's randomart image is:
Oct 11 03:36:05 np0005481065 cloud-init[920]: +---[RSA 3072]----+
Oct 11 03:36:05 np0005481065 cloud-init[920]: |      . o. .. .  |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |       o ....o.+ |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |        o . .*+ .|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         +.++ .. |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |        S +o*o ..|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         +.B o* .|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |          *.B* = |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         o ==oOE |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         .o o*.  |
Oct 11 03:36:05 np0005481065 cloud-init[920]: +----[SHA256]-----+
Oct 11 03:36:05 np0005481065 cloud-init[920]: Generating public/private ecdsa key pair.
Oct 11 03:36:05 np0005481065 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 11 03:36:05 np0005481065 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 11 03:36:05 np0005481065 cloud-init[920]: The key fingerprint is:
Oct 11 03:36:05 np0005481065 cloud-init[920]: SHA256:UgyM2PWBfcxorvuVONmth8ctz0kPYTjBcXZr5Nm0N0s root@np0005481065.novalocal
Oct 11 03:36:05 np0005481065 cloud-init[920]: The key's randomart image is:
Oct 11 03:36:05 np0005481065 cloud-init[920]: +---[ECDSA 256]---+
Oct 11 03:36:05 np0005481065 cloud-init[920]: |   o +oo.+  . o.o|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |  . o o++.+. +oo=|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |       o+.  o  Eo|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |       ..    oo +|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |      ..S   o o. |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |      .. + o o . |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |       .+ +o..o  |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |      .  o..=o.+ |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |       .. .o o+ .|
Oct 11 03:36:05 np0005481065 cloud-init[920]: +----[SHA256]-----+
Oct 11 03:36:05 np0005481065 cloud-init[920]: Generating public/private ed25519 key pair.
Oct 11 03:36:05 np0005481065 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 11 03:36:05 np0005481065 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 11 03:36:05 np0005481065 cloud-init[920]: The key fingerprint is:
Oct 11 03:36:05 np0005481065 cloud-init[920]: SHA256:9espfO/OBUwM1wZE+eswblJb2vgMp6zgFe+j35AXkM0 root@np0005481065.novalocal
Oct 11 03:36:05 np0005481065 cloud-init[920]: The key's randomart image is:
Oct 11 03:36:05 np0005481065 cloud-init[920]: +--[ED25519 256]--+
Oct 11 03:36:05 np0005481065 cloud-init[920]: |            .o=+ |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |             += o|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |          .  ooE |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         . . o. .|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |        S  .. o..|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |            o=.+.|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         o .+=Xo.|
Oct 11 03:36:05 np0005481065 cloud-init[920]: |        . =o+XO+ |
Oct 11 03:36:05 np0005481065 cloud-init[920]: |         . =OOB+ |
Oct 11 03:36:05 np0005481065 cloud-init[920]: +----[SHA256]-----+
Oct 11 03:36:05 np0005481065 systemd[1]: Finished Cloud-init: Network Stage.
Oct 11 03:36:05 np0005481065 systemd[1]: Reached target Cloud-config availability.
Oct 11 03:36:05 np0005481065 systemd[1]: Reached target Network is Online.
Oct 11 03:36:05 np0005481065 systemd[1]: Starting Cloud-init: Config Stage...
Oct 11 03:36:05 np0005481065 systemd[1]: Starting Notify NFS peers of a restart...
Oct 11 03:36:05 np0005481065 sm-notify[1002]: Version 2.5.4 starting
Oct 11 03:36:05 np0005481065 systemd[1]: Starting System Logging Service...
Oct 11 03:36:05 np0005481065 systemd[1]: Starting OpenSSH server daemon...
Oct 11 03:36:05 np0005481065 systemd[1]: Starting Permit User Sessions...
Oct 11 03:36:05 np0005481065 systemd[1]: Started Notify NFS peers of a restart.
Oct 11 03:36:05 np0005481065 systemd[1]: Finished Permit User Sessions.
Oct 11 03:36:05 np0005481065 systemd[1]: Started Command Scheduler.
Oct 11 03:36:05 np0005481065 systemd[1]: Started Getty on tty1.
Oct 11 03:36:05 np0005481065 systemd[1]: Started Serial Getty on ttyS0.
Oct 11 03:36:05 np0005481065 systemd[1]: Reached target Login Prompts.
Oct 11 03:36:05 np0005481065 systemd[1]: Started OpenSSH server daemon.
Oct 11 03:36:05 np0005481065 rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Oct 11 03:36:05 np0005481065 rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 11 03:36:05 np0005481065 systemd[1]: Started System Logging Service.
Oct 11 03:36:05 np0005481065 systemd[1]: Reached target Multi-User System.
Oct 11 03:36:05 np0005481065 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 11 03:36:05 np0005481065 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 11 03:36:05 np0005481065 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 11 03:36:05 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 03:36:05 np0005481065 cloud-init[1015]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 11 Oct 2025 07:36:05 +0000. Up 10.60 seconds.
Oct 11 03:36:05 np0005481065 systemd[1]: Finished Cloud-init: Config Stage.
Oct 11 03:36:05 np0005481065 systemd[1]: Starting Cloud-init: Final Stage...
Oct 11 03:36:06 np0005481065 cloud-init[1019]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 11 Oct 2025 07:36:06 +0000. Up 11.01 seconds.
Oct 11 03:36:06 np0005481065 cloud-init[1021]: #############################################################
Oct 11 03:36:06 np0005481065 cloud-init[1022]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 11 03:36:06 np0005481065 cloud-init[1024]: 256 SHA256:UgyM2PWBfcxorvuVONmth8ctz0kPYTjBcXZr5Nm0N0s root@np0005481065.novalocal (ECDSA)
Oct 11 03:36:06 np0005481065 cloud-init[1026]: 256 SHA256:9espfO/OBUwM1wZE+eswblJb2vgMp6zgFe+j35AXkM0 root@np0005481065.novalocal (ED25519)
Oct 11 03:36:06 np0005481065 cloud-init[1028]: 3072 SHA256:Bx+FILd/HqzuDWxcPk6TRJ1J72umBafpQFyTbL6eNHs root@np0005481065.novalocal (RSA)
Oct 11 03:36:06 np0005481065 cloud-init[1029]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 11 03:36:06 np0005481065 cloud-init[1030]: #############################################################
Oct 11 03:36:06 np0005481065 cloud-init[1019]: Cloud-init v. 24.4-7.el9 finished at Sat, 11 Oct 2025 07:36:06 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.20 seconds
Oct 11 03:36:06 np0005481065 systemd[1]: Finished Cloud-init: Final Stage.
Oct 11 03:36:06 np0005481065 systemd[1]: Reached target Cloud-init target.
Oct 11 03:36:06 np0005481065 systemd[1]: Startup finished in 1.673s (kernel) + 3.424s (initrd) + 6.162s (userspace) = 11.260s.
Oct 11 03:36:08 np0005481065 chronyd[827]: Selected source 54.39.196.172 (2.centos.pool.ntp.org)
Oct 11 03:36:08 np0005481065 chronyd[827]: System clock TAI offset set to 37 seconds
Oct 11 03:36:12 np0005481065 irqbalance[814]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 11 03:36:12 np0005481065 irqbalance[814]: IRQ 25 affinity is now unmanaged
Oct 11 03:36:12 np0005481065 irqbalance[814]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 11 03:36:12 np0005481065 irqbalance[814]: IRQ 31 affinity is now unmanaged
Oct 11 03:36:12 np0005481065 irqbalance[814]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 11 03:36:12 np0005481065 irqbalance[814]: IRQ 28 affinity is now unmanaged
Oct 11 03:36:12 np0005481065 irqbalance[814]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 11 03:36:12 np0005481065 irqbalance[814]: IRQ 32 affinity is now unmanaged
Oct 11 03:36:12 np0005481065 irqbalance[814]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 11 03:36:12 np0005481065 irqbalance[814]: IRQ 30 affinity is now unmanaged
Oct 11 03:36:12 np0005481065 irqbalance[814]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 11 03:36:12 np0005481065 irqbalance[814]: IRQ 29 affinity is now unmanaged
Oct 11 03:36:13 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:36:23 np0005481065 systemd[1]: Created slice User Slice of UID 1000.
Oct 11 03:36:23 np0005481065 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 11 03:36:23 np0005481065 systemd-logind[819]: New session 1 of user zuul.
Oct 11 03:36:23 np0005481065 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 11 03:36:23 np0005481065 systemd[1]: Starting User Manager for UID 1000...
Oct 11 03:36:23 np0005481065 systemd[1057]: Queued start job for default target Main User Target.
Oct 11 03:36:23 np0005481065 systemd[1057]: Created slice User Application Slice.
Oct 11 03:36:23 np0005481065 systemd[1057]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 03:36:23 np0005481065 systemd[1057]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 03:36:23 np0005481065 systemd[1057]: Reached target Paths.
Oct 11 03:36:23 np0005481065 systemd[1057]: Reached target Timers.
Oct 11 03:36:23 np0005481065 systemd[1057]: Starting D-Bus User Message Bus Socket...
Oct 11 03:36:23 np0005481065 systemd[1057]: Starting Create User's Volatile Files and Directories...
Oct 11 03:36:23 np0005481065 systemd[1057]: Listening on D-Bus User Message Bus Socket.
Oct 11 03:36:23 np0005481065 systemd[1057]: Finished Create User's Volatile Files and Directories.
Oct 11 03:36:23 np0005481065 systemd[1057]: Reached target Sockets.
Oct 11 03:36:23 np0005481065 systemd[1057]: Reached target Basic System.
Oct 11 03:36:23 np0005481065 systemd[1057]: Reached target Main User Target.
Oct 11 03:36:23 np0005481065 systemd[1057]: Startup finished in 141ms.
Oct 11 03:36:23 np0005481065 systemd[1]: Started User Manager for UID 1000.
Oct 11 03:36:23 np0005481065 systemd[1]: Started Session 1 of User zuul.
Oct 11 03:36:23 np0005481065 python3[1140]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:36:26 np0005481065 python3[1168]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:36:32 np0005481065 python3[1226]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:36:33 np0005481065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:36:33 np0005481065 python3[1266]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 11 03:36:35 np0005481065 python3[1294]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbxtri8d26Gm8Vvk7kjQUNX+k43xlHVhloKydfxd5b/vb+WLMVQhQhymY8Y/3zXO0xKIryYdPjOT9i8q7EnG3n/rRM/LT5N0hIV1auYOA6esSYPBjHKE1aTzkudO3idNsYk5/fBVStGR4HB8dkGdqFQhgON0iAK7q1NCWeZXB+D0E6mA3aYG/hrpIRAoJ0Wwa4vksoLkERmIIPsFWM8avGJ9MazR5B5zm+6JKrEzIerAUONug5sm4FEDiqLsfbpbGE9c8oOXqRlMSUnpGbYK98wtYFWN2j9FUzvQftYrPXOYnU0r82l8Q9SciSJZ1HedhugqBftEGzl+6QRakx6kpC5lQecl9TxLrJ2NI9cVB0G4NnEhIGqiIePExWovcKWJzqnQzaIx76Txufs5jqaErehE9BEivDDepfsERY+qYpfC2bcXZs7r5Xx/tIwOZkmYLd3iZi28Hz/HDu2vq+5gRKprxkIKBdmxoddIAyeHHg/HuTOg2QqLf0n5hgVEVN8ds= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:36 np0005481065 python3[1318]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:36 np0005481065 python3[1417]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:36:37 np0005481065 python3[1488]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760168196.4188383-207-237134992389789/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=07ab7c50f0ee44ea824b93a25c93d4dc_id_rsa follow=False checksum=4f1207c4e19dfb5547d6fe5e946e5744e28dc13a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:37 np0005481065 python3[1611]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:36:38 np0005481065 python3[1682]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760168197.4784977-240-90569846495069/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=07ab7c50f0ee44ea824b93a25c93d4dc_id_rsa.pub follow=False checksum=668a787e81011c452399f262857c0cd3aec3929f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:39 np0005481065 python3[1730]: ansible-ping Invoked with data=pong
Oct 11 03:36:40 np0005481065 python3[1754]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:36:42 np0005481065 python3[1812]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 11 03:36:43 np0005481065 python3[1844]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:43 np0005481065 python3[1868]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:43 np0005481065 python3[1892]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:44 np0005481065 python3[1916]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:44 np0005481065 python3[1940]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:44 np0005481065 python3[1964]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:46 np0005481065 python3[1990]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:47 np0005481065 python3[2068]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:36:47 np0005481065 python3[2141]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760168206.6645248-21-84169417327286/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:48 np0005481065 python3[2189]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:48 np0005481065 python3[2213]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:48 np0005481065 python3[2237]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:49 np0005481065 python3[2261]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:49 np0005481065 python3[2285]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:49 np0005481065 python3[2309]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:50 np0005481065 python3[2333]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:50 np0005481065 python3[2357]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:50 np0005481065 python3[2381]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:51 np0005481065 python3[2405]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:51 np0005481065 python3[2429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:51 np0005481065 python3[2453]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:52 np0005481065 python3[2477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:52 np0005481065 python3[2501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:52 np0005481065 python3[2525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:52 np0005481065 python3[2549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:53 np0005481065 python3[2573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:53 np0005481065 python3[2597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:53 np0005481065 python3[2621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:54 np0005481065 python3[2645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:54 np0005481065 python3[2669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:54 np0005481065 python3[2693]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:55 np0005481065 python3[2717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:55 np0005481065 python3[2741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:55 np0005481065 python3[2765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:55 np0005481065 python3[2789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:36:58 np0005481065 python3[2815]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 03:36:58 np0005481065 systemd[1]: Starting Time & Date Service...
Oct 11 03:36:58 np0005481065 systemd[1]: Started Time & Date Service.
Oct 11 03:36:58 np0005481065 systemd-timedated[2817]: Changed time zone to 'UTC' (UTC).
Oct 11 03:36:58 np0005481065 python3[2846]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:36:59 np0005481065 python3[2922]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:36:59 np0005481065 python3[2993]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760168218.8155613-153-224650037046680/source _original_basename=tmp2qahrrli follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:00 np0005481065 python3[3093]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:37:00 np0005481065 python3[3164]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760168219.763504-183-269838755506450/source _original_basename=tmpdmbymysv follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:01 np0005481065 python3[3266]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:37:01 np0005481065 python3[3339]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760168220.9462914-231-70918774790760/source _original_basename=tmpobaiovoy follow=False checksum=cd982ffd608592e8f819f22d6376b4402103f855 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:02 np0005481065 python3[3387]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:02 np0005481065 python3[3413]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:03 np0005481065 python3[3493]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:37:03 np0005481065 python3[3566]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760168222.7204664-273-210880370716487/source _original_basename=tmpelm0sntz follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:03 np0005481065 python3[3617]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-ca03-4524-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:37:04 np0005481065 python3[3645]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ca03-4524-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 11 03:37:05 np0005481065 python3[3673]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:24 np0005481065 python3[3699]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:37:28 np0005481065 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 11 03:37:57 np0005481065 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 11 03:37:57 np0005481065 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.2875] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:37:57 np0005481065 systemd-udevd[3702]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3098] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3126] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3130] device (eth1): carrier: link connected
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3132] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3137] policy: auto-activating connection 'Wired connection 1' (004e078f-b467-3fdf-a5ef-87235f94dee4)
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3142] device (eth1): Activation: starting connection 'Wired connection 1' (004e078f-b467-3fdf-a5ef-87235f94dee4)
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3143] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3146] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3151] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:37:57 np0005481065 NetworkManager[855]: <info>  [1760168277.3156] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:37:58 np0005481065 python3[3729]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-8267-288b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:08 np0005481065 python3[3809]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:38:08 np0005481065 python3[3882]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760168287.7628827-102-258460359190696/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=ac4afa79c292b8eabeb33c2ba67419664317bea4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:38:09 np0005481065 python3[3932]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 03:38:09 np0005481065 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 11 03:38:09 np0005481065 systemd[1]: Stopped Network Manager Wait Online.
Oct 11 03:38:09 np0005481065 systemd[1]: Stopping Network Manager Wait Online...
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5140] caught SIGTERM, shutting down normally.
Oct 11 03:38:09 np0005481065 systemd[1]: Stopping Network Manager...
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5153] dhcp4 (eth0): canceled DHCP transaction
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5153] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5154] dhcp4 (eth0): state changed no lease
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5157] manager: NetworkManager state is now CONNECTING
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5328] dhcp4 (eth1): canceled DHCP transaction
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5329] dhcp4 (eth1): state changed no lease
Oct 11 03:38:09 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:38:09 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:38:09 np0005481065 NetworkManager[855]: <info>  [1760168289.5706] exiting (success)
Oct 11 03:38:09 np0005481065 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 11 03:38:09 np0005481065 systemd[1]: Stopped Network Manager.
Oct 11 03:38:09 np0005481065 systemd[1]: Starting Network Manager...
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.6650] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:00f191b2-d1fe-48ff-9415-2594ae397b3d)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.6652] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.6723] manager[0x55bbda91a070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 03:38:09 np0005481065 systemd[1]: Starting Hostname Service...
Oct 11 03:38:09 np0005481065 systemd[1]: Started Hostname Service.
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7555] hostname: hostname: using hostnamed
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7558] hostname: static hostname changed from (none) to "np0005481065.novalocal"
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7565] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7573] manager[0x55bbda91a070]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7574] manager[0x55bbda91a070]: rfkill: WWAN hardware radio set enabled
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7626] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7626] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7627] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7628] manager: Networking is enabled by state file
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7632] settings: Loaded settings plugin: keyfile (internal)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7640] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7686] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7704] dhcp: init: Using DHCP client 'internal'
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7713] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7726] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7740] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7757] device (lo): Activation: starting connection 'lo' (47a91fc4-8292-4f82-848b-b25d73b8c49f)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7769] device (eth0): carrier: link connected
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7776] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7784] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7785] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7797] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7808] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7819] device (eth1): carrier: link connected
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7825] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7835] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (004e078f-b467-3fdf-a5ef-87235f94dee4) (indicated)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7835] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7845] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7859] device (eth1): Activation: starting connection 'Wired connection 1' (004e078f-b467-3fdf-a5ef-87235f94dee4)
Oct 11 03:38:09 np0005481065 systemd[1]: Started Network Manager.
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7870] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7880] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7883] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7887] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7893] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7900] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7904] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7909] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7920] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7942] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7947] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7959] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7962] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7978] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7983] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.7989] device (lo): Activation: successful, device activated.
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.8003] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Oct 11 03:38:09 np0005481065 systemd[1]: Starting Network Manager Wait Online...
Oct 11 03:38:09 np0005481065 NetworkManager[3949]: <info>  [1760168289.8026] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 03:38:10 np0005481065 NetworkManager[3949]: <info>  [1760168290.0045] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:38:10 np0005481065 NetworkManager[3949]: <info>  [1760168290.0111] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:38:10 np0005481065 NetworkManager[3949]: <info>  [1760168290.0113] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:38:10 np0005481065 NetworkManager[3949]: <info>  [1760168290.0117] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 03:38:10 np0005481065 NetworkManager[3949]: <info>  [1760168290.0121] device (eth0): Activation: successful, device activated.
Oct 11 03:38:10 np0005481065 NetworkManager[3949]: <info>  [1760168290.0126] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 03:38:10 np0005481065 python3[4004]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-8267-288b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:38:20 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:38:39 np0005481065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2410] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 03:38:55 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:38:55 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2662] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2663] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2667] device (eth1): Activation: successful, device activated.
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2671] manager: startup complete
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2672] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <warn>  [1760168335.2675] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2680] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 systemd[1]: Finished Network Manager Wait Online.
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2778] dhcp4 (eth1): canceled DHCP transaction
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2778] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2779] dhcp4 (eth1): state changed no lease
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2788] policy: auto-activating connection 'ci-private-network' (1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5)
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2791] device (eth1): Activation: starting connection 'ci-private-network' (1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5)
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2791] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2793] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2797] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2802] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2863] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2865] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 03:38:55 np0005481065 NetworkManager[3949]: <info>  [1760168335.2869] device (eth1): Activation: successful, device activated.
Oct 11 03:39:05 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:39:09 np0005481065 systemd[1057]: Starting Mark boot as successful...
Oct 11 03:39:09 np0005481065 systemd[1057]: Finished Mark boot as successful.
Oct 11 03:39:10 np0005481065 systemd-logind[819]: Session 1 logged out. Waiting for processes to exit.
Oct 11 03:39:11 np0005481065 systemd-logind[819]: New session 3 of user zuul.
Oct 11 03:39:11 np0005481065 systemd[1]: Started Session 3 of User zuul.
Oct 11 03:39:12 np0005481065 python3[4126]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:39:12 np0005481065 python3[4199]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760168352.0678923-267-129935788117603/source _original_basename=tmpundjy2yp follow=False checksum=aa7c8731fdb730bcb670c68e21a8c7ae23fc1714 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:39:15 np0005481065 systemd[1]: session-3.scope: Deactivated successfully.
Oct 11 03:39:15 np0005481065 systemd-logind[819]: Session 3 logged out. Waiting for processes to exit.
Oct 11 03:39:15 np0005481065 systemd-logind[819]: Removed session 3.
Oct 11 03:42:09 np0005481065 systemd[1057]: Created slice User Background Tasks Slice.
Oct 11 03:42:09 np0005481065 systemd[1057]: Starting Cleanup of User's Temporary Files and Directories...
Oct 11 03:42:09 np0005481065 systemd[1057]: Finished Cleanup of User's Temporary Files and Directories.
Oct 11 03:44:33 np0005481065 systemd-logind[819]: New session 4 of user zuul.
Oct 11 03:44:33 np0005481065 systemd[1]: Started Session 4 of User zuul.
Oct 11 03:44:34 np0005481065 python3[4257]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-3bcb-db51-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:34 np0005481065 python3[4285]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:34 np0005481065 python3[4312]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:34 np0005481065 python3[4338]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:35 np0005481065 python3[4364]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:35 np0005481065 python3[4390]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:44:35 np0005481065 python3[4390]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 11 03:44:36 np0005481065 python3[4416]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 03:44:36 np0005481065 systemd[1]: Reloading.
Oct 11 03:44:36 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:44:38 np0005481065 python3[4473]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 11 03:44:38 np0005481065 python3[4499]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:38 np0005481065 python3[4527]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:39 np0005481065 python3[4555]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:39 np0005481065 python3[4583]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:39 np0005481065 python3[4610]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-3bcb-db51-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:44:40 np0005481065 python3[4640]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 03:44:42 np0005481065 irqbalance[814]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 11 03:44:42 np0005481065 irqbalance[814]: IRQ 26 affinity is now unmanaged
Oct 11 03:44:42 np0005481065 systemd[1]: session-4.scope: Deactivated successfully.
Oct 11 03:44:42 np0005481065 systemd[1]: session-4.scope: Consumed 3.694s CPU time.
Oct 11 03:44:42 np0005481065 systemd-logind[819]: Session 4 logged out. Waiting for processes to exit.
Oct 11 03:44:42 np0005481065 systemd-logind[819]: Removed session 4.
Oct 11 03:44:44 np0005481065 systemd-logind[819]: New session 5 of user zuul.
Oct 11 03:44:44 np0005481065 systemd[1]: Started Session 5 of User zuul.
Oct 11 03:44:44 np0005481065 python3[4674]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 03:44:59 np0005481065 kernel: SELinux:  Converting 363 SID table entries...
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:45:00 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:45:09 np0005481065 kernel: SELinux:  Converting 363 SID table entries...
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:45:09 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:45:18 np0005481065 kernel: SELinux:  Converting 363 SID table entries...
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:45:18 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:45:19 np0005481065 setsebool[4741]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 11 03:45:19 np0005481065 setsebool[4741]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 11 03:45:30 np0005481065 kernel: SELinux:  Converting 366 SID table entries...
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 03:45:30 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 03:45:49 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 11 03:45:49 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 03:45:49 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 03:45:49 np0005481065 systemd[1]: Reloading.
Oct 11 03:45:49 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 03:45:49 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 03:45:50 np0005481065 systemd[1]: Starting PackageKit Daemon...
Oct 11 03:45:51 np0005481065 systemd[1]: Starting Authorization Manager...
Oct 11 03:45:51 np0005481065 polkitd[6218]: Started polkitd version 0.117
Oct 11 03:45:51 np0005481065 systemd[1]: Started Authorization Manager.
Oct 11 03:45:51 np0005481065 systemd[1]: Started PackageKit Daemon.
Oct 11 03:45:59 np0005481065 python3[10898]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8b83-6589-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:46:00 np0005481065 kernel: evm: overlay not supported
Oct 11 03:46:00 np0005481065 systemd[1057]: Starting D-Bus User Message Bus...
Oct 11 03:46:00 np0005481065 dbus-broker-launch[11474]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 11 03:46:00 np0005481065 dbus-broker-launch[11474]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 11 03:46:00 np0005481065 systemd[1057]: Started D-Bus User Message Bus.
Oct 11 03:46:00 np0005481065 dbus-broker-lau[11474]: Ready
Oct 11 03:46:00 np0005481065 systemd[1057]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 11 03:46:00 np0005481065 systemd[1057]: Created slice Slice /user.
Oct 11 03:46:00 np0005481065 systemd[1057]: podman-11273.scope: unit configures an IP firewall, but not running as root.
Oct 11 03:46:00 np0005481065 systemd[1057]: (This warning is only shown for the first unit using IP firewalling.)
Oct 11 03:46:00 np0005481065 systemd[1057]: Started podman-11273.scope.
Oct 11 03:46:00 np0005481065 systemd[1057]: Started podman-pause-3da4196d.scope.
Oct 11 03:46:01 np0005481065 python3[11775]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.51:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.51:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:01 np0005481065 systemd[1]: session-5.scope: Deactivated successfully.
Oct 11 03:46:01 np0005481065 systemd[1]: session-5.scope: Consumed 1min 223ms CPU time.
Oct 11 03:46:01 np0005481065 systemd-logind[819]: Session 5 logged out. Waiting for processes to exit.
Oct 11 03:46:01 np0005481065 systemd-logind[819]: Removed session 5.
Oct 11 03:46:22 np0005481065 irqbalance[814]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 11 03:46:22 np0005481065 irqbalance[814]: IRQ 27 affinity is now unmanaged
Oct 11 03:46:25 np0005481065 systemd-logind[819]: New session 6 of user zuul.
Oct 11 03:46:25 np0005481065 systemd[1]: Started Session 6 of User zuul.
Oct 11 03:46:25 np0005481065 python3[19417]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHqu03VXd+92X8/v2TqsnwJtAc6uK44GAED9vFnV6v3MaXrY6Km8zKymKUuKLiTcuwq3YMeVoCl1pl/dfawu91s= zuul@np0005481064.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:46:26 np0005481065 python3[19503]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHqu03VXd+92X8/v2TqsnwJtAc6uK44GAED9vFnV6v3MaXrY6Km8zKymKUuKLiTcuwq3YMeVoCl1pl/dfawu91s= zuul@np0005481064.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:46:26 np0005481065 python3[19704]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005481065.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 11 03:46:27 np0005481065 python3[19947]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHqu03VXd+92X8/v2TqsnwJtAc6uK44GAED9vFnV6v3MaXrY6Km8zKymKUuKLiTcuwq3YMeVoCl1pl/dfawu91s= zuul@np0005481064.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 11 03:46:27 np0005481065 python3[20153]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:46:28 np0005481065 python3[20338]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760168787.4515848-135-82364241589652/source _original_basename=tmps9bpokj3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:46:29 np0005481065 python3[20551]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 11 03:46:29 np0005481065 systemd[1]: Starting Hostname Service...
Oct 11 03:46:29 np0005481065 systemd[1]: Started Hostname Service.
Oct 11 03:46:29 np0005481065 systemd-hostnamed[20671]: Changed pretty hostname to 'compute-0'
Oct 11 03:46:29 np0005481065 systemd-hostnamed[20671]: Hostname set to <compute-0> (static)
Oct 11 03:46:29 np0005481065 NetworkManager[3949]: <info>  [1760168789.3122] hostname: static hostname changed from "np0005481065.novalocal" to "compute-0"
Oct 11 03:46:29 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 03:46:29 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 03:46:29 np0005481065 systemd[1]: session-6.scope: Deactivated successfully.
Oct 11 03:46:29 np0005481065 systemd[1]: session-6.scope: Consumed 2.473s CPU time.
Oct 11 03:46:29 np0005481065 systemd-logind[819]: Session 6 logged out. Waiting for processes to exit.
Oct 11 03:46:29 np0005481065 systemd-logind[819]: Removed session 6.
Oct 11 03:46:39 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 03:46:55 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 03:46:55 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 03:46:55 np0005481065 systemd[1]: man-db-cache-update.service: Consumed 1min 2.504s CPU time.
Oct 11 03:46:55 np0005481065 systemd[1]: run-r4e4d61f9853e413c91782db8263a6cdc.service: Deactivated successfully.
Oct 11 03:46:59 np0005481065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 03:50:07 np0005481065 systemd-logind[819]: New session 7 of user zuul.
Oct 11 03:50:07 np0005481065 systemd[1]: Started Session 7 of User zuul.
Oct 11 03:50:08 np0005481065 python3[26617]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 03:50:10 np0005481065 python3[26733]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:10 np0005481065 python3[26806]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=delorean.repo follow=False checksum=f3fabc627b4c59ab3d10213193ffdeeed080e354 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:10 np0005481065 python3[26832]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:11 np0005481065 python3[26905]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:11 np0005481065 python3[26931]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:11 np0005481065 python3[27004]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:12 np0005481065 python3[27030]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:12 np0005481065 python3[27103]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:12 np0005481065 python3[27129]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:13 np0005481065 python3[27202]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:13 np0005481065 python3[27228]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:13 np0005481065 python3[27301]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:14 np0005481065 python3[27327]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 03:50:14 np0005481065 python3[27400]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760169009.7574925-30191-231745672646464/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=5e44558a2b46929660a6b5bfc8824fb4521580a4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 03:50:29 np0005481065 python3[27458]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 03:50:56 np0005481065 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 11 03:50:56 np0005481065 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 03:50:56 np0005481065 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 11 03:50:56 np0005481065 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 11 03:50:56 np0005481065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 11 03:55:09 np0005481065 systemd[1]: Starting dnf makecache...
Oct 11 03:55:09 np0005481065 dnf[27465]: Failed determining last makecache time.
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-barbican-42b4c41831408a8e323 323 kB/s |  13 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.5 MB/s |  65 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-stevedore-c4acc5639fd2329372142 4.8 MB/s | 131 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-observabilityclient-2f31846d73c 969 kB/s |  25 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-diskimage-builder-7d793e664cf892461c55  10 MB/s | 356 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.8 MB/s |  42 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-designate-tests-tempest-347fdbc 648 kB/s |  18 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-glance-1fd12c29b339f30fe823e 855 kB/s |  18 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.2 MB/s |  29 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-manila-3c01b7181572c95dac462 1.1 MB/s |  25 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-vmware-nsxlib-458234972d1428ac9 5.4 MB/s | 154 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-octavia-ba397f07a7331190208c 1.0 MB/s |  26 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-watcher-c014f81a8647287f6dcc 706 kB/s |  16 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d3 287 kB/s | 7.4 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-puppet-ceph-91ba84bc002c318a7f961d084e 5.3 MB/s | 144 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-swift-dc98a8463506ac520c469a 535 kB/s |  14 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-python-tempestconf-8515371b7cceebd4282 2.4 MB/s |  53 kB     00:00
Oct 11 03:55:10 np0005481065 dnf[27465]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.3 MB/s |  96 kB     00:00
Oct 11 03:55:11 np0005481065 dnf[27465]: CentOS Stream 9 - BaseOS                         45 kB/s | 6.7 kB     00:00
Oct 11 03:55:11 np0005481065 dnf[27465]: CentOS Stream 9 - AppStream                      25 kB/s | 6.8 kB     00:00
Oct 11 03:55:11 np0005481065 dnf[27465]: CentOS Stream 9 - CRB                            23 kB/s | 6.6 kB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: CentOS Stream 9 - Extras packages                64 kB/s | 8.0 kB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: dlrn-antelope-testing                            25 MB/s | 1.1 MB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: dlrn-antelope-build-deps                         17 MB/s | 461 kB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: centos9-rabbitmq                                7.3 MB/s | 123 kB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: centos9-storage                                  20 MB/s | 415 kB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: centos9-opstools                                3.9 MB/s |  51 kB     00:00
Oct 11 03:55:12 np0005481065 dnf[27465]: NFV SIG OpenvSwitch                              28 MB/s | 449 kB     00:00
Oct 11 03:55:13 np0005481065 dnf[27465]: repo-setup-centos-appstream                     111 MB/s |  25 MB     00:00
Oct 11 03:55:19 np0005481065 dnf[27465]: repo-setup-centos-baseos                         94 MB/s | 8.8 MB     00:00
Oct 11 03:55:20 np0005481065 dnf[27465]: repo-setup-centos-highavailability               35 MB/s | 744 kB     00:00
Oct 11 03:55:20 np0005481065 dnf[27465]: repo-setup-centos-powertools                    101 MB/s | 7.2 MB     00:00
Oct 11 03:55:22 np0005481065 dnf[27465]: Extra Packages for Enterprise Linux 9 - x86_64   62 MB/s |  20 MB     00:00
Oct 11 03:55:28 np0005481065 systemd-logind[819]: Session 7 logged out. Waiting for processes to exit.
Oct 11 03:55:28 np0005481065 systemd[1]: session-7.scope: Deactivated successfully.
Oct 11 03:55:28 np0005481065 systemd[1]: session-7.scope: Consumed 5.197s CPU time.
Oct 11 03:55:28 np0005481065 systemd-logind[819]: Removed session 7.
Oct 11 03:55:35 np0005481065 dnf[27465]: Metadata cache created.
Oct 11 03:55:35 np0005481065 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 11 03:55:35 np0005481065 systemd[1]: Finished dnf makecache.
Oct 11 03:55:35 np0005481065 systemd[1]: dnf-makecache.service: Consumed 23.755s CPU time.
Oct 11 04:00:38 np0005481065 systemd-logind[819]: New session 8 of user zuul.
Oct 11 04:00:38 np0005481065 systemd[1]: Started Session 8 of User zuul.
Oct 11 04:00:39 np0005481065 python3.9[27722]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:00:40 np0005481065 python3.9[27903]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:00:48 np0005481065 systemd[1]: session-8.scope: Deactivated successfully.
Oct 11 04:00:48 np0005481065 systemd[1]: session-8.scope: Consumed 8.020s CPU time.
Oct 11 04:00:48 np0005481065 systemd-logind[819]: Session 8 logged out. Waiting for processes to exit.
Oct 11 04:00:48 np0005481065 systemd-logind[819]: Removed session 8.
Oct 11 04:01:03 np0005481065 systemd-logind[819]: New session 9 of user zuul.
Oct 11 04:01:03 np0005481065 systemd[1]: Started Session 9 of User zuul.
Oct 11 04:01:04 np0005481065 python3.9[28128]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 11 04:01:06 np0005481065 python3.9[28302]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:01:07 np0005481065 python3.9[28454]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:01:09 np0005481065 python3.9[28607]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:01:09 np0005481065 python3.9[28759]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:01:10 np0005481065 python3.9[28911]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:01:11 np0005481065 python3.9[29034]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169670.195212-73-189641308741838/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:01:12 np0005481065 python3.9[29186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:01:13 np0005481065 python3.9[29342]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:01:14 np0005481065 python3.9[29492]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:01:19 np0005481065 python3.9[29747]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:01:20 np0005481065 python3.9[29897]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:01:21 np0005481065 python3.9[30051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:01:22 np0005481065 python3.9[30209]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:01:23 np0005481065 python3.9[30293]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:02:05 np0005481065 systemd[1]: Reloading.
Oct 11 04:02:05 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:02:06 np0005481065 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 11 04:02:06 np0005481065 systemd[1]: Reloading.
Oct 11 04:02:06 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:02:06 np0005481065 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 11 04:02:06 np0005481065 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 11 04:02:06 np0005481065 systemd[1]: Reloading.
Oct 11 04:02:06 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:02:07 np0005481065 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 11 04:02:07 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:02:07 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:02:07 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:03:08 np0005481065 kernel: SELinux:  Converting 2715 SID table entries...
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:03:08 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:03:08 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 11 04:03:08 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:03:08 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:03:08 np0005481065 systemd[1]: Reloading.
Oct 11 04:03:08 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:03:08 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:03:09 np0005481065 systemd[1]: Starting PackageKit Daemon...
Oct 11 04:03:09 np0005481065 systemd[1]: Started PackageKit Daemon.
Oct 11 04:03:09 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:03:09 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:03:09 np0005481065 systemd[1]: man-db-cache-update.service: Consumed 1.355s CPU time.
Oct 11 04:03:09 np0005481065 systemd[1]: run-rf030721605d149849c356e20a579dc84.service: Deactivated successfully.
Oct 11 04:03:10 np0005481065 python3.9[31796]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:03:12 np0005481065 python3.9[32077]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 11 04:03:13 np0005481065 python3.9[32229]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 11 04:03:15 np0005481065 python3.9[32382]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:03:16 np0005481065 python3.9[32534]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 11 04:03:17 np0005481065 python3.9[32686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:03:18 np0005481065 python3.9[32838]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:03:19 np0005481065 python3.9[32961]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760169797.9240384-227-268155179682982/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2350fec01ba4be1fc9481509a02f1bc685e36cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:03:22 np0005481065 python3.9[33114]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 11 04:03:23 np0005481065 python3.9[33267]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:03:23 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:03:24 np0005481065 python3.9[33426]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:03:25 np0005481065 python3.9[33586]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 11 04:03:26 np0005481065 python3.9[33739]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:03:27 np0005481065 python3.9[33897]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 11 04:03:28 np0005481065 python3.9[34049]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:03:30 np0005481065 python3.9[34202]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:03:31 np0005481065 python3.9[34354]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:03:31 np0005481065 python3.9[34477]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760169810.500942-322-5038359438252/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:03:32 np0005481065 python3.9[34629]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:03:32 np0005481065 systemd[1]: Starting Load Kernel Modules...
Oct 11 04:03:32 np0005481065 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 11 04:03:32 np0005481065 kernel: Bridge firewalling registered
Oct 11 04:03:32 np0005481065 systemd-modules-load[34633]: Inserted module 'br_netfilter'
Oct 11 04:03:32 np0005481065 systemd[1]: Finished Load Kernel Modules.
Oct 11 04:03:33 np0005481065 python3.9[34789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:03:34 np0005481065 python3.9[34912]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760169813.1504605-345-179911610806428/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:03:35 np0005481065 python3.9[35064]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:03:39 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:03:39 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:03:39 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:03:39 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:03:40 np0005481065 systemd[1]: Reloading.
Oct 11 04:03:40 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:03:40 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:03:41 np0005481065 python3.9[36273]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:03:42 np0005481065 python3.9[37282]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 11 04:03:43 np0005481065 python3.9[38016]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:03:43 np0005481065 python3.9[38879]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:03:44 np0005481065 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 04:03:44 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:03:44 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:03:44 np0005481065 systemd[1]: man-db-cache-update.service: Consumed 5.565s CPU time.
Oct 11 04:03:44 np0005481065 systemd[1]: run-r7a027c891c76444aa426e067880fdac5.service: Deactivated successfully.
Oct 11 04:03:44 np0005481065 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 04:03:45 np0005481065 python3.9[39610]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:03:45 np0005481065 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 11 04:03:45 np0005481065 systemd[1]: tuned.service: Deactivated successfully.
Oct 11 04:03:45 np0005481065 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 11 04:03:45 np0005481065 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 04:03:45 np0005481065 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 04:03:46 np0005481065 python3.9[39772]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 11 04:03:48 np0005481065 python3.9[39924]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:03:48 np0005481065 systemd[1]: Reloading.
Oct 11 04:03:49 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:03:50 np0005481065 python3.9[40114]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:03:50 np0005481065 systemd[1]: Reloading.
Oct 11 04:03:50 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:03:51 np0005481065 python3.9[40303]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:03:51 np0005481065 python3.9[40456]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:03:51 np0005481065 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 11 04:03:52 np0005481065 python3.9[40609]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:03:54 np0005481065 python3.9[40771]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:03:55 np0005481065 python3.9[40924]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:03:55 np0005481065 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 04:03:55 np0005481065 systemd[1]: Stopped Apply Kernel Variables.
Oct 11 04:03:55 np0005481065 systemd[1]: Stopping Apply Kernel Variables...
Oct 11 04:03:55 np0005481065 systemd[1]: Starting Apply Kernel Variables...
Oct 11 04:03:55 np0005481065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 04:03:55 np0005481065 systemd[1]: Finished Apply Kernel Variables.
Oct 11 04:03:56 np0005481065 systemd-logind[819]: Session 9 logged out. Waiting for processes to exit.
Oct 11 04:03:56 np0005481065 systemd[1]: session-9.scope: Deactivated successfully.
Oct 11 04:03:56 np0005481065 systemd[1]: session-9.scope: Consumed 2min 15.549s CPU time.
Oct 11 04:03:56 np0005481065 systemd-logind[819]: Removed session 9.
Oct 11 04:04:01 np0005481065 systemd-logind[819]: New session 10 of user zuul.
Oct 11 04:04:01 np0005481065 systemd[1]: Started Session 10 of User zuul.
Oct 11 04:04:02 np0005481065 python3.9[41108]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:04:04 np0005481065 python3.9[41264]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 11 04:04:05 np0005481065 python3.9[41417]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:04:06 np0005481065 python3.9[41575]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:04:07 np0005481065 python3.9[41735]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:04:08 np0005481065 python3.9[41819]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:04:11 np0005481065 python3.9[41982]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:04:21 np0005481065 kernel: SELinux:  Converting 2725 SID table entries...
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:04:21 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:04:22 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 11 04:04:22 np0005481065 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 11 04:04:23 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:04:23 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:04:24 np0005481065 systemd[1]: Reloading.
Oct 11 04:04:24 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:04:24 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:04:24 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:04:24 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:04:24 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:04:24 np0005481065 systemd[1]: man-db-cache-update.service: Consumed 1.131s CPU time.
Oct 11 04:04:24 np0005481065 systemd[1]: run-rb6630fa5a9cb4451b21a9f12345d8b98.service: Deactivated successfully.
Oct 11 04:04:25 np0005481065 python3.9[43083]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:04:26 np0005481065 systemd[1]: Reloading.
Oct 11 04:04:26 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:04:26 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:04:26 np0005481065 systemd[1]: Starting Open vSwitch Database Unit...
Oct 11 04:04:26 np0005481065 chown[43125]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 11 04:04:26 np0005481065 ovs-ctl[43130]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 11 04:04:26 np0005481065 ovs-ctl[43130]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 11 04:04:26 np0005481065 ovs-ctl[43130]: Starting ovsdb-server [  OK  ]
Oct 11 04:04:26 np0005481065 ovs-vsctl[43179]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 11 04:04:26 np0005481065 ovs-vsctl[43198]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bec9a5e2-82b0-42f0-811d-08d245f1dc66\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 11 04:04:26 np0005481065 ovs-ctl[43130]: Configuring Open vSwitch system IDs [  OK  ]
Oct 11 04:04:26 np0005481065 ovs-vsctl[43204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 11 04:04:26 np0005481065 ovs-ctl[43130]: Enabling remote OVSDB managers [  OK  ]
Oct 11 04:04:26 np0005481065 systemd[1]: Started Open vSwitch Database Unit.
Oct 11 04:04:26 np0005481065 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 11 04:04:26 np0005481065 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 11 04:04:26 np0005481065 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 11 04:04:26 np0005481065 kernel: openvswitch: Open vSwitch switching datapath
Oct 11 04:04:26 np0005481065 ovs-ctl[43248]: Inserting openvswitch module [  OK  ]
Oct 11 04:04:27 np0005481065 ovs-ctl[43217]: Starting ovs-vswitchd [  OK  ]
Oct 11 04:04:27 np0005481065 ovs-vsctl[43268]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 11 04:04:27 np0005481065 ovs-ctl[43217]: Enabling remote OVSDB managers [  OK  ]
Oct 11 04:04:27 np0005481065 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 11 04:04:27 np0005481065 systemd[1]: Starting Open vSwitch...
Oct 11 04:04:27 np0005481065 systemd[1]: Finished Open vSwitch.
Oct 11 04:04:28 np0005481065 python3.9[43420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:04:29 np0005481065 python3.9[43572]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 11 04:04:30 np0005481065 kernel: SELinux:  Converting 2739 SID table entries...
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:04:30 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:04:31 np0005481065 python3.9[43727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:04:32 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 11 04:04:32 np0005481065 python3.9[43885]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:04:34 np0005481065 python3.9[44038]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:04:36 np0005481065 python3.9[44326]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:04:37 np0005481065 python3.9[44476]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:04:37 np0005481065 python3.9[44630]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:04:39 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:04:39 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:04:39 np0005481065 systemd[1]: Reloading.
Oct 11 04:04:39 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:04:39 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:04:39 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:04:40 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:04:40 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:04:40 np0005481065 systemd[1]: run-r5ffdabe158d24b90bb32bc67409bb47e.service: Deactivated successfully.
Oct 11 04:04:41 np0005481065 python3.9[44947]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:04:41 np0005481065 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 11 04:04:41 np0005481065 systemd[1]: Stopped Network Manager Wait Online.
Oct 11 04:04:41 np0005481065 systemd[1]: Stopping Network Manager Wait Online...
Oct 11 04:04:41 np0005481065 systemd[1]: Stopping Network Manager...
Oct 11 04:04:41 np0005481065 NetworkManager[3949]: <info>  [1760169881.1098] caught SIGTERM, shutting down normally.
Oct 11 04:04:41 np0005481065 NetworkManager[3949]: <info>  [1760169881.1113] dhcp4 (eth0): canceled DHCP transaction
Oct 11 04:04:41 np0005481065 NetworkManager[3949]: <info>  [1760169881.1113] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 04:04:41 np0005481065 NetworkManager[3949]: <info>  [1760169881.1113] dhcp4 (eth0): state changed no lease
Oct 11 04:04:41 np0005481065 NetworkManager[3949]: <info>  [1760169881.1116] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 04:04:41 np0005481065 NetworkManager[3949]: <info>  [1760169881.1170] exiting (success)
Oct 11 04:04:41 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:04:41 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:04:41 np0005481065 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 11 04:04:41 np0005481065 systemd[1]: Stopped Network Manager.
Oct 11 04:04:41 np0005481065 systemd[1]: NetworkManager.service: Consumed 9.332s CPU time, 4.3M memory peak, read 0B from disk, written 27.0K to disk.
Oct 11 04:04:41 np0005481065 systemd[1]: Starting Network Manager...
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.2031] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:00f191b2-d1fe-48ff-9415-2594ae397b3d)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.2034] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.2102] manager[0x55b174c98090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 11 04:04:41 np0005481065 systemd[1]: Starting Hostname Service...
Oct 11 04:04:41 np0005481065 systemd[1]: Started Hostname Service.
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3189] hostname: hostname: using hostnamed
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3192] hostname: static hostname changed from (none) to "compute-0"
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3199] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3203] manager[0x55b174c98090]: rfkill: Wi-Fi hardware radio set enabled
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3204] manager[0x55b174c98090]: rfkill: WWAN hardware radio set enabled
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3230] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3240] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3240] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3241] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3241] manager: Networking is enabled by state file
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3244] settings: Loaded settings plugin: keyfile (internal)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3248] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3279] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3289] dhcp: init: Using DHCP client 'internal'
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3292] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3298] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3304] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3312] device (lo): Activation: starting connection 'lo' (47a91fc4-8292-4f82-848b-b25d73b8c49f)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3319] device (eth0): carrier: link connected
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3323] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3328] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3329] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3337] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3344] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3350] device (eth1): carrier: link connected
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3354] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3359] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5) (indicated)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3359] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3365] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3373] device (eth1): Activation: starting connection 'ci-private-network' (1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5)
Oct 11 04:04:41 np0005481065 systemd[1]: Started Network Manager.
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3381] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3388] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3396] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3399] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3401] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3404] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3406] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3408] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3413] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3419] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3421] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3436] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3454] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3462] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3464] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3467] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3473] device (lo): Activation: successful, device activated.
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3486] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 11 04:04:41 np0005481065 systemd[1]: Starting Network Manager Wait Online...
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3557] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3565] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3567] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3572] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3576] device (eth1): Activation: successful, device activated.
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3599] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3601] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3604] manager: NetworkManager state is now CONNECTED_SITE
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3609] device (eth0): Activation: successful, device activated.
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3615] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 11 04:04:41 np0005481065 NetworkManager[44960]: <info>  [1760169881.3618] manager: startup complete
Oct 11 04:04:41 np0005481065 systemd[1]: Finished Network Manager Wait Online.
Oct 11 04:04:42 np0005481065 python3.9[45173]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:04:47 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:04:47 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:04:47 np0005481065 systemd[1]: Reloading.
Oct 11 04:04:47 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:04:47 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:04:47 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:04:48 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:04:48 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:04:48 np0005481065 systemd[1]: run-r9cda22760f45457492594b5b3011ee37.service: Deactivated successfully.
Oct 11 04:04:49 np0005481065 python3.9[45638]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:04:50 np0005481065 python3.9[45790]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:51 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:04:51 np0005481065 python3.9[45944]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:52 np0005481065 python3.9[46096]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:53 np0005481065 python3.9[46248]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:54 np0005481065 python3.9[46400]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:54 np0005481065 python3.9[46552]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:04:55 np0005481065 python3.9[46675]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169894.4155316-229-66852664376189/.source _original_basename=.um4rtwcg follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:56 np0005481065 python3.9[46827]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:04:57 np0005481065 python3.9[46979]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 11 04:04:58 np0005481065 python3.9[47131]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:05:00 np0005481065 python3.9[47558]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 11 04:05:01 np0005481065 ansible-async_wrapper.py[47733]: Invoked with j308668699553 300 /home/zuul/.ansible/tmp/ansible-tmp-1760169900.986755-295-85347786859328/AnsiballZ_edpm_os_net_config.py _
Oct 11 04:05:01 np0005481065 ansible-async_wrapper.py[47736]: Starting module and watcher
Oct 11 04:05:01 np0005481065 ansible-async_wrapper.py[47736]: Start watching 47737 (300)
Oct 11 04:05:01 np0005481065 ansible-async_wrapper.py[47737]: Start module (47737)
Oct 11 04:05:01 np0005481065 ansible-async_wrapper.py[47733]: Return async_wrapper task started.
Oct 11 04:05:02 np0005481065 python3.9[47738]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 11 04:05:02 np0005481065 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 11 04:05:02 np0005481065 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 11 04:05:02 np0005481065 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 11 04:05:02 np0005481065 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 11 04:05:02 np0005481065 kernel: cfg80211: failed to load regulatory.db
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.3743] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.3766] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4399] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4400] audit: op="connection-add" uuid="66a35c3e-68a7-438a-af87-75efefe44cf8" name="br-ex-br" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4418] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4419] audit: op="connection-add" uuid="feff718f-fc2d-4369-8035-264f82d6c72b" name="br-ex-port" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4435] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4436] audit: op="connection-add" uuid="75199c12-1835-4773-8170-b10d87d30db8" name="eth1-port" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4449] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4450] audit: op="connection-add" uuid="ee6aa4f4-faeb-4fdd-926a-8c24dd7c2b95" name="vlan20-port" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4464] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4466] audit: op="connection-add" uuid="9714fae3-e26c-46a1-a414-4f5817fd2e92" name="vlan21-port" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4478] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4479] audit: op="connection-add" uuid="a7751bd4-d335-40a4-aeb9-03887ebf43dd" name="vlan22-port" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4491] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4493] audit: op="connection-add" uuid="9739b968-a329-48aa-9a5f-d538d2841418" name="vlan23-port" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4515] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4531] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4532] audit: op="connection-add" uuid="915bf792-51f7-4d98-8547-1c7f5662ca03" name="br-ex-if" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4572] audit: op="connection-update" uuid="1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5" name="ci-private-network" args="ovs-interface.type,connection.controller,connection.slave-type,connection.timestamp,connection.master,connection.port-type,ovs-external-ids.data,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.routing-rules,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.routing-rules" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4589] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4590] audit: op="connection-add" uuid="68bf12a6-8c06-4cec-8dd5-e790a5003f05" name="vlan20-if" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4607] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4609] audit: op="connection-add" uuid="777c7be6-f9b2-4c11-950b-c7b66895645f" name="vlan21-if" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4626] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4627] audit: op="connection-add" uuid="ce324d19-66e4-498e-9d44-d102e4a7c30e" name="vlan22-if" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4646] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4647] audit: op="connection-add" uuid="8aaeb07d-c9b6-4284-b4d9-166d8806c992" name="vlan23-if" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4660] audit: op="connection-delete" uuid="004e078f-b467-3fdf-a5ef-87235f94dee4" name="Wired connection 1" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4673] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4684] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4687] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (66a35c3e-68a7-438a-af87-75efefe44cf8)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4688] audit: op="connection-activate" uuid="66a35c3e-68a7-438a-af87-75efefe44cf8" name="br-ex-br" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4689] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4695] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4699] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (feff718f-fc2d-4369-8035-264f82d6c72b)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4701] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4705] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4709] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (75199c12-1835-4773-8170-b10d87d30db8)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4711] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4718] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4721] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (ee6aa4f4-faeb-4fdd-926a-8c24dd7c2b95)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4723] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4729] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4733] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9714fae3-e26c-46a1-a414-4f5817fd2e92)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4734] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4740] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4745] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a7751bd4-d335-40a4-aeb9-03887ebf43dd)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4746] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4751] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4757] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9739b968-a329-48aa-9a5f-d538d2841418)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4757] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4759] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4761] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4766] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4771] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4775] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (915bf792-51f7-4d98-8547-1c7f5662ca03)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4776] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4779] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4780] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4781] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4782] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4792] device (eth1): disconnecting for new activation request.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4792] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4794] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4796] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4797] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4800] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4804] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4807] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (68bf12a6-8c06-4cec-8dd5-e790a5003f05)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4807] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4810] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4812] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4813] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4816] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4819] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4822] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (777c7be6-f9b2-4c11-950b-c7b66895645f)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4823] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4825] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4827] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4828] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4831] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4837] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4841] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ce324d19-66e4-498e-9d44-d102e4a7c30e)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4841] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4844] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4846] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4846] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4849] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4852] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4855] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (8aaeb07d-c9b6-4284-b4d9-166d8806c992)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4857] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4860] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4861] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4862] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4864] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4877] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4878] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4880] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4882] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4888] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4892] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4897] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4900] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4902] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 kernel: ovs-system: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4906] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4910] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4913] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4914] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4921] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4926] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4928] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4930] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4937] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4942] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4945] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 kernel: Timeout policy base is empty
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4948] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 systemd-udevd[47743]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4953] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4957] dhcp4 (eth0): canceled DHCP transaction
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4957] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4957] dhcp4 (eth0): state changed no lease
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4958] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4976] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.4982] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47739 uid=0 result="fail" reason="Device is not activated"
Oct 11 04:05:04 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5024] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5031] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5037] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5074] device (eth1): disconnecting for new activation request.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5074] audit: op="connection-activate" uuid="1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5" name="ci-private-network" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5075] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5091] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5100] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47739 uid=0 result="success"
Oct 11 04:05:04 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5194] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5318] device (eth1): Activation: starting connection 'ci-private-network' (1f26fdf3-6240-59b7-8a6d-ee72a6a88bd5)
Oct 11 04:05:04 np0005481065 kernel: br-ex: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5330] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5332] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5338] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5339] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5340] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5341] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5341] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5342] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5343] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5368] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5374] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5377] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5381] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5384] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5387] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5390] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5393] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5397] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5400] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5404] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5407] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5411] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5414] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5418] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5422] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5427] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 kernel: vlan22: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5484] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5490] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5493] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5499] device (eth1): Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5517] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5546] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5547] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 kernel: vlan21: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5552] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 systemd-udevd[47744]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5625] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5636] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5659] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5660] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5664] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5686] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5702] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 kernel: vlan20: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5753] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5755] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5764] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 kernel: vlan23: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5850] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:05:04 np0005481065 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5871] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5912] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5924] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5932] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5934] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5940] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5978] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5979] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 11 04:05:04 np0005481065 NetworkManager[44960]: <info>  [1760169904.5985] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 11 04:05:05 np0005481065 NetworkManager[44960]: <info>  [1760169905.7451] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47739 uid=0 result="success"
Oct 11 04:05:05 np0005481065 python3.9[48096]: ansible-ansible.legacy.async_status Invoked with jid=j308668699553.47733 mode=status _async_dir=/root/.ansible_async
Oct 11 04:05:05 np0005481065 NetworkManager[44960]: <info>  [1760169905.9692] checkpoint[0x55b174c6e950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 11 04:05:05 np0005481065 NetworkManager[44960]: <info>  [1760169905.9694] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47739 uid=0 result="success"
Oct 11 04:05:06 np0005481065 NetworkManager[44960]: <info>  [1760169906.4217] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47739 uid=0 result="success"
Oct 11 04:05:06 np0005481065 NetworkManager[44960]: <info>  [1760169906.4238] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47739 uid=0 result="success"
Oct 11 04:05:06 np0005481065 NetworkManager[44960]: <info>  [1760169906.8347] audit: op="networking-control" arg="global-dns-configuration" pid=47739 uid=0 result="success"
Oct 11 04:05:06 np0005481065 NetworkManager[44960]: <info>  [1760169906.8392] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 11 04:05:06 np0005481065 NetworkManager[44960]: <info>  [1760169906.8432] audit: op="networking-control" arg="global-dns-configuration" pid=47739 uid=0 result="success"
Oct 11 04:05:06 np0005481065 NetworkManager[44960]: <info>  [1760169906.8468] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47739 uid=0 result="success"
Oct 11 04:05:06 np0005481065 ansible-async_wrapper.py[47736]: 47737 still running (300)
Oct 11 04:05:07 np0005481065 NetworkManager[44960]: <info>  [1760169907.0367] checkpoint[0x55b174c6ea20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 11 04:05:07 np0005481065 NetworkManager[44960]: <info>  [1760169907.0370] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47739 uid=0 result="success"
Oct 11 04:05:07 np0005481065 ansible-async_wrapper.py[47737]: Module complete (47737)
Oct 11 04:05:09 np0005481065 python3.9[48202]: ansible-ansible.legacy.async_status Invoked with jid=j308668699553.47733 mode=status _async_dir=/root/.ansible_async
Oct 11 04:05:10 np0005481065 python3.9[48302]: ansible-ansible.legacy.async_status Invoked with jid=j308668699553.47733 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 04:05:10 np0005481065 python3.9[48454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:05:11 np0005481065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 11 04:05:11 np0005481065 python3.9[48579]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169910.3856468-322-272096147211193/.source.returncode _original_basename=.tvyzdmd4 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:05:11 np0005481065 ansible-async_wrapper.py[47736]: Done in kid B.
Oct 11 04:05:12 np0005481065 python3.9[48731]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:05:13 np0005481065 python3.9[48855]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169911.8555818-338-280914750347542/.source.cfg _original_basename=.s0o0tdn_ follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:05:14 np0005481065 python3.9[49007]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:05:14 np0005481065 systemd[1]: Reloading Network Manager...
Oct 11 04:05:14 np0005481065 NetworkManager[44960]: <info>  [1760169914.1280] audit: op="reload" arg="0" pid=49011 uid=0 result="success"
Oct 11 04:05:14 np0005481065 NetworkManager[44960]: <info>  [1760169914.1293] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 11 04:05:14 np0005481065 systemd[1]: Reloaded Network Manager.
Oct 11 04:05:14 np0005481065 systemd[1]: session-10.scope: Deactivated successfully.
Oct 11 04:05:14 np0005481065 systemd[1]: session-10.scope: Consumed 55.805s CPU time.
Oct 11 04:05:14 np0005481065 systemd-logind[819]: Session 10 logged out. Waiting for processes to exit.
Oct 11 04:05:14 np0005481065 systemd-logind[819]: Removed session 10.
Oct 11 04:05:19 np0005481065 systemd-logind[819]: New session 11 of user zuul.
Oct 11 04:05:19 np0005481065 systemd[1]: Started Session 11 of User zuul.
Oct 11 04:05:20 np0005481065 python3.9[49195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:05:21 np0005481065 python3.9[49349]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:05:23 np0005481065 python3.9[49543]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:05:23 np0005481065 systemd[1]: session-11.scope: Deactivated successfully.
Oct 11 04:05:23 np0005481065 systemd[1]: session-11.scope: Consumed 2.843s CPU time.
Oct 11 04:05:23 np0005481065 systemd-logind[819]: Session 11 logged out. Waiting for processes to exit.
Oct 11 04:05:23 np0005481065 systemd-logind[819]: Removed session 11.
Oct 11 04:05:24 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:05:28 np0005481065 systemd-logind[819]: New session 12 of user zuul.
Oct 11 04:05:28 np0005481065 systemd[1]: Started Session 12 of User zuul.
Oct 11 04:05:29 np0005481065 python3.9[49725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:05:30 np0005481065 python3.9[49879]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:05:31 np0005481065 python3.9[50035]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:05:33 np0005481065 python3.9[50120]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:05:35 np0005481065 python3.9[50273]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:05:36 np0005481065 python3.9[50469]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:05:37 np0005481065 python3.9[50621]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:05:37 np0005481065 podman[50622]: 2025-10-11 08:05:37.595650222 +0000 UTC m=+0.054489014 system refresh
Oct 11 04:05:38 np0005481065 python3.9[50783]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:05:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:05:39 np0005481065 python3.9[50906]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760169937.860626-79-251040263105214/.source.json follow=False _original_basename=podman_network_config.j2 checksum=bc599afddedf0e44df4d6d74ba83bf45ba59edbf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:05:40 np0005481065 python3.9[51058]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:05:40 np0005481065 python3.9[51181]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760169939.579132-94-188980569050018/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a92d4bce7d9cad3a31d9a297b9e21f629ee446cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:05:41 np0005481065 python3.9[51333]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:05:42 np0005481065 python3.9[51485]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:05:43 np0005481065 python3.9[51637]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:05:44 np0005481065 python3.9[51789]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:05:45 np0005481065 python3.9[51941]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:05:47 np0005481065 python3.9[52094]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:05:48 np0005481065 python3.9[52248]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:05:49 np0005481065 python3.9[52400]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:05:50 np0005481065 python3.9[52552]: ansible-service_facts Invoked
Oct 11 04:05:50 np0005481065 network[52569]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:05:50 np0005481065 network[52570]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:05:50 np0005481065 network[52571]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:05:56 np0005481065 python3.9[53025]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:05:59 np0005481065 python3.9[53178]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 11 04:06:00 np0005481065 python3.9[53330]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:01 np0005481065 python3.9[53455]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169959.9653065-226-193543449148257/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:02 np0005481065 python3.9[53609]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:02 np0005481065 python3.9[53734]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169961.52794-241-113992433512215/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:04 np0005481065 python3.9[53888]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:05 np0005481065 python3.9[54042]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:06:06 np0005481065 python3.9[54126]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:06:07 np0005481065 python3.9[54280]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:06:08 np0005481065 python3.9[54364]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:06:08 np0005481065 chronyd[827]: chronyd exiting
Oct 11 04:06:08 np0005481065 systemd[1]: Stopping NTP client/server...
Oct 11 04:06:08 np0005481065 systemd[1]: chronyd.service: Deactivated successfully.
Oct 11 04:06:08 np0005481065 systemd[1]: Stopped NTP client/server.
Oct 11 04:06:08 np0005481065 systemd[1]: Starting NTP client/server...
Oct 11 04:06:08 np0005481065 chronyd[54372]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 11 04:06:08 np0005481065 chronyd[54372]: Frequency -28.592 +/- 0.477 ppm read from /var/lib/chrony/drift
Oct 11 04:06:08 np0005481065 chronyd[54372]: Loaded seccomp filter (level 2)
Oct 11 04:06:08 np0005481065 systemd[1]: Started NTP client/server.
Oct 11 04:06:09 np0005481065 systemd[1]: session-12.scope: Deactivated successfully.
Oct 11 04:06:09 np0005481065 systemd[1]: session-12.scope: Consumed 29.957s CPU time.
Oct 11 04:06:09 np0005481065 systemd-logind[819]: Session 12 logged out. Waiting for processes to exit.
Oct 11 04:06:09 np0005481065 systemd-logind[819]: Removed session 12.
Oct 11 04:06:14 np0005481065 systemd-logind[819]: New session 13 of user zuul.
Oct 11 04:06:14 np0005481065 systemd[1]: Started Session 13 of User zuul.
Oct 11 04:06:15 np0005481065 python3.9[54553]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:16 np0005481065 python3.9[54705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:17 np0005481065 python3.9[54828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169975.4272377-34-82624138171834/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:17 np0005481065 systemd[1]: session-13.scope: Deactivated successfully.
Oct 11 04:06:17 np0005481065 systemd[1]: session-13.scope: Consumed 2.073s CPU time.
Oct 11 04:06:17 np0005481065 systemd-logind[819]: Session 13 logged out. Waiting for processes to exit.
Oct 11 04:06:17 np0005481065 systemd-logind[819]: Removed session 13.
Oct 11 04:06:22 np0005481065 systemd-logind[819]: New session 14 of user zuul.
Oct 11 04:06:22 np0005481065 systemd[1]: Started Session 14 of User zuul.
Oct 11 04:06:23 np0005481065 python3.9[55006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:06:24 np0005481065 python3.9[55162]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:25 np0005481065 python3.9[55337]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:26 np0005481065 python3.9[55460]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760169985.0763757-41-41402993811411/.source.json _original_basename=.srthma_5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:27 np0005481065 python3.9[55612]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:28 np0005481065 python3.9[55735]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760169987.1508756-64-224638150415962/.source _original_basename=.7_uk4neq follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:29 np0005481065 python3.9[55887]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:06:29 np0005481065 python3.9[56039]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:30 np0005481065 python3.9[56162]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760169989.3140893-88-134565165573267/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:06:31 np0005481065 python3.9[56314]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:31 np0005481065 python3.9[56437]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760169990.588385-88-20915093084881/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:06:32 np0005481065 python3.9[56589]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:33 np0005481065 python3.9[56741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:33 np0005481065 python3.9[56864]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760169992.6682014-125-118999247945903/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:34 np0005481065 python3.9[57016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:35 np0005481065 python3.9[57139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760169994.0809183-140-50959301143048/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:36 np0005481065 python3.9[57291]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:06:36 np0005481065 systemd[1]: Reloading.
Oct 11 04:06:36 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:06:36 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:06:36 np0005481065 systemd[1]: Reloading.
Oct 11 04:06:37 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:06:37 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:06:37 np0005481065 systemd[1]: Starting EDPM Container Shutdown...
Oct 11 04:06:37 np0005481065 systemd[1]: Finished EDPM Container Shutdown.
Oct 11 04:06:37 np0005481065 python3.9[57519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:38 np0005481065 python3.9[57642]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760169997.390027-163-204599660215192/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:39 np0005481065 python3.9[57794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:40 np0005481065 python3.9[57917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760169998.825276-178-247405354318785/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:40 np0005481065 python3.9[58069]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:06:40 np0005481065 systemd[1]: Reloading.
Oct 11 04:06:41 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:06:41 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:06:41 np0005481065 systemd[1]: Reloading.
Oct 11 04:06:41 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:06:41 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:06:41 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:06:41 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:06:41 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:06:41 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:06:42 np0005481065 python3.9[58295]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:06:42 np0005481065 network[58312]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:06:42 np0005481065 network[58313]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:06:42 np0005481065 network[58314]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:06:48 np0005481065 python3.9[58578]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:06:48 np0005481065 systemd[1]: Reloading.
Oct 11 04:06:48 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:06:48 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:06:48 np0005481065 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 11 04:06:48 np0005481065 iptables.init[58618]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 11 04:06:48 np0005481065 iptables.init[58618]: iptables: Flushing firewall rules: [  OK  ]
Oct 11 04:06:48 np0005481065 systemd[1]: iptables.service: Deactivated successfully.
Oct 11 04:06:48 np0005481065 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 11 04:06:49 np0005481065 python3.9[58814]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:06:50 np0005481065 python3.9[58968]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:06:50 np0005481065 systemd[1]: Reloading.
Oct 11 04:06:50 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:06:50 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:06:51 np0005481065 systemd[1]: Starting Netfilter Tables...
Oct 11 04:06:51 np0005481065 systemd[1]: Finished Netfilter Tables.
Oct 11 04:06:52 np0005481065 python3.9[59160]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:06:53 np0005481065 python3.9[59313]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:06:53 np0005481065 python3.9[59438]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760170012.575843-247-165259625905040/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:06:54 np0005481065 python3.9[59589]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:07:20 np0005481065 systemd-logind[819]: Session 14 logged out. Waiting for processes to exit.
Oct 11 04:07:20 np0005481065 systemd[1]: session-14.scope: Deactivated successfully.
Oct 11 04:07:20 np0005481065 systemd[1]: session-14.scope: Consumed 23.475s CPU time.
Oct 11 04:07:20 np0005481065 systemd-logind[819]: Removed session 14.
Oct 11 04:07:32 np0005481065 systemd-logind[819]: New session 15 of user zuul.
Oct 11 04:07:32 np0005481065 systemd[1]: Started Session 15 of User zuul.
Oct 11 04:07:34 np0005481065 python3.9[59784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:07:35 np0005481065 python3.9[59940]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:36 np0005481065 python3.9[60115]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:36 np0005481065 python3.9[60193]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.lir8aqz4 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:37 np0005481065 python3.9[60345]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:38 np0005481065 python3.9[60423]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.lu6ldluo recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:39 np0005481065 python3.9[60575]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:07:40 np0005481065 python3.9[60727]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:40 np0005481065 python3.9[60805]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:07:41 np0005481065 python3.9[60957]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:41 np0005481065 python3.9[61035]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:07:42 np0005481065 python3.9[61187]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:43 np0005481065 python3.9[61339]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:44 np0005481065 python3.9[61417]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:44 np0005481065 python3.9[61569]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:45 np0005481065 python3.9[61648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:46 np0005481065 python3.9[61800]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:07:46 np0005481065 systemd[1]: Reloading.
Oct 11 04:07:46 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:07:46 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:07:47 np0005481065 python3.9[61991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:48 np0005481065 python3.9[62069]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:49 np0005481065 python3.9[62221]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:49 np0005481065 python3.9[62299]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:50 np0005481065 python3.9[62451]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:07:50 np0005481065 systemd[1]: Reloading.
Oct 11 04:07:50 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:07:50 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:07:50 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:07:50 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:07:50 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:07:50 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:07:51 np0005481065 python3.9[62643]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:07:51 np0005481065 network[62660]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:07:52 np0005481065 network[62661]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:07:52 np0005481065 network[62662]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:07:56 np0005481065 python3.9[62925]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:57 np0005481065 python3.9[63003]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:58 np0005481065 python3.9[63155]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:07:58 np0005481065 python3.9[63307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:07:59 np0005481065 python3.9[63430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170078.29271-216-45657229041517/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:00 np0005481065 python3.9[63582]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 04:08:00 np0005481065 systemd[1]: Starting Time & Date Service...
Oct 11 04:08:00 np0005481065 systemd[1]: Started Time & Date Service.
Oct 11 04:08:01 np0005481065 python3.9[63738]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:02 np0005481065 python3.9[63892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:03 np0005481065 python3.9[64015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760170081.931296-251-125937697710313/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:03 np0005481065 python3.9[64167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:04 np0005481065 python3.9[64290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760170083.3843732-266-179369313928490/.source.yaml _original_basename=.xjpw063m follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:05 np0005481065 python3.9[64442]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:06 np0005481065 python3.9[64565]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170084.862568-281-193415704667325/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:06 np0005481065 python3.9[64717]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:08:07 np0005481065 python3.9[64870]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:08:08 np0005481065 python3[65023]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:08:09 np0005481065 python3.9[65175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:10 np0005481065 python3.9[65298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170089.0061164-320-74218756387358/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:11 np0005481065 python3.9[65450]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:11 np0005481065 python3.9[65573]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170090.4247499-335-164030074078742/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:12 np0005481065 python3.9[65725]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:13 np0005481065 python3.9[65848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170091.946061-350-45280199151149/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:14 np0005481065 python3.9[66000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:14 np0005481065 python3.9[66123]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170093.4000113-365-50307412677136/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:15 np0005481065 python3.9[66275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:08:16 np0005481065 python3.9[66398]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170094.9076986-380-189641945681368/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:16 np0005481065 python3.9[66550]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:17 np0005481065 python3.9[66702]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:08:18 np0005481065 chronyd[54372]: Selected source 54.39.23.64 (pool.ntp.org)
Oct 11 04:08:18 np0005481065 python3.9[66861]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:19 np0005481065 python3.9[67014]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:20 np0005481065 python3.9[67166]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:21 np0005481065 python3.9[67318]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:08:21 np0005481065 python3.9[67471]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:08:22 np0005481065 systemd[1]: session-15.scope: Deactivated successfully.
Oct 11 04:08:22 np0005481065 systemd[1]: session-15.scope: Consumed 39.042s CPU time.
Oct 11 04:08:22 np0005481065 systemd-logind[819]: Session 15 logged out. Waiting for processes to exit.
Oct 11 04:08:22 np0005481065 systemd-logind[819]: Removed session 15.
Oct 11 04:08:27 np0005481065 systemd-logind[819]: New session 16 of user zuul.
Oct 11 04:08:27 np0005481065 systemd[1]: Started Session 16 of User zuul.
Oct 11 04:08:28 np0005481065 python3.9[67652]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 11 04:08:29 np0005481065 python3.9[67804]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:08:30 np0005481065 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 04:08:30 np0005481065 python3.9[67956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:08:31 np0005481065 python3.9[68111]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCivYfehvYlxMJv6FhpKksF/3ZVfXlrW356hXGhObfLQwq7tkfDgVlysqk5pKUEyYaxJfj0WGcCg5fil2DB0ZrFq44iBfV4QRyj3TZBBsGijCrLjs1hcI11dwQ0m76F9VDvGHsS2XaXKkJOTFTLA2lWyG/vQDOnm766pG2BOGCmVpw96pgt+SOmQQUHL1AJe9XemrIXoSvH3T95jwe0JDrreyQa4B9QkJb2w0gA5lO+XacRt8UKTIxLeuF2xvzwrjwdY6GAdm13Km/LlkKxgikd6CN4ljLq2gLhKG1pgWthtivKFpBrdtJjwiU0gksTN1iVRb4luaNL2ZQP6JYEz+I6QB9dMS9+m4XIIr42u8I0FPSZ22OY7QHn0nWlufq+K64IQMt1RSMqOcS7p3xxbmd2SqPHOFXcA1h7wJfwn6OTSMxP7ufblynW1XNSvtnAtQm50MiZhx/tfUDqKtbDLdmKBAKPEiAXA1tj58xWKgoQARydbrbBiG8Wet8SfA8/iDE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDw0r1M89bVpRLfKhqkr7n9CHg5tcRb2DKaMDT/j0RfC#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKYrDbGmd7omtqvYq6RLwtY7+e5chFoAg2bYOxRlcutwzejZM82i1EyD2u9m4CitXVxyIjs9WwUWDqc7NjzVzDg=#012 create=True mode=0644 path=/tmp/ansible._w6i806v state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:32 np0005481065 python3.9[68263]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._w6i806v' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:08:33 np0005481065 python3.9[68417]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._w6i806v state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:34 np0005481065 systemd[1]: session-16.scope: Deactivated successfully.
Oct 11 04:08:34 np0005481065 systemd[1]: session-16.scope: Consumed 4.173s CPU time.
Oct 11 04:08:34 np0005481065 systemd-logind[819]: Session 16 logged out. Waiting for processes to exit.
Oct 11 04:08:34 np0005481065 systemd-logind[819]: Removed session 16.
Oct 11 04:08:39 np0005481065 systemd-logind[819]: New session 17 of user zuul.
Oct 11 04:08:39 np0005481065 systemd[1]: Started Session 17 of User zuul.
Oct 11 04:08:40 np0005481065 python3.9[68595]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:08:42 np0005481065 python3.9[68751]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 04:08:43 np0005481065 python3.9[68905]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:08:45 np0005481065 python3.9[69058]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:08:46 np0005481065 python3.9[69211]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:08:47 np0005481065 python3.9[69365]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:08:48 np0005481065 python3.9[69520]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:08:48 np0005481065 systemd[1]: session-17.scope: Deactivated successfully.
Oct 11 04:08:48 np0005481065 systemd[1]: session-17.scope: Consumed 5.571s CPU time.
Oct 11 04:08:48 np0005481065 systemd-logind[819]: Session 17 logged out. Waiting for processes to exit.
Oct 11 04:08:48 np0005481065 systemd-logind[819]: Removed session 17.
Oct 11 04:08:54 np0005481065 systemd-logind[819]: New session 18 of user zuul.
Oct 11 04:08:54 np0005481065 systemd[1]: Started Session 18 of User zuul.
Oct 11 04:08:55 np0005481065 python3.9[69699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:08:56 np0005481065 python3.9[69855]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:08:57 np0005481065 python3.9[69939]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:08:59 np0005481065 python3.9[70090]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:00 np0005481065 python3.9[70241]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:09:01 np0005481065 python3.9[70391]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:01 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:09:02 np0005481065 python3.9[70542]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:02 np0005481065 systemd[1]: session-18.scope: Deactivated successfully.
Oct 11 04:09:02 np0005481065 systemd[1]: session-18.scope: Consumed 6.626s CPU time.
Oct 11 04:09:02 np0005481065 systemd-logind[819]: Session 18 logged out. Waiting for processes to exit.
Oct 11 04:09:02 np0005481065 systemd-logind[819]: Removed session 18.
Oct 11 04:09:10 np0005481065 systemd-logind[819]: New session 19 of user zuul.
Oct 11 04:09:10 np0005481065 systemd[1]: Started Session 19 of User zuul.
Oct 11 04:09:16 np0005481065 python3[71308]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:09:18 np0005481065 python3[71403]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:09:19 np0005481065 python3[71430]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:20 np0005481065 python3[71456]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:20 np0005481065 kernel: loop: module loaded
Oct 11 04:09:20 np0005481065 kernel: loop3: detected capacity change from 0 to 41943040
Oct 11 04:09:20 np0005481065 python3[71491]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:20 np0005481065 lvm[71494]: PV /dev/loop3 not used.
Oct 11 04:09:20 np0005481065 lvm[71503]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:09:20 np0005481065 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 11 04:09:21 np0005481065 lvm[71505]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct 11 04:09:21 np0005481065 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 11 04:09:21 np0005481065 python3[71583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:09:21 np0005481065 python3[71656]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170161.182264-32837-181008622043182/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:22 np0005481065 python3[71706]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:09:22 np0005481065 systemd[1]: Reloading.
Oct 11 04:09:22 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:09:22 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:09:23 np0005481065 systemd[1]: Starting Ceph OSD losetup...
Oct 11 04:09:23 np0005481065 bash[71747]: /dev/loop3: [64513]:4555666 (/var/lib/ceph-osd-0.img)
Oct 11 04:09:23 np0005481065 systemd[1]: Finished Ceph OSD losetup.
Oct 11 04:09:23 np0005481065 lvm[71748]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:09:23 np0005481065 lvm[71748]: VG ceph_vg0 finished
Oct 11 04:09:23 np0005481065 python3[71774]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:09:25 np0005481065 python3[71801]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:25 np0005481065 python3[71827]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:25 np0005481065 kernel: loop4: detected capacity change from 0 to 41943040
Oct 11 04:09:25 np0005481065 python3[71859]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:25 np0005481065 lvm[71862]: PV /dev/loop4 not used.
Oct 11 04:09:26 np0005481065 lvm[71872]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:09:26 np0005481065 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 11 04:09:26 np0005481065 lvm[71874]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct 11 04:09:26 np0005481065 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 11 04:09:26 np0005481065 python3[71952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:09:27 np0005481065 python3[72025]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170166.3015263-32866-242874055153103/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:27 np0005481065 python3[72075]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:09:27 np0005481065 systemd[1]: Reloading.
Oct 11 04:09:27 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:09:27 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:09:27 np0005481065 systemd[1]: Starting Ceph OSD losetup...
Oct 11 04:09:27 np0005481065 bash[72115]: /dev/loop4: [64513]:4727394 (/var/lib/ceph-osd-1.img)
Oct 11 04:09:27 np0005481065 systemd[1]: Finished Ceph OSD losetup.
Oct 11 04:09:27 np0005481065 lvm[72116]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:09:27 np0005481065 lvm[72116]: VG ceph_vg1 finished
Oct 11 04:09:28 np0005481065 python3[72142]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:09:29 np0005481065 python3[72169]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:30 np0005481065 python3[72195]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:30 np0005481065 kernel: loop5: detected capacity change from 0 to 41943040
Oct 11 04:09:30 np0005481065 python3[72227]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:30 np0005481065 lvm[72230]: PV /dev/loop5 not used.
Oct 11 04:09:30 np0005481065 lvm[72232]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:09:30 np0005481065 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 11 04:09:30 np0005481065 lvm[72243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:09:30 np0005481065 lvm[72243]: VG ceph_vg2 finished
Oct 11 04:09:30 np0005481065 lvm[72241]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct 11 04:09:30 np0005481065 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 11 04:09:31 np0005481065 python3[72321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:09:31 np0005481065 python3[72394]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170171.1189535-32893-113581186654544/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:32 np0005481065 python3[72444]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:09:32 np0005481065 systemd[1]: Reloading.
Oct 11 04:09:32 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:09:32 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:09:32 np0005481065 systemd[1]: Starting Ceph OSD losetup...
Oct 11 04:09:32 np0005481065 bash[72485]: /dev/loop5: [64513]:4812253 (/var/lib/ceph-osd-2.img)
Oct 11 04:09:32 np0005481065 systemd[1]: Finished Ceph OSD losetup.
Oct 11 04:09:32 np0005481065 lvm[72486]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:09:32 np0005481065 lvm[72486]: VG ceph_vg2 finished
Oct 11 04:09:34 np0005481065 python3[72510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:09:37 np0005481065 python3[72603]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 11 04:09:38 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:09:38 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:09:39 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:09:39 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:09:39 np0005481065 systemd[1]: run-r1a8fa6414b374bd6a5ca341431b7d381.service: Deactivated successfully.
Oct 11 04:09:39 np0005481065 python3[72718]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:39 np0005481065 python3[72746]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:09:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:09:40 np0005481065 python3[72811]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:09:41 np0005481065 python3[72837]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:42 np0005481065 python3[72915]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:09:42 np0005481065 python3[72988]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170181.6822581-33040-57858992056871/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:43 np0005481065 python3[73090]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:09:43 np0005481065 python3[73163]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170183.0165489-33058-136585937567973/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:09:44 np0005481065 python3[73213]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:44 np0005481065 python3[73241]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:45 np0005481065 python3[73269]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:09:45 np0005481065 python3[73297]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:09:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:09:45 np0005481065 systemd-logind[819]: New session 20 of user ceph-admin.
Oct 11 04:09:45 np0005481065 systemd[1]: Created slice User Slice of UID 42477.
Oct 11 04:09:45 np0005481065 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 11 04:09:45 np0005481065 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 11 04:09:45 np0005481065 systemd[1]: Starting User Manager for UID 42477...
Oct 11 04:09:46 np0005481065 systemd[73317]: Queued start job for default target Main User Target.
Oct 11 04:09:46 np0005481065 systemd[73317]: Created slice User Application Slice.
Oct 11 04:09:46 np0005481065 systemd[73317]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 04:09:46 np0005481065 systemd[73317]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:09:46 np0005481065 systemd[73317]: Reached target Paths.
Oct 11 04:09:46 np0005481065 systemd[73317]: Reached target Timers.
Oct 11 04:09:46 np0005481065 systemd[73317]: Starting D-Bus User Message Bus Socket...
Oct 11 04:09:46 np0005481065 systemd[73317]: Starting Create User's Volatile Files and Directories...
Oct 11 04:09:46 np0005481065 systemd[73317]: Finished Create User's Volatile Files and Directories.
Oct 11 04:09:46 np0005481065 systemd[73317]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:09:46 np0005481065 systemd[73317]: Reached target Sockets.
Oct 11 04:09:46 np0005481065 systemd[73317]: Reached target Basic System.
Oct 11 04:09:46 np0005481065 systemd[73317]: Reached target Main User Target.
Oct 11 04:09:46 np0005481065 systemd[73317]: Startup finished in 123ms.
Oct 11 04:09:46 np0005481065 systemd[1]: Started User Manager for UID 42477.
Oct 11 04:09:46 np0005481065 systemd[1]: Started Session 20 of User ceph-admin.
Oct 11 04:09:46 np0005481065 systemd[1]: session-20.scope: Deactivated successfully.
Oct 11 04:09:46 np0005481065 systemd-logind[819]: Session 20 logged out. Waiting for processes to exit.
Oct 11 04:09:46 np0005481065 systemd-logind[819]: Removed session 20.
Oct 11 04:09:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-compat2543730209-lower\x2dmapped.mount: Deactivated successfully.
Oct 11 04:09:56 np0005481065 systemd[1]: Stopping User Manager for UID 42477...
Oct 11 04:09:56 np0005481065 systemd[73317]: Activating special unit Exit the Session...
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped target Main User Target.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped target Basic System.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped target Paths.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped target Sockets.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped target Timers.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 04:09:56 np0005481065 systemd[73317]: Closed D-Bus User Message Bus Socket.
Oct 11 04:09:56 np0005481065 systemd[73317]: Stopped Create User's Volatile Files and Directories.
Oct 11 04:09:56 np0005481065 systemd[73317]: Removed slice User Application Slice.
Oct 11 04:09:56 np0005481065 systemd[73317]: Reached target Shutdown.
Oct 11 04:09:56 np0005481065 systemd[73317]: Finished Exit the Session.
Oct 11 04:09:56 np0005481065 systemd[73317]: Reached target Exit the Session.
Oct 11 04:09:56 np0005481065 systemd[1]: user@42477.service: Deactivated successfully.
Oct 11 04:09:56 np0005481065 systemd[1]: Stopped User Manager for UID 42477.
Oct 11 04:09:56 np0005481065 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 11 04:09:56 np0005481065 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 11 04:09:56 np0005481065 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 11 04:09:56 np0005481065 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 11 04:09:56 np0005481065 systemd[1]: Removed slice User Slice of UID 42477.
Oct 11 04:09:59 np0005481065 podman[73370]: 2025-10-11 08:09:59.940104525 +0000 UTC m=+13.651715631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:09:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.036919062 +0000 UTC m=+0.065980513 container create b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215 (image=quay.io/ceph/ceph:v18, name=goofy_blackwell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:10:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2657759712-merged.mount: Deactivated successfully.
Oct 11 04:10:00 np0005481065 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 11 04:10:00 np0005481065 systemd[1]: Started libpod-conmon-b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215.scope.
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.009073012 +0000 UTC m=+0.038134493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.172693356 +0000 UTC m=+0.201754857 container init b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215 (image=quay.io/ceph/ceph:v18, name=goofy_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.180429035 +0000 UTC m=+0.209490476 container start b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215 (image=quay.io/ceph/ceph:v18, name=goofy_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.183803671 +0000 UTC m=+0.212865112 container attach b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215 (image=quay.io/ceph/ceph:v18, name=goofy_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:10:00 np0005481065 goofy_blackwell[73447]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 11 04:10:00 np0005481065 systemd[1]: libpod-b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215.scope: Deactivated successfully.
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.488376005 +0000 UTC m=+0.517437446 container died b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215 (image=quay.io/ceph/ceph:v18, name=goofy_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:10:00 np0005481065 podman[73431]: 2025-10-11 08:10:00.553795011 +0000 UTC m=+0.582856472 container remove b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215 (image=quay.io/ceph/ceph:v18, name=goofy_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:00 np0005481065 systemd[1]: libpod-conmon-b21056e6ae2347846a6ebfad5cb3ee93871e2f08ea36466889ae32ab3c21e215.scope: Deactivated successfully.
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.620961227 +0000 UTC m=+0.041862909 container create 15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f (image=quay.io/ceph/ceph:v18, name=charming_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:10:00 np0005481065 systemd[1]: Started libpod-conmon-15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f.scope.
Oct 11 04:10:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.603934504 +0000 UTC m=+0.024836186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.710974722 +0000 UTC m=+0.131876474 container init 15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f (image=quay.io/ceph/ceph:v18, name=charming_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.720456341 +0000 UTC m=+0.141358043 container start 15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f (image=quay.io/ceph/ceph:v18, name=charming_villani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.724445994 +0000 UTC m=+0.145347766 container attach 15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f (image=quay.io/ceph/ceph:v18, name=charming_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:10:00 np0005481065 charming_villani[73481]: 167 167
Oct 11 04:10:00 np0005481065 systemd[1]: libpod-15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f.scope: Deactivated successfully.
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.726934955 +0000 UTC m=+0.147836657 container died 15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f (image=quay.io/ceph/ceph:v18, name=charming_villani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:10:00 np0005481065 podman[73465]: 2025-10-11 08:10:00.768552676 +0000 UTC m=+0.189454338 container remove 15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f (image=quay.io/ceph/ceph:v18, name=charming_villani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:10:00 np0005481065 systemd[1]: libpod-conmon-15203694d321c7ecf0612e1644e3137007f259d8aed1c4a17cd0e898e7e00f6f.scope: Deactivated successfully.
Oct 11 04:10:00 np0005481065 podman[73498]: 2025-10-11 08:10:00.865394404 +0000 UTC m=+0.064103580 container create 12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643 (image=quay.io/ceph/ceph:v18, name=blissful_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:10:00 np0005481065 systemd[1]: Started libpod-conmon-12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643.scope.
Oct 11 04:10:00 np0005481065 podman[73498]: 2025-10-11 08:10:00.839141799 +0000 UTC m=+0.037851035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-82699ef3603532e017eeb7099161572b174bad49937bdd8a1ffd731ac2647d36-merged.mount: Deactivated successfully.
Oct 11 04:10:00 np0005481065 podman[73498]: 2025-10-11 08:10:00.955176062 +0000 UTC m=+0.153885288 container init 12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643 (image=quay.io/ceph/ceph:v18, name=blissful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:00 np0005481065 podman[73498]: 2025-10-11 08:10:00.964864877 +0000 UTC m=+0.163574053 container start 12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643 (image=quay.io/ceph/ceph:v18, name=blissful_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:10:00 np0005481065 podman[73498]: 2025-10-11 08:10:00.968903202 +0000 UTC m=+0.167612448 container attach 12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643 (image=quay.io/ceph/ceph:v18, name=blissful_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:00 np0005481065 blissful_dijkstra[73515]: AQDYEOpoYOCBOxAAS4nHMLerFL4xcEDANkzDLQ==
Oct 11 04:10:01 np0005481065 systemd[1]: libpod-12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643.scope: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73498]: 2025-10-11 08:10:01.00266804 +0000 UTC m=+0.201377226 container died 12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643 (image=quay.io/ceph/ceph:v18, name=blissful_dijkstra, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e6c01ca35574a9bca8f2b433d2003c9f40730aef2551ff7900d1f33fc6eb81a7-merged.mount: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73498]: 2025-10-11 08:10:01.049145289 +0000 UTC m=+0.247854445 container remove 12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643 (image=quay.io/ceph/ceph:v18, name=blissful_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:10:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:01 np0005481065 systemd[1]: libpod-conmon-12aaa8b2e54362e50d4f03895ee5bbea1b6e8d747afcce720cd7761ad59ad643.scope: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.14573397 +0000 UTC m=+0.063385100 container create 0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be (image=quay.io/ceph/ceph:v18, name=intelligent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:10:01 np0005481065 systemd[1]: Started libpod-conmon-0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be.scope.
Oct 11 04:10:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.119467805 +0000 UTC m=+0.037118925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.229478107 +0000 UTC m=+0.147129237 container init 0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be (image=quay.io/ceph/ceph:v18, name=intelligent_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.236068394 +0000 UTC m=+0.153719504 container start 0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be (image=quay.io/ceph/ceph:v18, name=intelligent_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.239727248 +0000 UTC m=+0.157378368 container attach 0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be (image=quay.io/ceph/ceph:v18, name=intelligent_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:10:01 np0005481065 intelligent_grothendieck[73551]: AQDZEOpo0Hk2EBAAm4rSYAYEAK1zr0CIpc0vDQ==
Oct 11 04:10:01 np0005481065 systemd[1]: libpod-0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be.scope: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.279526047 +0000 UTC m=+0.197177167 container died 0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be (image=quay.io/ceph/ceph:v18, name=intelligent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:10:01 np0005481065 podman[73533]: 2025-10-11 08:10:01.335727202 +0000 UTC m=+0.253378322 container remove 0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be (image=quay.io/ceph/ceph:v18, name=intelligent_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:10:01 np0005481065 systemd[1]: libpod-conmon-0e436fdc279a8687c8f99acdaa6abeb838f89d12fed9bedf79a746d611f975be.scope: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.43711421 +0000 UTC m=+0.070230935 container create 6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5 (image=quay.io/ceph/ceph:v18, name=lucid_darwin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 11 04:10:01 np0005481065 systemd[1]: Started libpod-conmon-6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5.scope.
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.409799814 +0000 UTC m=+0.042916569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.759043086 +0000 UTC m=+0.392159821 container init 6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5 (image=quay.io/ceph/ceph:v18, name=lucid_darwin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.769170053 +0000 UTC m=+0.402286778 container start 6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5 (image=quay.io/ceph/ceph:v18, name=lucid_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:10:01 np0005481065 lucid_darwin[73586]: AQDZEOpohvTYLxAAp/W/F/d814qYPOwVecmtGA==
Oct 11 04:10:01 np0005481065 systemd[1]: libpod-6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5.scope: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.814557991 +0000 UTC m=+0.447674736 container attach 6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5 (image=quay.io/ceph/ceph:v18, name=lucid_darwin, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.815018194 +0000 UTC m=+0.448134899 container died 6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5 (image=quay.io/ceph/ceph:v18, name=lucid_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:10:01 np0005481065 podman[73570]: 2025-10-11 08:10:01.858569721 +0000 UTC m=+0.491686416 container remove 6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5 (image=quay.io/ceph/ceph:v18, name=lucid_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:10:01 np0005481065 systemd[1]: libpod-conmon-6f33e279d80e86ed7d8ba83d6cffa2cd38fcdbcff06f7a84b6a92463b573b3f5.scope: Deactivated successfully.
Oct 11 04:10:01 np0005481065 podman[73606]: 2025-10-11 08:10:01.956785158 +0000 UTC m=+0.067002413 container create 2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77 (image=quay.io/ceph/ceph:v18, name=nostalgic_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:10:01 np0005481065 systemd[1]: Started libpod-conmon-2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77.scope.
Oct 11 04:10:02 np0005481065 podman[73606]: 2025-10-11 08:10:01.92621551 +0000 UTC m=+0.036432855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c2ded5485044f15e539107f1d2155bff1f5fef71fba56c01373034f6855603/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:02 np0005481065 podman[73606]: 2025-10-11 08:10:02.041967865 +0000 UTC m=+0.152185180 container init 2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77 (image=quay.io/ceph/ceph:v18, name=nostalgic_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:10:02 np0005481065 podman[73606]: 2025-10-11 08:10:02.050439846 +0000 UTC m=+0.160657111 container start 2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77 (image=quay.io/ceph/ceph:v18, name=nostalgic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:10:02 np0005481065 podman[73606]: 2025-10-11 08:10:02.05446303 +0000 UTC m=+0.164680295 container attach 2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77 (image=quay.io/ceph/ceph:v18, name=nostalgic_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:10:02 np0005481065 nostalgic_feistel[73623]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 11 04:10:02 np0005481065 nostalgic_feistel[73623]: setting min_mon_release = pacific
Oct 11 04:10:02 np0005481065 nostalgic_feistel[73623]: /usr/bin/monmaptool: set fsid to 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:02 np0005481065 nostalgic_feistel[73623]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 11 04:10:02 np0005481065 systemd[1]: libpod-2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77.scope: Deactivated successfully.
Oct 11 04:10:02 np0005481065 podman[73606]: 2025-10-11 08:10:02.097953024 +0000 UTC m=+0.208170279 container died 2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77 (image=quay.io/ceph/ceph:v18, name=nostalgic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:10:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-76c2ded5485044f15e539107f1d2155bff1f5fef71fba56c01373034f6855603-merged.mount: Deactivated successfully.
Oct 11 04:10:02 np0005481065 podman[73606]: 2025-10-11 08:10:02.144195867 +0000 UTC m=+0.254413132 container remove 2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77 (image=quay.io/ceph/ceph:v18, name=nostalgic_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:02 np0005481065 systemd[1]: libpod-conmon-2cbf31d95d8d2972ccc20a8a37754bcee7815ddd3738ce237f85d0b7c570ba77.scope: Deactivated successfully.
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.234394166 +0000 UTC m=+0.059068767 container create 1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d (image=quay.io/ceph/ceph:v18, name=brave_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:02 np0005481065 systemd[1]: Started libpod-conmon-1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d.scope.
Oct 11 04:10:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55646fcb608034e2f04c9b8d480aed86b48db3d3c7effb529c07bed237a58bc7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55646fcb608034e2f04c9b8d480aed86b48db3d3c7effb529c07bed237a58bc7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55646fcb608034e2f04c9b8d480aed86b48db3d3c7effb529c07bed237a58bc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55646fcb608034e2f04c9b8d480aed86b48db3d3c7effb529c07bed237a58bc7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.210599151 +0000 UTC m=+0.035273812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.319578534 +0000 UTC m=+0.144253175 container init 1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d (image=quay.io/ceph/ceph:v18, name=brave_volhard, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.32929696 +0000 UTC m=+0.153971571 container start 1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d (image=quay.io/ceph/ceph:v18, name=brave_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.333015615 +0000 UTC m=+0.157690266 container attach 1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d (image=quay.io/ceph/ceph:v18, name=brave_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:02 np0005481065 systemd[1]: libpod-1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d.scope: Deactivated successfully.
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.44489204 +0000 UTC m=+0.269566621 container died 1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d (image=quay.io/ceph/ceph:v18, name=brave_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:02 np0005481065 podman[73643]: 2025-10-11 08:10:02.488738595 +0000 UTC m=+0.313413176 container remove 1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d (image=quay.io/ceph/ceph:v18, name=brave_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:02 np0005481065 systemd[1]: libpod-conmon-1b2a5e8b1cb2de1e8dd34024c96cab2e0bc40a47fd8ccb0881c53e59e95a749d.scope: Deactivated successfully.
Oct 11 04:10:02 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:02 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:02 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:02 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:02 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:02 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:03 np0005481065 systemd[1]: Reached target All Ceph clusters and services.
Oct 11 04:10:03 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:03 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:03 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:03 np0005481065 systemd[1]: Reached target Ceph cluster 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:10:03 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:03 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:03 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:03 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:03 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:03 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:03 np0005481065 systemd[1]: Created slice Slice /system/ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:10:03 np0005481065 systemd[1]: Reached target System Time Set.
Oct 11 04:10:03 np0005481065 systemd[1]: Reached target System Time Synchronized.
Oct 11 04:10:03 np0005481065 systemd[1]: Starting Ceph mon.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:10:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:04 np0005481065 podman[73938]: 2025-10-11 08:10:04.296421525 +0000 UTC m=+0.069713059 container create bb73aedc412b30fb4e64a605c2318406a7369703fc626bc4046500f1c854d305 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a644146e30e5a8fcc588936aafb53066e58685d3369874962a58ac1f9aa50d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a644146e30e5a8fcc588936aafb53066e58685d3369874962a58ac1f9aa50d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a644146e30e5a8fcc588936aafb53066e58685d3369874962a58ac1f9aa50d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 podman[73938]: 2025-10-11 08:10:04.267141374 +0000 UTC m=+0.040432938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a644146e30e5a8fcc588936aafb53066e58685d3369874962a58ac1f9aa50d9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 podman[73938]: 2025-10-11 08:10:04.381085608 +0000 UTC m=+0.154377202 container init bb73aedc412b30fb4e64a605c2318406a7369703fc626bc4046500f1c854d305 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:10:04 np0005481065 podman[73938]: 2025-10-11 08:10:04.395093776 +0000 UTC m=+0.168385310 container start bb73aedc412b30fb4e64a605c2318406a7369703fc626bc4046500f1c854d305 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:04 np0005481065 bash[73938]: bb73aedc412b30fb4e64a605c2318406a7369703fc626bc4046500f1c854d305
Oct 11 04:10:04 np0005481065 systemd[1]: Started Ceph mon.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: pidfile_write: ignore empty --pid-file
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: load: jerasure load: lrc 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Git sha 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: DB SUMMARY
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: DB Session ID:  B64ESIL4D5QDAAWT6MP5
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                                     Options.env: 0x555f6304fc40
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                                Options.info_log: 0x555f63928e80
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                                 Options.wal_dir: 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                    Options.write_buffer_manager: 0x555f63938b40
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                               Options.row_cache: None
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                              Options.wal_filter: None
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.wal_compression: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.max_background_jobs: 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.max_total_wal_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:       Options.compaction_readahead_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Compression algorithms supported:
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kZSTD supported: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kXpressCompression supported: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kBZip2Compression supported: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kLZ4Compression supported: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kZlibCompression supported: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: #011kSnappyCompression supported: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:           Options.merge_operator: 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:        Options.compaction_filter: None
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555f63928a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555f639211f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:        Options.write_buffer_size: 33554432
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:  Options.max_write_buffer_number: 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.compression: NoCompression
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.num_levels: 7
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 25d4b2da-549a-4320-988a-8e4ed36a341b
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170204464924, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170204467331, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "B64ESIL4D5QDAAWT6MP5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170204467460, "job": 1, "event": "recovery_finished"}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555f6394ae00
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: DB pointer 0x555f639d4000
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555f639211f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@-1(???) e0 preinit fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-11T08:10:02.387590Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864360,os=Linux}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).mds e1 new map
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: log_channel(cluster) log [DBG] : fsmap 
Oct 11 04:10:04 np0005481065 podman[73958]: 2025-10-11 08:10:04.525596639 +0000 UTC m=+0.074863045 container create afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660 (image=quay.io/ceph/ceph:v18, name=objective_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mkfs 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 11 04:10:04 np0005481065 ceph-mon[73957]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:10:04 np0005481065 systemd[1]: Started libpod-conmon-afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660.scope.
Oct 11 04:10:04 np0005481065 podman[73958]: 2025-10-11 08:10:04.492795048 +0000 UTC m=+0.042061474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72901bc949b9a93f8efba6967adeaa662ce80b9cf63f166e01d80b40e022467e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72901bc949b9a93f8efba6967adeaa662ce80b9cf63f166e01d80b40e022467e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72901bc949b9a93f8efba6967adeaa662ce80b9cf63f166e01d80b40e022467e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:04 np0005481065 podman[73958]: 2025-10-11 08:10:04.62641085 +0000 UTC m=+0.175677256 container init afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660 (image=quay.io/ceph/ceph:v18, name=objective_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:10:04 np0005481065 podman[73958]: 2025-10-11 08:10:04.639406959 +0000 UTC m=+0.188673355 container start afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660 (image=quay.io/ceph/ceph:v18, name=objective_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:10:04 np0005481065 podman[73958]: 2025-10-11 08:10:04.643094274 +0000 UTC m=+0.192360650 container attach afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660 (image=quay.io/ceph/ceph:v18, name=objective_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:10:05 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 04:10:05 np0005481065 ceph-mon[73957]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180615555' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:  cluster:
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    id:     33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    health: HEALTH_OK
Oct 11 04:10:05 np0005481065 objective_williamson[74012]: 
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:  services:
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    mon: 1 daemons, quorum compute-0 (age 0.563965s)
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    mgr: no daemons active
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    osd: 0 osds: 0 up, 0 in
Oct 11 04:10:05 np0005481065 objective_williamson[74012]: 
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:  data:
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    pools:   0 pools, 0 pgs
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    objects: 0 objects, 0 B
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    usage:   0 B used, 0 B / 0 B avail
Oct 11 04:10:05 np0005481065 objective_williamson[74012]:    pgs:     
Oct 11 04:10:05 np0005481065 objective_williamson[74012]: 
Oct 11 04:10:05 np0005481065 systemd[1]: libpod-afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660.scope: Deactivated successfully.
Oct 11 04:10:05 np0005481065 podman[73958]: 2025-10-11 08:10:05.090703347 +0000 UTC m=+0.639969753 container died afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660 (image=quay.io/ceph/ceph:v18, name=objective_williamson, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:10:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-72901bc949b9a93f8efba6967adeaa662ce80b9cf63f166e01d80b40e022467e-merged.mount: Deactivated successfully.
Oct 11 04:10:05 np0005481065 podman[73958]: 2025-10-11 08:10:05.143619379 +0000 UTC m=+0.692885745 container remove afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660 (image=quay.io/ceph/ceph:v18, name=objective_williamson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:10:05 np0005481065 systemd[1]: libpod-conmon-afe0111e5fc551400314be19b3a901090578cd639f485e863cbb2e2d6e374660.scope: Deactivated successfully.
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.244566773 +0000 UTC m=+0.069107222 container create e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2 (image=quay.io/ceph/ceph:v18, name=beautiful_albattani, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:10:05 np0005481065 systemd[1]: Started libpod-conmon-e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2.scope.
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.217784383 +0000 UTC m=+0.042324842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d92405e227c1eeb24eab662da847714055f4181a297c2c9ba55d24219b27c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d92405e227c1eeb24eab662da847714055f4181a297c2c9ba55d24219b27c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d92405e227c1eeb24eab662da847714055f4181a297c2c9ba55d24219b27c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d92405e227c1eeb24eab662da847714055f4181a297c2c9ba55d24219b27c4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.354152143 +0000 UTC m=+0.178692652 container init e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2 (image=quay.io/ceph/ceph:v18, name=beautiful_albattani, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.363116838 +0000 UTC m=+0.187657287 container start e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2 (image=quay.io/ceph/ceph:v18, name=beautiful_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.366929316 +0000 UTC m=+0.191469825 container attach e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2 (image=quay.io/ceph/ceph:v18, name=beautiful_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:10:05 np0005481065 ceph-mon[73957]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:10:05 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 04:10:05 np0005481065 ceph-mon[73957]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3174655771' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:10:05 np0005481065 ceph-mon[73957]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3174655771' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:10:05 np0005481065 beautiful_albattani[74068]: 
Oct 11 04:10:05 np0005481065 beautiful_albattani[74068]: [global]
Oct 11 04:10:05 np0005481065 beautiful_albattani[74068]: #011fsid = 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:05 np0005481065 beautiful_albattani[74068]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 11 04:10:05 np0005481065 beautiful_albattani[74068]: #011osd_crush_chooseleaf_type = 0
Oct 11 04:10:05 np0005481065 systemd[1]: libpod-e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2.scope: Deactivated successfully.
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.81914785 +0000 UTC m=+0.643688269 container died e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2 (image=quay.io/ceph/ceph:v18, name=beautiful_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:10:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-43d92405e227c1eeb24eab662da847714055f4181a297c2c9ba55d24219b27c4-merged.mount: Deactivated successfully.
Oct 11 04:10:05 np0005481065 podman[74051]: 2025-10-11 08:10:05.855278345 +0000 UTC m=+0.679818764 container remove e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2 (image=quay.io/ceph/ceph:v18, name=beautiful_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:05 np0005481065 systemd[1]: libpod-conmon-e1d00f0955459e50a27e36d22e439253f73e7d6375b6b0290cf2790387bdf7a2.scope: Deactivated successfully.
Oct 11 04:10:05 np0005481065 podman[74103]: 2025-10-11 08:10:05.913507988 +0000 UTC m=+0.038127673 container create c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767 (image=quay.io/ceph/ceph:v18, name=sleepy_yonath, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:10:05 np0005481065 systemd[1]: Started libpod-conmon-c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767.scope.
Oct 11 04:10:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d936dfe07795410361fd75697702db223e2223e0fbcc12383217459f6844de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d936dfe07795410361fd75697702db223e2223e0fbcc12383217459f6844de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d936dfe07795410361fd75697702db223e2223e0fbcc12383217459f6844de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d936dfe07795410361fd75697702db223e2223e0fbcc12383217459f6844de/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:05 np0005481065 podman[74103]: 2025-10-11 08:10:05.895219649 +0000 UTC m=+0.019839324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:06 np0005481065 podman[74103]: 2025-10-11 08:10:06.009373889 +0000 UTC m=+0.133993564 container init c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767 (image=quay.io/ceph/ceph:v18, name=sleepy_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:10:06 np0005481065 podman[74103]: 2025-10-11 08:10:06.015454561 +0000 UTC m=+0.140074216 container start c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767 (image=quay.io/ceph/ceph:v18, name=sleepy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:10:06 np0005481065 podman[74103]: 2025-10-11 08:10:06.018740944 +0000 UTC m=+0.143360629 container attach c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767 (image=quay.io/ceph/ceph:v18, name=sleepy_yonath, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1488966714' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:10:06 np0005481065 systemd[1]: libpod-c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767.scope: Deactivated successfully.
Oct 11 04:10:06 np0005481065 podman[74103]: 2025-10-11 08:10:06.378880674 +0000 UTC m=+0.503500379 container died c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767 (image=quay.io/ceph/ceph:v18, name=sleepy_yonath, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-06d936dfe07795410361fd75697702db223e2223e0fbcc12383217459f6844de-merged.mount: Deactivated successfully.
Oct 11 04:10:06 np0005481065 podman[74103]: 2025-10-11 08:10:06.414893506 +0000 UTC m=+0.539513161 container remove c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767 (image=quay.io/ceph/ceph:v18, name=sleepy_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:10:06 np0005481065 systemd[1]: libpod-conmon-c549c38a4898432ae48be1f93ac1fa4c3a2971d6165c76a8494761d5366db767.scope: Deactivated successfully.
Oct 11 04:10:06 np0005481065 systemd[1]: Stopping Ceph mon.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: from='client.? 192.168.122.100:0/3174655771' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: from='client.? 192.168.122.100:0/3174655771' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: mon.compute-0@0(leader) e1 shutdown
Oct 11 04:10:06 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0[73953]: 2025-10-11T08:10:06.695+0000 7f18fb43d640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 11 04:10:06 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0[73953]: 2025-10-11T08:10:06.695+0000 7f18fb43d640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:10:06 np0005481065 ceph-mon[73957]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:10:06 np0005481065 podman[74189]: 2025-10-11 08:10:06.809083774 +0000 UTC m=+0.177398766 container died bb73aedc412b30fb4e64a605c2318406a7369703fc626bc4046500f1c854d305 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:10:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0a644146e30e5a8fcc588936aafb53066e58685d3369874962a58ac1f9aa50d9-merged.mount: Deactivated successfully.
Oct 11 04:10:06 np0005481065 podman[74189]: 2025-10-11 08:10:06.85334954 +0000 UTC m=+0.221664512 container remove bb73aedc412b30fb4e64a605c2318406a7369703fc626bc4046500f1c854d305 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:06 np0005481065 bash[74189]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0
Oct 11 04:10:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 11 04:10:07 np0005481065 systemd[1]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mon.compute-0.service: Deactivated successfully.
Oct 11 04:10:07 np0005481065 systemd[1]: Stopped Ceph mon.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:10:07 np0005481065 systemd[1]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mon.compute-0.service: Consumed 1.383s CPU time.
Oct 11 04:10:07 np0005481065 systemd[1]: Starting Ceph mon.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:10:07 np0005481065 podman[74293]: 2025-10-11 08:10:07.385657886 +0000 UTC m=+0.074236877 container create ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:07 np0005481065 podman[74293]: 2025-10-11 08:10:07.358039162 +0000 UTC m=+0.046618203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523434a55aaccac9ba6eb7e69ce1acdddcfe67cbafa9edacbc74caadd87080bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523434a55aaccac9ba6eb7e69ce1acdddcfe67cbafa9edacbc74caadd87080bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523434a55aaccac9ba6eb7e69ce1acdddcfe67cbafa9edacbc74caadd87080bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523434a55aaccac9ba6eb7e69ce1acdddcfe67cbafa9edacbc74caadd87080bc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 podman[74293]: 2025-10-11 08:10:07.497104509 +0000 UTC m=+0.185683540 container init ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:07 np0005481065 podman[74293]: 2025-10-11 08:10:07.507195266 +0000 UTC m=+0.195774257 container start ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:07 np0005481065 bash[74293]: ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729
Oct 11 04:10:07 np0005481065 systemd[1]: Started Ceph mon.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: pidfile_write: ignore empty --pid-file
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: load: jerasure load: lrc 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Git sha 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: DB SUMMARY
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: DB Session ID:  HBQ1LYMNE0LLZGR7AHC6
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                                     Options.env: 0x558f08f75c40
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                                Options.info_log: 0x558f0ab43040
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                                 Options.wal_dir: 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                    Options.write_buffer_manager: 0x558f0ab52b40
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                               Options.row_cache: None
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                              Options.wal_filter: None
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.wal_compression: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.max_background_jobs: 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.max_total_wal_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:       Options.compaction_readahead_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Compression algorithms supported:
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kZSTD supported: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kXpressCompression supported: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kBZip2Compression supported: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kLZ4Compression supported: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kZlibCompression supported: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: #011kSnappyCompression supported: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:           Options.merge_operator: 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:        Options.compaction_filter: None
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0ab42c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0ab3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:        Options.write_buffer_size: 33554432
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:  Options.max_write_buffer_number: 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.compression: NoCompression
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.num_levels: 7
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 25d4b2da-549a-4320-988a-8e4ed36a341b
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170207571812, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170207576694, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170207, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170207576809, "job": 1, "event": "recovery_finished"}
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558f0ab64e00
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: DB pointer 0x558f0abee000
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.81 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.81 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???) e1 preinit fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).mds e1 new map
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : fsmap 
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 11 04:10:07 np0005481065 podman[74314]: 2025-10-11 08:10:07.620164542 +0000 UTC m=+0.066542830 container create f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f (image=quay.io/ceph/ceph:v18, name=great_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:07 np0005481065 ceph-mon[74313]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 11 04:10:07 np0005481065 systemd[1]: Started libpod-conmon-f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f.scope.
Oct 11 04:10:07 np0005481065 podman[74314]: 2025-10-11 08:10:07.597618732 +0000 UTC m=+0.043997020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23c2126d8f1d01164bb52ddbc9f6ec385bb692019c769c99468917939ebf343/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23c2126d8f1d01164bb52ddbc9f6ec385bb692019c769c99468917939ebf343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23c2126d8f1d01164bb52ddbc9f6ec385bb692019c769c99468917939ebf343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:07 np0005481065 podman[74314]: 2025-10-11 08:10:07.741797754 +0000 UTC m=+0.188176032 container init f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f (image=quay.io/ceph/ceph:v18, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:10:07 np0005481065 podman[74314]: 2025-10-11 08:10:07.752467856 +0000 UTC m=+0.198846134 container start f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f (image=quay.io/ceph/ceph:v18, name=great_bohr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:10:07 np0005481065 podman[74314]: 2025-10-11 08:10:07.756899132 +0000 UTC m=+0.203277410 container attach f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f (image=quay.io/ceph/ceph:v18, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 11 04:10:08 np0005481065 systemd[1]: libpod-f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f.scope: Deactivated successfully.
Oct 11 04:10:08 np0005481065 podman[74314]: 2025-10-11 08:10:08.182324496 +0000 UTC m=+0.628702744 container died f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f (image=quay.io/ceph/ceph:v18, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:10:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b23c2126d8f1d01164bb52ddbc9f6ec385bb692019c769c99468917939ebf343-merged.mount: Deactivated successfully.
Oct 11 04:10:08 np0005481065 podman[74314]: 2025-10-11 08:10:08.229098943 +0000 UTC m=+0.675477191 container remove f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f (image=quay.io/ceph/ceph:v18, name=great_bohr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:10:08 np0005481065 systemd[1]: libpod-conmon-f400ccaee9e481d1cc432c8fbcc78baba532035417435f46bc7d850f0325825f.scope: Deactivated successfully.
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.33785771 +0000 UTC m=+0.076690158 container create e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c (image=quay.io/ceph/ceph:v18, name=condescending_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:10:08 np0005481065 systemd[1]: Started libpod-conmon-e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c.scope.
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.307009964 +0000 UTC m=+0.045842462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd193c19a8a8a0f262fd37fa05fc792f2bc1fb72e603a306099eb0c7fec709dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd193c19a8a8a0f262fd37fa05fc792f2bc1fb72e603a306099eb0c7fec709dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd193c19a8a8a0f262fd37fa05fc792f2bc1fb72e603a306099eb0c7fec709dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.441258724 +0000 UTC m=+0.180091232 container init e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c (image=quay.io/ceph/ceph:v18, name=condescending_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.452075441 +0000 UTC m=+0.190907889 container start e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c (image=quay.io/ceph/ceph:v18, name=condescending_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.455962491 +0000 UTC m=+0.194794959 container attach e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c (image=quay.io/ceph/ceph:v18, name=condescending_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 11 04:10:08 np0005481065 systemd[1]: libpod-e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c.scope: Deactivated successfully.
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.910577083 +0000 UTC m=+0.649409511 container died e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c (image=quay.io/ceph/ceph:v18, name=condescending_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fd193c19a8a8a0f262fd37fa05fc792f2bc1fb72e603a306099eb0c7fec709dd-merged.mount: Deactivated successfully.
Oct 11 04:10:08 np0005481065 podman[74404]: 2025-10-11 08:10:08.962226099 +0000 UTC m=+0.701058537 container remove e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c (image=quay.io/ceph/ceph:v18, name=condescending_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:10:08 np0005481065 systemd[1]: libpod-conmon-e831d79d752e9991ac66956c9085054d3827cb218737a07b0c5bc350b92ff75c.scope: Deactivated successfully.
Oct 11 04:10:09 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:09 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:09 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:09 np0005481065 systemd[1]: Reloading.
Oct 11 04:10:09 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:10:09 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:10:09 np0005481065 systemd[1]: Starting Ceph mgr.compute-0.hcsgrm for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:10:09 np0005481065 podman[74585]: 2025-10-11 08:10:09.9682976 +0000 UTC m=+0.064725508 container create 073bb15022dcd9383f662af733b000a2bdb8d8fcafcd81a9323f57cc99fa002c (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d935daa93024ffa1f9c4f39be165d347f7f59b3543568f6b5fb6101453b8dda9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d935daa93024ffa1f9c4f39be165d347f7f59b3543568f6b5fb6101453b8dda9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d935daa93024ffa1f9c4f39be165d347f7f59b3543568f6b5fb6101453b8dda9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d935daa93024ffa1f9c4f39be165d347f7f59b3543568f6b5fb6101453b8dda9/merged/var/lib/ceph/mgr/ceph-compute-0.hcsgrm supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 podman[74585]: 2025-10-11 08:10:09.940358027 +0000 UTC m=+0.036785985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:10 np0005481065 podman[74585]: 2025-10-11 08:10:10.044104202 +0000 UTC m=+0.140532160 container init 073bb15022dcd9383f662af733b000a2bdb8d8fcafcd81a9323f57cc99fa002c (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:10 np0005481065 podman[74585]: 2025-10-11 08:10:10.059577051 +0000 UTC m=+0.156004959 container start 073bb15022dcd9383f662af733b000a2bdb8d8fcafcd81a9323f57cc99fa002c (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 11 04:10:10 np0005481065 bash[74585]: 073bb15022dcd9383f662af733b000a2bdb8d8fcafcd81a9323f57cc99fa002c
Oct 11 04:10:10 np0005481065 systemd[1]: Started Ceph mgr.compute-0.hcsgrm for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: pidfile_write: ignore empty --pid-file
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.18428274 +0000 UTC m=+0.074052783 container create 3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc (image=quay.io/ceph/ceph:v18, name=awesome_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:10:10 np0005481065 systemd[1]: Started libpod-conmon-3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc.scope.
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'alerts'
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.156543933 +0000 UTC m=+0.046314046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674e377b43445adfa1ab3b52f8c0ee657d1b24671109eb4f39d1875915d788c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674e377b43445adfa1ab3b52f8c0ee657d1b24671109eb4f39d1875915d788c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674e377b43445adfa1ab3b52f8c0ee657d1b24671109eb4f39d1875915d788c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.294697173 +0000 UTC m=+0.184467216 container init 3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc (image=quay.io/ceph/ceph:v18, name=awesome_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.306590781 +0000 UTC m=+0.196360824 container start 3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc (image=quay.io/ceph/ceph:v18, name=awesome_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.310983416 +0000 UTC m=+0.200753459 container attach 3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc (image=quay.io/ceph/ceph:v18, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 04:10:10 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:10.528+0000 7f4f799e6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'balancer'
Oct 11 04:10:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104767282' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]: 
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]: {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "health": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "status": "HEALTH_OK",
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "checks": {},
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "mutes": []
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "election_epoch": 5,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "quorum": [
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        0
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    ],
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "quorum_names": [
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "compute-0"
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    ],
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "quorum_age": 3,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "monmap": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "epoch": 1,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "min_mon_release_name": "reef",
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_mons": 1
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "osdmap": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "epoch": 1,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_osds": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_up_osds": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "osd_up_since": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_in_osds": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "osd_in_since": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_remapped_pgs": 0
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "pgmap": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "pgs_by_state": [],
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_pgs": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_pools": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_objects": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "data_bytes": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "bytes_used": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "bytes_avail": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "bytes_total": 0
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "fsmap": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "epoch": 1,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "by_rank": [],
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "up:standby": 0
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "mgrmap": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "available": false,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "num_standbys": 0,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "modules": [
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:            "iostat",
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:            "nfs",
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:            "restful"
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        ],
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "services": {}
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "servicemap": {
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "epoch": 1,
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:        "services": {}
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    },
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]:    "progress_events": {}
Oct 11 04:10:10 np0005481065 awesome_almeida[74647]: }
Oct 11 04:10:10 np0005481065 systemd[1]: libpod-3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc.scope: Deactivated successfully.
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.755746708 +0000 UTC m=+0.645516761 container died 3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc (image=quay.io/ceph/ceph:v18, name=awesome_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:10:10 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:10.772+0000 7f4f799e6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:10:10 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'cephadm'
Oct 11 04:10:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1674e377b43445adfa1ab3b52f8c0ee657d1b24671109eb4f39d1875915d788c-merged.mount: Deactivated successfully.
Oct 11 04:10:10 np0005481065 podman[74606]: 2025-10-11 08:10:10.807175107 +0000 UTC m=+0.696945120 container remove 3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc (image=quay.io/ceph/ceph:v18, name=awesome_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:10 np0005481065 systemd[1]: libpod-conmon-3bf3a49654bd4fb9ad815f50188977e77b31e2b5c06a428bb678dc06085e8fbc.scope: Deactivated successfully.
Oct 11 04:10:12 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'crash'
Oct 11 04:10:12 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:12.820+0000 7f4f799e6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:10:12 np0005481065 ceph-mgr[74605]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:10:12 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'dashboard'
Oct 11 04:10:12 np0005481065 podman[74696]: 2025-10-11 08:10:12.911332693 +0000 UTC m=+0.074021152 container create 07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c (image=quay.io/ceph/ceph:v18, name=upbeat_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:10:12 np0005481065 systemd[1]: Started libpod-conmon-07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c.scope.
Oct 11 04:10:12 np0005481065 podman[74696]: 2025-10-11 08:10:12.869611949 +0000 UTC m=+0.032300478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d1452c96e6fdfc81809ed4b2cce1f6e0520e07d7ca29ad07e2fcb1309d9e21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d1452c96e6fdfc81809ed4b2cce1f6e0520e07d7ca29ad07e2fcb1309d9e21/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d1452c96e6fdfc81809ed4b2cce1f6e0520e07d7ca29ad07e2fcb1309d9e21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:13 np0005481065 podman[74696]: 2025-10-11 08:10:13.004275871 +0000 UTC m=+0.166964330 container init 07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c (image=quay.io/ceph/ceph:v18, name=upbeat_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:10:13 np0005481065 podman[74696]: 2025-10-11 08:10:13.011774544 +0000 UTC m=+0.174463003 container start 07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c (image=quay.io/ceph/ceph:v18, name=upbeat_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:10:13 np0005481065 podman[74696]: 2025-10-11 08:10:13.02784917 +0000 UTC m=+0.190537629 container attach 07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c (image=quay.io/ceph/ceph:v18, name=upbeat_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398310671' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]: 
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]: {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "health": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "status": "HEALTH_OK",
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "checks": {},
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "mutes": []
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "election_epoch": 5,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "quorum": [
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        0
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    ],
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "quorum_names": [
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "compute-0"
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    ],
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "quorum_age": 5,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "monmap": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "epoch": 1,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "min_mon_release_name": "reef",
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_mons": 1
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "osdmap": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "epoch": 1,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_osds": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_up_osds": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "osd_up_since": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_in_osds": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "osd_in_since": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_remapped_pgs": 0
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "pgmap": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "pgs_by_state": [],
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_pgs": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_pools": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_objects": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "data_bytes": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "bytes_used": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "bytes_avail": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "bytes_total": 0
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "fsmap": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "epoch": 1,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "by_rank": [],
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "up:standby": 0
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "mgrmap": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "available": false,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "num_standbys": 0,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "modules": [
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:            "iostat",
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:            "nfs",
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:            "restful"
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        ],
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "services": {}
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "servicemap": {
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "epoch": 1,
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:        "services": {}
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    },
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]:    "progress_events": {}
Oct 11 04:10:13 np0005481065 upbeat_ishizaka[74713]: }
Oct 11 04:10:13 np0005481065 systemd[1]: libpod-07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c.scope: Deactivated successfully.
Oct 11 04:10:13 np0005481065 podman[74739]: 2025-10-11 08:10:13.462893826 +0000 UTC m=+0.028570522 container died 07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c (image=quay.io/ceph/ceph:v18, name=upbeat_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:10:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-12d1452c96e6fdfc81809ed4b2cce1f6e0520e07d7ca29ad07e2fcb1309d9e21-merged.mount: Deactivated successfully.
Oct 11 04:10:13 np0005481065 podman[74739]: 2025-10-11 08:10:13.526199212 +0000 UTC m=+0.091876418 container remove 07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c (image=quay.io/ceph/ceph:v18, name=upbeat_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:13 np0005481065 systemd[1]: libpod-conmon-07b9715ed22cc9c0f0938ad68fda517921a4c7479388a180acddb157faf5952c.scope: Deactivated successfully.
Oct 11 04:10:14 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'devicehealth'
Oct 11 04:10:14 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:14.399+0000 7f4f799e6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:10:14 np0005481065 ceph-mgr[74605]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:10:14 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 04:10:14 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 04:10:14 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 04:10:14 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]:  from numpy import show_config as show_numpy_config
Oct 11 04:10:14 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:14.903+0000 7f4f799e6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:10:14 np0005481065 ceph-mgr[74605]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:10:14 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'influx'
Oct 11 04:10:15 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:15.134+0000 7f4f799e6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:10:15 np0005481065 ceph-mgr[74605]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:10:15 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'insights'
Oct 11 04:10:15 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'iostat'
Oct 11 04:10:15 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:15.614+0000 7f4f799e6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:10:15 np0005481065 ceph-mgr[74605]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:10:15 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'k8sevents'
Oct 11 04:10:15 np0005481065 podman[74753]: 2025-10-11 08:10:15.626656212 +0000 UTC m=+0.062080472 container create 6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f (image=quay.io/ceph/ceph:v18, name=gifted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:10:15 np0005481065 systemd[1]: Started libpod-conmon-6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f.scope.
Oct 11 04:10:15 np0005481065 podman[74753]: 2025-10-11 08:10:15.597073163 +0000 UTC m=+0.032497463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467399a4384a4ec3a78b470e7f303a20f90d806b1e12eb8f516e0e8c93920f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467399a4384a4ec3a78b470e7f303a20f90d806b1e12eb8f516e0e8c93920f73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467399a4384a4ec3a78b470e7f303a20f90d806b1e12eb8f516e0e8c93920f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:15 np0005481065 podman[74753]: 2025-10-11 08:10:15.749607252 +0000 UTC m=+0.185031492 container init 6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f (image=quay.io/ceph/ceph:v18, name=gifted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:10:15 np0005481065 podman[74753]: 2025-10-11 08:10:15.758093843 +0000 UTC m=+0.193518073 container start 6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f (image=quay.io/ceph/ceph:v18, name=gifted_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:15 np0005481065 podman[74753]: 2025-10-11 08:10:15.762808546 +0000 UTC m=+0.198232786 container attach 6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f (image=quay.io/ceph/ceph:v18, name=gifted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028084702' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]: 
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]: {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "health": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "status": "HEALTH_OK",
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "checks": {},
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "mutes": []
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "election_epoch": 5,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "quorum": [
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        0
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    ],
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "quorum_names": [
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "compute-0"
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    ],
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "quorum_age": 8,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "monmap": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "epoch": 1,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "min_mon_release_name": "reef",
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_mons": 1
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "osdmap": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "epoch": 1,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_osds": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_up_osds": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "osd_up_since": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_in_osds": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "osd_in_since": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_remapped_pgs": 0
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "pgmap": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "pgs_by_state": [],
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_pgs": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_pools": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_objects": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "data_bytes": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "bytes_used": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "bytes_avail": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "bytes_total": 0
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "fsmap": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "epoch": 1,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "by_rank": [],
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "up:standby": 0
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "mgrmap": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "available": false,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "num_standbys": 0,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "modules": [
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:            "iostat",
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:            "nfs",
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:            "restful"
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        ],
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "services": {}
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "servicemap": {
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "epoch": 1,
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:        "services": {}
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    },
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]:    "progress_events": {}
Oct 11 04:10:16 np0005481065 gifted_shannon[74769]: }
Oct 11 04:10:16 np0005481065 systemd[1]: libpod-6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f.scope: Deactivated successfully.
Oct 11 04:10:16 np0005481065 podman[74753]: 2025-10-11 08:10:16.228161453 +0000 UTC m=+0.663585683 container died 6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f (image=quay.io/ceph/ceph:v18, name=gifted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:10:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-467399a4384a4ec3a78b470e7f303a20f90d806b1e12eb8f516e0e8c93920f73-merged.mount: Deactivated successfully.
Oct 11 04:10:16 np0005481065 podman[74753]: 2025-10-11 08:10:16.27175934 +0000 UTC m=+0.707183560 container remove 6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f (image=quay.io/ceph/ceph:v18, name=gifted_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:10:16 np0005481065 systemd[1]: libpod-conmon-6795e617efc8d7007027d87c383bc3f0f6db3df78e25cf72455f7ca96a3c071f.scope: Deactivated successfully.
Oct 11 04:10:17 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'localpool'
Oct 11 04:10:17 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'mds_autoscaler'
Oct 11 04:10:18 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'mirroring'
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.35848385 +0000 UTC m=+0.061619059 container create a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26 (image=quay.io/ceph/ceph:v18, name=confident_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:18 np0005481065 systemd[1]: Started libpod-conmon-a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26.scope.
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.32711394 +0000 UTC m=+0.030249219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4fce77b943e78d7851c904a3a978e5dcd631c4f82289da2ac416ed52ea412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4fce77b943e78d7851c904a3a978e5dcd631c4f82289da2ac416ed52ea412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4fce77b943e78d7851c904a3a978e5dcd631c4f82289da2ac416ed52ea412/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.447727623 +0000 UTC m=+0.150862832 container init a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26 (image=quay.io/ceph/ceph:v18, name=confident_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.45607926 +0000 UTC m=+0.159214429 container start a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26 (image=quay.io/ceph/ceph:v18, name=confident_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.459408764 +0000 UTC m=+0.162543943 container attach a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26 (image=quay.io/ceph/ceph:v18, name=confident_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:18 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'nfs'
Oct 11 04:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240569702' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]: 
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]: {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "health": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "status": "HEALTH_OK",
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "checks": {},
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "mutes": []
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "election_epoch": 5,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "quorum": [
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        0
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    ],
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "quorum_names": [
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "compute-0"
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    ],
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "quorum_age": 11,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "monmap": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "epoch": 1,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "min_mon_release_name": "reef",
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_mons": 1
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "osdmap": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "epoch": 1,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_osds": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_up_osds": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "osd_up_since": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_in_osds": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "osd_in_since": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_remapped_pgs": 0
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "pgmap": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "pgs_by_state": [],
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_pgs": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_pools": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_objects": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "data_bytes": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "bytes_used": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "bytes_avail": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "bytes_total": 0
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "fsmap": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "epoch": 1,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "by_rank": [],
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "up:standby": 0
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "mgrmap": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "available": false,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "num_standbys": 0,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "modules": [
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:            "iostat",
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:            "nfs",
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:            "restful"
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        ],
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "services": {}
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "servicemap": {
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "epoch": 1,
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:        "services": {}
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    },
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]:    "progress_events": {}
Oct 11 04:10:18 np0005481065 confident_mccarthy[74824]: }
Oct 11 04:10:18 np0005481065 systemd[1]: libpod-a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26.scope: Deactivated successfully.
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.848460816 +0000 UTC m=+0.551596025 container died a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26 (image=quay.io/ceph/ceph:v18, name=confident_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:10:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-35d4fce77b943e78d7851c904a3a978e5dcd631c4f82289da2ac416ed52ea412-merged.mount: Deactivated successfully.
Oct 11 04:10:18 np0005481065 podman[74807]: 2025-10-11 08:10:18.904253599 +0000 UTC m=+0.607388788 container remove a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26 (image=quay.io/ceph/ceph:v18, name=confident_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:10:18 np0005481065 systemd[1]: libpod-conmon-a3ae28de507f37d0a09d5210ff680580f124cd24d4a12636e2b0af0a0ba68e26.scope: Deactivated successfully.
Oct 11 04:10:19 np0005481065 ceph-mgr[74605]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:10:19 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'orchestrator'
Oct 11 04:10:19 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:19.236+0000 7f4f799e6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:10:19 np0005481065 ceph-mgr[74605]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:19 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'osd_perf_query'
Oct 11 04:10:19 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:19.847+0000 7f4f799e6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'osd_support'
Oct 11 04:10:20 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:20.090+0000 7f4f799e6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'pg_autoscaler'
Oct 11 04:10:20 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:20.363+0000 7f4f799e6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'progress'
Oct 11 04:10:20 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:20.628+0000 7f4f799e6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:20.862+0000 7f4f799e6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:10:20 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'prometheus'
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:21.005607594 +0000 UTC m=+0.064068399 container create 638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e (image=quay.io/ceph/ceph:v18, name=sad_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:10:21 np0005481065 systemd[1]: Started libpod-conmon-638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e.scope.
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:20.977467135 +0000 UTC m=+0.035927990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f048cdc75f0f0ad311a5c7976f0e002b4f57509747157aa12f2defbb089b918f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f048cdc75f0f0ad311a5c7976f0e002b4f57509747157aa12f2defbb089b918f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f048cdc75f0f0ad311a5c7976f0e002b4f57509747157aa12f2defbb089b918f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:21.115955686 +0000 UTC m=+0.174416541 container init 638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e (image=quay.io/ceph/ceph:v18, name=sad_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:21.125019703 +0000 UTC m=+0.183480518 container start 638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e (image=quay.io/ceph/ceph:v18, name=sad_rubin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:21.129095409 +0000 UTC m=+0.187556224 container attach 638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e (image=quay.io/ceph/ceph:v18, name=sad_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2061908614' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:21 np0005481065 sad_rubin[74879]: 
Oct 11 04:10:21 np0005481065 sad_rubin[74879]: {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "health": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "status": "HEALTH_OK",
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "checks": {},
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "mutes": []
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "election_epoch": 5,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "quorum": [
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        0
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    ],
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "quorum_names": [
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "compute-0"
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    ],
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "quorum_age": 13,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "monmap": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "epoch": 1,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "min_mon_release_name": "reef",
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_mons": 1
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "osdmap": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "epoch": 1,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_osds": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_up_osds": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "osd_up_since": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_in_osds": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "osd_in_since": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_remapped_pgs": 0
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "pgmap": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "pgs_by_state": [],
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_pgs": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_pools": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_objects": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "data_bytes": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "bytes_used": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "bytes_avail": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "bytes_total": 0
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "fsmap": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "epoch": 1,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "by_rank": [],
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "up:standby": 0
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "mgrmap": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "available": false,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "num_standbys": 0,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "modules": [
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:            "iostat",
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:            "nfs",
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:            "restful"
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        ],
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "services": {}
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "servicemap": {
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "epoch": 1,
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:        "services": {}
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    },
Oct 11 04:10:21 np0005481065 sad_rubin[74879]:    "progress_events": {}
Oct 11 04:10:21 np0005481065 sad_rubin[74879]: }
Oct 11 04:10:21 np0005481065 systemd[1]: libpod-638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e.scope: Deactivated successfully.
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:21.575769715 +0000 UTC m=+0.634230530 container died 638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e (image=quay.io/ceph/ceph:v18, name=sad_rubin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f048cdc75f0f0ad311a5c7976f0e002b4f57509747157aa12f2defbb089b918f-merged.mount: Deactivated successfully.
Oct 11 04:10:21 np0005481065 podman[74862]: 2025-10-11 08:10:21.630135248 +0000 UTC m=+0.688596053 container remove 638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e (image=quay.io/ceph/ceph:v18, name=sad_rubin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:21 np0005481065 systemd[1]: libpod-conmon-638cd3c98341c04b6bfd84f226d656749085125d2825a9e277fbc5da7786823e.scope: Deactivated successfully.
Oct 11 04:10:21 np0005481065 ceph-mgr[74605]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:10:21 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'rbd_support'
Oct 11 04:10:21 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:21.834+0000 7f4f799e6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:10:22 np0005481065 ceph-mgr[74605]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:10:22 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:22.118+0000 7f4f799e6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:10:22 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'restful'
Oct 11 04:10:22 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'rgw'
Oct 11 04:10:23 np0005481065 ceph-mgr[74605]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:10:23 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'rook'
Oct 11 04:10:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:23.590+0000 7f4f799e6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:10:23 np0005481065 podman[74917]: 2025-10-11 08:10:23.726456151 +0000 UTC m=+0.063339858 container create 8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b (image=quay.io/ceph/ceph:v18, name=sleepy_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:10:23 np0005481065 systemd[1]: Started libpod-conmon-8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b.scope.
Oct 11 04:10:23 np0005481065 podman[74917]: 2025-10-11 08:10:23.700105883 +0000 UTC m=+0.036989640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6a00cd4c6812359a9086192c20b924765f739162dd2f652f6d042d9fd27f52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6a00cd4c6812359a9086192c20b924765f739162dd2f652f6d042d9fd27f52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6a00cd4c6812359a9086192c20b924765f739162dd2f652f6d042d9fd27f52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:23 np0005481065 podman[74917]: 2025-10-11 08:10:23.826901842 +0000 UTC m=+0.163785599 container init 8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b (image=quay.io/ceph/ceph:v18, name=sleepy_edison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:23 np0005481065 podman[74917]: 2025-10-11 08:10:23.83565474 +0000 UTC m=+0.172538457 container start 8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b (image=quay.io/ceph/ceph:v18, name=sleepy_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:23 np0005481065 podman[74917]: 2025-10-11 08:10:23.839436758 +0000 UTC m=+0.176320465 container attach 8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b (image=quay.io/ceph/ceph:v18, name=sleepy_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:10:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820890276' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]: 
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]: {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "health": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "status": "HEALTH_OK",
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "checks": {},
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "mutes": []
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "election_epoch": 5,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "quorum": [
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        0
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    ],
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "quorum_names": [
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "compute-0"
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    ],
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "quorum_age": 16,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "monmap": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "epoch": 1,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "min_mon_release_name": "reef",
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_mons": 1
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "osdmap": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "epoch": 1,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_osds": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_up_osds": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "osd_up_since": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_in_osds": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "osd_in_since": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_remapped_pgs": 0
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "pgmap": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "pgs_by_state": [],
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_pgs": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_pools": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_objects": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "data_bytes": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "bytes_used": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "bytes_avail": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "bytes_total": 0
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "fsmap": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "epoch": 1,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "by_rank": [],
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "up:standby": 0
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "mgrmap": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "available": false,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "num_standbys": 0,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "modules": [
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:            "iostat",
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:            "nfs",
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:            "restful"
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        ],
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "services": {}
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "servicemap": {
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "epoch": 1,
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:        "services": {}
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    },
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]:    "progress_events": {}
Oct 11 04:10:24 np0005481065 sleepy_edison[74933]: }
Oct 11 04:10:24 np0005481065 systemd[1]: libpod-8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b.scope: Deactivated successfully.
Oct 11 04:10:24 np0005481065 podman[74917]: 2025-10-11 08:10:24.28599838 +0000 UTC m=+0.622882087 container died 8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b (image=quay.io/ceph/ceph:v18, name=sleepy_edison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ab6a00cd4c6812359a9086192c20b924765f739162dd2f652f6d042d9fd27f52-merged.mount: Deactivated successfully.
Oct 11 04:10:24 np0005481065 podman[74917]: 2025-10-11 08:10:24.35084643 +0000 UTC m=+0.687730147 container remove 8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b (image=quay.io/ceph/ceph:v18, name=sleepy_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:10:24 np0005481065 systemd[1]: libpod-conmon-8e5c8d8aa464235691d8cbbd88ce6a9f8e8db5bcb9aa5baabb23d25e597c340b.scope: Deactivated successfully.
Oct 11 04:10:25 np0005481065 ceph-mgr[74605]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:10:25 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'selftest'
Oct 11 04:10:25 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:25.647+0000 7f4f799e6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:10:25 np0005481065 ceph-mgr[74605]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:10:25 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'snap_schedule'
Oct 11 04:10:25 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:25.875+0000 7f4f799e6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'stats'
Oct 11 04:10:26 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:26.101+0000 7f4f799e6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'status'
Oct 11 04:10:26 np0005481065 podman[74970]: 2025-10-11 08:10:26.450322003 +0000 UTC m=+0.069221935 container create 041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe (image=quay.io/ceph/ceph:v18, name=modest_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:10:26 np0005481065 systemd[1]: Started libpod-conmon-041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe.scope.
Oct 11 04:10:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4bb07779a27d419ddcd6e8eaa7445deb812d28740da2b076254409a74dc5790/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4bb07779a27d419ddcd6e8eaa7445deb812d28740da2b076254409a74dc5790/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4bb07779a27d419ddcd6e8eaa7445deb812d28740da2b076254409a74dc5790/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:26 np0005481065 podman[74970]: 2025-10-11 08:10:26.421270479 +0000 UTC m=+0.040170491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:26 np0005481065 podman[74970]: 2025-10-11 08:10:26.528960825 +0000 UTC m=+0.147860827 container init 041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe (image=quay.io/ceph/ceph:v18, name=modest_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:10:26 np0005481065 podman[74970]: 2025-10-11 08:10:26.540234335 +0000 UTC m=+0.159134307 container start 041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe (image=quay.io/ceph/ceph:v18, name=modest_hopper, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:26 np0005481065 podman[74970]: 2025-10-11 08:10:26.544417324 +0000 UTC m=+0.163317286 container attach 041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe (image=quay.io/ceph/ceph:v18, name=modest_hopper, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'telegraf'
Oct 11 04:10:26 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:26.570+0000 7f4f799e6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'telemetry'
Oct 11 04:10:26 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:26.790+0000 7f4f799e6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:10:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723964856' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:26 np0005481065 modest_hopper[74986]: 
Oct 11 04:10:26 np0005481065 modest_hopper[74986]: {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "health": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "status": "HEALTH_OK",
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "checks": {},
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "mutes": []
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "election_epoch": 5,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "quorum": [
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        0
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    ],
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "quorum_names": [
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "compute-0"
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    ],
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "quorum_age": 19,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "monmap": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "epoch": 1,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "min_mon_release_name": "reef",
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_mons": 1
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "osdmap": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "epoch": 1,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_osds": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_up_osds": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "osd_up_since": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_in_osds": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "osd_in_since": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_remapped_pgs": 0
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "pgmap": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "pgs_by_state": [],
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_pgs": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_pools": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_objects": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "data_bytes": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "bytes_used": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "bytes_avail": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "bytes_total": 0
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "fsmap": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "epoch": 1,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "by_rank": [],
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "up:standby": 0
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "mgrmap": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "available": false,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "num_standbys": 0,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "modules": [
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:            "iostat",
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:            "nfs",
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:            "restful"
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        ],
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "services": {}
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "servicemap": {
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "epoch": 1,
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:        "services": {}
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    },
Oct 11 04:10:26 np0005481065 modest_hopper[74986]:    "progress_events": {}
Oct 11 04:10:26 np0005481065 modest_hopper[74986]: }
Oct 11 04:10:26 np0005481065 systemd[1]: libpod-041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe.scope: Deactivated successfully.
Oct 11 04:10:26 np0005481065 podman[74970]: 2025-10-11 08:10:26.97099297 +0000 UTC m=+0.589892962 container died 041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe (image=quay.io/ceph/ceph:v18, name=modest_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:10:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c4bb07779a27d419ddcd6e8eaa7445deb812d28740da2b076254409a74dc5790-merged.mount: Deactivated successfully.
Oct 11 04:10:27 np0005481065 podman[74970]: 2025-10-11 08:10:27.029045398 +0000 UTC m=+0.647945370 container remove 041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe (image=quay.io/ceph/ceph:v18, name=modest_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:27 np0005481065 systemd[1]: libpod-conmon-041e58f67f60178be4aed6e028d21cc21663fc9c15de7fce7df5a0a2a73fbffe.scope: Deactivated successfully.
Oct 11 04:10:27 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:27.367+0000 7f4f799e6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:10:27 np0005481065 ceph-mgr[74605]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:10:27 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'test_orchestrator'
Oct 11 04:10:28 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:28.005+0000 7f4f799e6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'volumes'
Oct 11 04:10:28 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:28.724+0000 7f4f799e6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'zabbix'
Oct 11 04:10:28 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:28.945+0000 7f4f799e6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: ms_deliver_dispatch: unhandled message 0x558b1fc991e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hcsgrm
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr handle_mgr_map Activating!
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr handle_mgr_map I am now activating
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.hcsgrm(active, starting, since 0.0130461s)
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e1 all = 1
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hcsgrm", "id": "compute-0.hcsgrm"} v 0) v1
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hcsgrm", "id": "compute-0.hcsgrm"}]: dispatch
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: balancer
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: crash
Oct 11 04:10:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Manager daemon compute-0.hcsgrm is now available
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [balancer INFO root] Starting
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: devicehealth
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:10:28
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [balancer INFO root] No pools available
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: iostat
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Starting
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: nfs
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: orchestrator
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: pg_autoscaler
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: progress
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [progress INFO root] Loading...
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [progress INFO root] No stored events to load
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [progress INFO root] Loaded [] historic events
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [progress INFO root] Loaded OSDMap, ready.
Oct 11 04:10:28 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] recovery thread starting
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] starting setup
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: rbd_support
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: restful
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [restful INFO root] server_addr: :: server_port: 8003
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: status
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [restful WARNING root] server not running: no certificate configured
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/mirror_snapshot_schedule"} v 0) v1
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: telemetry
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: Activating manager daemon compute-0.hcsgrm
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: Manager daemon compute-0.hcsgrm is now available
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] PerfHandler: starting
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TaskHandler: starting
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/trash_purge_schedule"} v 0) v1
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/trash_purge_schedule"}]: dispatch
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] setup complete
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 11 04:10:29 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: volumes
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:29 np0005481065 podman[75105]: 2025-10-11 08:10:29.136428619 +0000 UTC m=+0.071441750 container create 2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5 (image=quay.io/ceph/ceph:v18, name=hopeful_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:29 np0005481065 systemd[1]: Started libpod-conmon-2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5.scope.
Oct 11 04:10:29 np0005481065 podman[75105]: 2025-10-11 08:10:29.107258081 +0000 UTC m=+0.042271262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2602772d370e00979846a6e5b31eef8f70a0ac114e0c4a6a0394f675f5af0dfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2602772d370e00979846a6e5b31eef8f70a0ac114e0c4a6a0394f675f5af0dfb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2602772d370e00979846a6e5b31eef8f70a0ac114e0c4a6a0394f675f5af0dfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:29 np0005481065 podman[75105]: 2025-10-11 08:10:29.240805472 +0000 UTC m=+0.175818653 container init 2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5 (image=quay.io/ceph/ceph:v18, name=hopeful_germain, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:10:29 np0005481065 podman[75105]: 2025-10-11 08:10:29.250261603 +0000 UTC m=+0.185274704 container start 2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5 (image=quay.io/ceph/ceph:v18, name=hopeful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:10:29 np0005481065 podman[75105]: 2025-10-11 08:10:29.254192664 +0000 UTC m=+0.189205755 container attach 2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5 (image=quay.io/ceph/ceph:v18, name=hopeful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1612184023' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]: 
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]: {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "health": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "status": "HEALTH_OK",
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "checks": {},
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "mutes": []
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "election_epoch": 5,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "quorum": [
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        0
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    ],
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "quorum_names": [
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "compute-0"
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    ],
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "quorum_age": 22,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "monmap": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "epoch": 1,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "min_mon_release_name": "reef",
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_mons": 1
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "osdmap": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "epoch": 1,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_osds": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_up_osds": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "osd_up_since": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_in_osds": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "osd_in_since": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_remapped_pgs": 0
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "pgmap": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "pgs_by_state": [],
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_pgs": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_pools": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_objects": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "data_bytes": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "bytes_used": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "bytes_avail": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "bytes_total": 0
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "fsmap": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "epoch": 1,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "by_rank": [],
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "up:standby": 0
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "mgrmap": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "available": false,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "num_standbys": 0,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "modules": [
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:            "iostat",
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:            "nfs",
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:            "restful"
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        ],
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "services": {}
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "servicemap": {
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "epoch": 1,
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:        "services": {}
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    },
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]:    "progress_events": {}
Oct 11 04:10:29 np0005481065 hopeful_germain[75122]: }
Oct 11 04:10:29 np0005481065 systemd[1]: libpod-2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5.scope: Deactivated successfully.
Oct 11 04:10:29 np0005481065 podman[75148]: 2025-10-11 08:10:29.755944359 +0000 UTC m=+0.040435746 container died 2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5 (image=quay.io/ceph/ceph:v18, name=hopeful_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2602772d370e00979846a6e5b31eef8f70a0ac114e0c4a6a0394f675f5af0dfb-merged.mount: Deactivated successfully.
Oct 11 04:10:29 np0005481065 podman[75148]: 2025-10-11 08:10:29.806168494 +0000 UTC m=+0.090659891 container remove 2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5 (image=quay.io/ceph/ceph:v18, name=hopeful_germain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:10:29 np0005481065 systemd[1]: libpod-conmon-2cd39c7e560706c9e11bbbaccd379beb28deef1b8d22bc4a905589534b82a7f5.scope: Deactivated successfully.
Oct 11 04:10:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.hcsgrm(active, since 1.02796s)
Oct 11 04:10:30 np0005481065 ceph-mon[74313]: from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:10:30 np0005481065 ceph-mon[74313]: from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/trash_purge_schedule"}]: dispatch
Oct 11 04:10:30 np0005481065 ceph-mon[74313]: from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:30 np0005481065 ceph-mon[74313]: from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:30 np0005481065 ceph-mon[74313]: from='mgr.14102 192.168.122.100:0/1845290802' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:30 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:10:31 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.hcsgrm(active, since 2s)
Oct 11 04:10:31 np0005481065 podman[75163]: 2025-10-11 08:10:31.914440678 +0000 UTC m=+0.066422615 container create 7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099 (image=quay.io/ceph/ceph:v18, name=eloquent_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:31 np0005481065 systemd[1]: Started libpod-conmon-7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099.scope.
Oct 11 04:10:31 np0005481065 podman[75163]: 2025-10-11 08:10:31.887095996 +0000 UTC m=+0.039077983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e699227e1ee7ea2157b370c19b23121756334bfd289657908de3fe163b977c06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e699227e1ee7ea2157b370c19b23121756334bfd289657908de3fe163b977c06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e699227e1ee7ea2157b370c19b23121756334bfd289657908de3fe163b977c06/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:32 np0005481065 podman[75163]: 2025-10-11 08:10:32.03275422 +0000 UTC m=+0.184736197 container init 7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099 (image=quay.io/ceph/ceph:v18, name=eloquent_chatelet, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:10:32 np0005481065 podman[75163]: 2025-10-11 08:10:32.04249956 +0000 UTC m=+0.194481497 container start 7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099 (image=quay.io/ceph/ceph:v18, name=eloquent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:32 np0005481065 podman[75163]: 2025-10-11 08:10:32.046804442 +0000 UTC m=+0.198786369 container attach 7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099 (image=quay.io/ceph/ceph:v18, name=eloquent_chatelet, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:10:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 11 04:10:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198860451' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]: 
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]: {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "health": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "status": "HEALTH_OK",
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "checks": {},
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "mutes": []
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "election_epoch": 5,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "quorum": [
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        0
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    ],
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "quorum_names": [
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "compute-0"
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    ],
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "quorum_age": 25,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "monmap": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "epoch": 1,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "min_mon_release_name": "reef",
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_mons": 1
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "osdmap": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "epoch": 1,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_osds": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_up_osds": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "osd_up_since": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_in_osds": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "osd_in_since": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_remapped_pgs": 0
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "pgmap": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "pgs_by_state": [],
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_pgs": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_pools": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_objects": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "data_bytes": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "bytes_used": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "bytes_avail": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "bytes_total": 0
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "fsmap": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "epoch": 1,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "by_rank": [],
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "up:standby": 0
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "mgrmap": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "available": true,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "num_standbys": 0,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "modules": [
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:            "iostat",
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:            "nfs",
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:            "restful"
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        ],
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "services": {}
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "servicemap": {
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "epoch": 1,
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "modified": "2025-10-11T08:10:04.514161+0000",
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:        "services": {}
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    },
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]:    "progress_events": {}
Oct 11 04:10:32 np0005481065 eloquent_chatelet[75179]: }
Oct 11 04:10:32 np0005481065 systemd[1]: libpod-7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099.scope: Deactivated successfully.
Oct 11 04:10:32 np0005481065 podman[75163]: 2025-10-11 08:10:32.628378394 +0000 UTC m=+0.780360331 container died 7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099 (image=quay.io/ceph/ceph:v18, name=eloquent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:32 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:10:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e699227e1ee7ea2157b370c19b23121756334bfd289657908de3fe163b977c06-merged.mount: Deactivated successfully.
Oct 11 04:10:33 np0005481065 podman[75163]: 2025-10-11 08:10:33.819219329 +0000 UTC m=+1.971201266 container remove 7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099 (image=quay.io/ceph/ceph:v18, name=eloquent_chatelet, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:10:33 np0005481065 systemd[1]: libpod-conmon-7bfb15af627aee4a3608f8f2e9dcb923d665567c12bccf4b09adfae5ddb80099.scope: Deactivated successfully.
Oct 11 04:10:33 np0005481065 podman[75218]: 2025-10-11 08:10:33.911318054 +0000 UTC m=+0.062116843 container create d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1 (image=quay.io/ceph/ceph:v18, name=elastic_diffie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:33 np0005481065 systemd[1]: Started libpod-conmon-d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1.scope.
Oct 11 04:10:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:33 np0005481065 podman[75218]: 2025-10-11 08:10:33.884180028 +0000 UTC m=+0.034978897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d430c17a98f2be0aedf3f03c5de5ee5e04bd2696ee9b67601a55412c3ab0ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d430c17a98f2be0aedf3f03c5de5ee5e04bd2696ee9b67601a55412c3ab0ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d430c17a98f2be0aedf3f03c5de5ee5e04bd2696ee9b67601a55412c3ab0ca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d430c17a98f2be0aedf3f03c5de5ee5e04bd2696ee9b67601a55412c3ab0ca/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:34 np0005481065 podman[75218]: 2025-10-11 08:10:34.001559902 +0000 UTC m=+0.152358771 container init d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1 (image=quay.io/ceph/ceph:v18, name=elastic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:10:34 np0005481065 podman[75218]: 2025-10-11 08:10:34.017356348 +0000 UTC m=+0.168155147 container start d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1 (image=quay.io/ceph/ceph:v18, name=elastic_diffie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:34 np0005481065 podman[75218]: 2025-10-11 08:10:34.02165533 +0000 UTC m=+0.172454159 container attach d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1 (image=quay.io/ceph/ceph:v18, name=elastic_diffie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 04:10:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2028300244' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:10:34 np0005481065 systemd[1]: libpod-d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1.scope: Deactivated successfully.
Oct 11 04:10:34 np0005481065 podman[75218]: 2025-10-11 08:10:34.569223925 +0000 UTC m=+0.720022754 container died d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1 (image=quay.io/ceph/ceph:v18, name=elastic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:10:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-89d430c17a98f2be0aedf3f03c5de5ee5e04bd2696ee9b67601a55412c3ab0ca-merged.mount: Deactivated successfully.
Oct 11 04:10:34 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2028300244' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:10:34 np0005481065 podman[75218]: 2025-10-11 08:10:34.621496684 +0000 UTC m=+0.772295513 container remove d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1 (image=quay.io/ceph/ceph:v18, name=elastic_diffie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:34 np0005481065 systemd[1]: libpod-conmon-d8a00f009635d366583be72cb5b234ae5e2c1837d975b45de95a88c8f47f35d1.scope: Deactivated successfully.
Oct 11 04:10:34 np0005481065 podman[75272]: 2025-10-11 08:10:34.716850299 +0000 UTC m=+0.063908048 container create e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844 (image=quay.io/ceph/ceph:v18, name=heuristic_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:10:34 np0005481065 systemd[1]: Started libpod-conmon-e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844.scope.
Oct 11 04:10:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:34 np0005481065 podman[75272]: 2025-10-11 08:10:34.69055369 +0000 UTC m=+0.037611499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/390eef57625cf8c1348e7f34c7e6b9281fc858dcf8b01079903a9353c16067df/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/390eef57625cf8c1348e7f34c7e6b9281fc858dcf8b01079903a9353c16067df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/390eef57625cf8c1348e7f34c7e6b9281fc858dcf8b01079903a9353c16067df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:34 np0005481065 podman[75272]: 2025-10-11 08:10:34.808050196 +0000 UTC m=+0.155107985 container init e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844 (image=quay.io/ceph/ceph:v18, name=heuristic_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:10:34 np0005481065 podman[75272]: 2025-10-11 08:10:34.817183017 +0000 UTC m=+0.164240766 container start e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844 (image=quay.io/ceph/ceph:v18, name=heuristic_dirac, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:34 np0005481065 podman[75272]: 2025-10-11 08:10:34.821530411 +0000 UTC m=+0.168588160 container attach e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844 (image=quay.io/ceph/ceph:v18, name=heuristic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:10:34 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:10:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 11 04:10:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2045983432' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 11 04:10:35 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2045983432' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 11 04:10:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2045983432' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 11 04:10:35 np0005481065 ceph-mgr[74605]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 11 04:10:35 np0005481065 ceph-mgr[74605]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 11 04:10:35 np0005481065 ceph-mgr[74605]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 11 04:10:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.hcsgrm(active, since 6s)
Oct 11 04:10:35 np0005481065 systemd[1]: libpod-e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844.scope: Deactivated successfully.
Oct 11 04:10:35 np0005481065 podman[75315]: 2025-10-11 08:10:35.729964283 +0000 UTC m=+0.042509050 container died e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844 (image=quay.io/ceph/ceph:v18, name=heuristic_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:10:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-390eef57625cf8c1348e7f34c7e6b9281fc858dcf8b01079903a9353c16067df-merged.mount: Deactivated successfully.
Oct 11 04:10:35 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: ignoring --setuser ceph since I am not root
Oct 11 04:10:35 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: ignoring --setgroup ceph since I am not root
Oct 11 04:10:35 np0005481065 ceph-mgr[74605]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 04:10:35 np0005481065 ceph-mgr[74605]: pidfile_write: ignore empty --pid-file
Oct 11 04:10:35 np0005481065 podman[75315]: 2025-10-11 08:10:35.787577786 +0000 UTC m=+0.100122503 container remove e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844 (image=quay.io/ceph/ceph:v18, name=heuristic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:10:35 np0005481065 systemd[1]: libpod-conmon-e955b0edc3ffcc1cb1a15e89725f63b3b6e47390891252b41e019be979ecf844.scope: Deactivated successfully.
Oct 11 04:10:35 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'alerts'
Oct 11 04:10:35 np0005481065 podman[75355]: 2025-10-11 08:10:35.891281698 +0000 UTC m=+0.063911008 container create f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:35 np0005481065 systemd[1]: Started libpod-conmon-f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da.scope.
Oct 11 04:10:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aac7b5b3347246bdecac10dbe84a7209ddb8ae1d648acc66f0dc050db797c32/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aac7b5b3347246bdecac10dbe84a7209ddb8ae1d648acc66f0dc050db797c32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aac7b5b3347246bdecac10dbe84a7209ddb8ae1d648acc66f0dc050db797c32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:35 np0005481065 podman[75355]: 2025-10-11 08:10:35.870715615 +0000 UTC m=+0.043344935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:35 np0005481065 podman[75355]: 2025-10-11 08:10:35.973011884 +0000 UTC m=+0.145641244 container init f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:10:35 np0005481065 podman[75355]: 2025-10-11 08:10:35.9858913 +0000 UTC m=+0.158520580 container start f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 11 04:10:35 np0005481065 podman[75355]: 2025-10-11 08:10:35.989863263 +0000 UTC m=+0.162492643 container attach f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:36 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:36.159+0000 7f7f7d8d1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:10:36 np0005481065 ceph-mgr[74605]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:10:36 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'balancer'
Oct 11 04:10:36 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:36.395+0000 7f7f7d8d1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:10:36 np0005481065 ceph-mgr[74605]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:10:36 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'cephadm'
Oct 11 04:10:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 04:10:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066215337' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 04:10:36 np0005481065 brave_aryabhata[75372]: {
Oct 11 04:10:36 np0005481065 brave_aryabhata[75372]:    "epoch": 5,
Oct 11 04:10:36 np0005481065 brave_aryabhata[75372]:    "available": true,
Oct 11 04:10:36 np0005481065 brave_aryabhata[75372]:    "active_name": "compute-0.hcsgrm",
Oct 11 04:10:36 np0005481065 brave_aryabhata[75372]:    "num_standby": 0
Oct 11 04:10:36 np0005481065 brave_aryabhata[75372]: }
Oct 11 04:10:36 np0005481065 systemd[1]: libpod-f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da.scope: Deactivated successfully.
Oct 11 04:10:36 np0005481065 podman[75355]: 2025-10-11 08:10:36.573757195 +0000 UTC m=+0.746386495 container died f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:10:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2aac7b5b3347246bdecac10dbe84a7209ddb8ae1d648acc66f0dc050db797c32-merged.mount: Deactivated successfully.
Oct 11 04:10:36 np0005481065 podman[75355]: 2025-10-11 08:10:36.630344477 +0000 UTC m=+0.802973747 container remove f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:36 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2045983432' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 11 04:10:36 np0005481065 systemd[1]: libpod-conmon-f15dbb48055c5d2cfa5a655ae2ebf79232cd2e1ec2fbf69b6822bfdc6cd784da.scope: Deactivated successfully.
Oct 11 04:10:36 np0005481065 podman[75411]: 2025-10-11 08:10:36.686264848 +0000 UTC m=+0.035719161 container create 9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b (image=quay.io/ceph/ceph:v18, name=boring_bassi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:10:36 np0005481065 systemd[1]: Started libpod-conmon-9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b.scope.
Oct 11 04:10:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351864cb69915edb1595f8ddbeb062ded1add9fbca2629d854fe38e8a2e844c0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351864cb69915edb1595f8ddbeb062ded1add9fbca2629d854fe38e8a2e844c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351864cb69915edb1595f8ddbeb062ded1add9fbca2629d854fe38e8a2e844c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:36 np0005481065 podman[75411]: 2025-10-11 08:10:36.7655937 +0000 UTC m=+0.115048032 container init 9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b (image=quay.io/ceph/ceph:v18, name=boring_bassi, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:36 np0005481065 podman[75411]: 2025-10-11 08:10:36.669546714 +0000 UTC m=+0.019001046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:36 np0005481065 podman[75411]: 2025-10-11 08:10:36.774639379 +0000 UTC m=+0.124093721 container start 9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b (image=quay.io/ceph/ceph:v18, name=boring_bassi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:10:36 np0005481065 podman[75411]: 2025-10-11 08:10:36.779043404 +0000 UTC m=+0.128497766 container attach 9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b (image=quay.io/ceph/ceph:v18, name=boring_bassi, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:38 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'crash'
Oct 11 04:10:38 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:38.525+0000 7f7f7d8d1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:10:38 np0005481065 ceph-mgr[74605]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:10:38 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'dashboard'
Oct 11 04:10:39 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'devicehealth'
Oct 11 04:10:40 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:40.144+0000 7f7f7d8d1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:10:40 np0005481065 ceph-mgr[74605]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:10:40 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 04:10:40 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 04:10:40 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 04:10:40 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]:  from numpy import show_config as show_numpy_config
Oct 11 04:10:40 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:40.689+0000 7f7f7d8d1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:10:40 np0005481065 ceph-mgr[74605]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:10:40 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'influx'
Oct 11 04:10:40 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:40.920+0000 7f7f7d8d1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:10:40 np0005481065 ceph-mgr[74605]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:10:40 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'insights'
Oct 11 04:10:41 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'iostat'
Oct 11 04:10:41 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:41.439+0000 7f7f7d8d1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:10:41 np0005481065 ceph-mgr[74605]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:10:41 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'k8sevents'
Oct 11 04:10:43 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'localpool'
Oct 11 04:10:43 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'mds_autoscaler'
Oct 11 04:10:43 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'mirroring'
Oct 11 04:10:44 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'nfs'
Oct 11 04:10:44 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:44.812+0000 7f7f7d8d1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:10:44 np0005481065 ceph-mgr[74605]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 11 04:10:44 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'orchestrator'
Oct 11 04:10:45 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:45.434+0000 7f7f7d8d1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:45 np0005481065 ceph-mgr[74605]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:45 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'osd_perf_query'
Oct 11 04:10:45 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:45.677+0000 7f7f7d8d1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:10:45 np0005481065 ceph-mgr[74605]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 11 04:10:45 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'osd_support'
Oct 11 04:10:45 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:45.888+0000 7f7f7d8d1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:10:45 np0005481065 ceph-mgr[74605]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 11 04:10:45 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'pg_autoscaler'
Oct 11 04:10:46 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:46.177+0000 7f7f7d8d1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:10:46 np0005481065 ceph-mgr[74605]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 11 04:10:46 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'progress'
Oct 11 04:10:46 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:46.443+0000 7f7f7d8d1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:10:46 np0005481065 ceph-mgr[74605]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 11 04:10:46 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'prometheus'
Oct 11 04:10:47 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:47.471+0000 7f7f7d8d1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:10:47 np0005481065 ceph-mgr[74605]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 11 04:10:47 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'rbd_support'
Oct 11 04:10:47 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:47.759+0000 7f7f7d8d1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:10:47 np0005481065 ceph-mgr[74605]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 11 04:10:47 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'restful'
Oct 11 04:10:48 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'rgw'
Oct 11 04:10:49 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:49.199+0000 7f7f7d8d1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:10:49 np0005481065 ceph-mgr[74605]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 11 04:10:49 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'rook'
Oct 11 04:10:51 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:51.259+0000 7f7f7d8d1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'selftest'
Oct 11 04:10:51 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:51.495+0000 7f7f7d8d1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'snap_schedule'
Oct 11 04:10:51 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:51.737+0000 7f7f7d8d1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'stats'
Oct 11 04:10:51 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'status'
Oct 11 04:10:52 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:52.263+0000 7f7f7d8d1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:10:52 np0005481065 ceph-mgr[74605]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 11 04:10:52 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'telegraf'
Oct 11 04:10:52 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:52.486+0000 7f7f7d8d1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:10:52 np0005481065 ceph-mgr[74605]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 11 04:10:52 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'telemetry'
Oct 11 04:10:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:53.057+0000 7f7f7d8d1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:10:53 np0005481065 ceph-mgr[74605]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 11 04:10:53 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'test_orchestrator'
Oct 11 04:10:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:53.701+0000 7f7f7d8d1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:53 np0005481065 ceph-mgr[74605]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 11 04:10:53 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'volumes'
Oct 11 04:10:54 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:54.383+0000 7f7f7d8d1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr[py] Loading python module 'zabbix'
Oct 11 04:10:54 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T08:10:54.620+0000 7f7f7d8d1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hcsgrm restarted
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hcsgrm
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: ms_deliver_dispatch: unhandled message 0x5621f1f7d1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr handle_mgr_map Activating!
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.hcsgrm(active, starting, since 0.0160624s)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr handle_mgr_map I am now activating
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hcsgrm", "id": "compute-0.hcsgrm"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hcsgrm", "id": "compute-0.hcsgrm"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e1 all = 1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Manager daemon compute-0.hcsgrm is now available
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: balancer
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Starting
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:10:54
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] No pools available
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: Active manager daemon compute-0.hcsgrm restarted
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: Activating manager daemon compute-0.hcsgrm
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: Manager daemon compute-0.hcsgrm is now available
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: cephadm
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: crash
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: devicehealth
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: iostat
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: nfs
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Starting
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: orchestrator
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: pg_autoscaler
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: progress
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [progress INFO root] Loading...
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [progress INFO root] No stored events to load
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [progress INFO root] Loaded [] historic events
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [progress INFO root] Loaded OSDMap, ready.
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] recovery thread starting
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] starting setup
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: rbd_support
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: restful
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: status
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/mirror_snapshot_schedule"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [restful INFO root] server_addr: :: server_port: 8003
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [restful WARNING root] server not running: no certificate configured
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: telemetry
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] PerfHandler: starting
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TaskHandler: starting
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/trash_purge_schedule"} v 0) v1
Oct 11 04:10:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/trash_purge_schedule"}]: dispatch
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] setup complete
Oct 11 04:10:54 np0005481065 ceph-mgr[74605]: mgr load Constructed class from module: volumes
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:55 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.hcsgrm(active, since 1.04622s)
Oct 11 04:10:55 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 11 04:10:55 np0005481065 boring_bassi[75428]: {
Oct 11 04:10:55 np0005481065 boring_bassi[75428]:    "mgrmap_epoch": 7,
Oct 11 04:10:55 np0005481065 boring_bassi[75428]:    "initialized": true
Oct 11 04:10:55 np0005481065 boring_bassi[75428]: }
Oct 11 04:10:55 np0005481065 systemd[1]: libpod-9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b.scope: Deactivated successfully.
Oct 11 04:10:55 np0005481065 podman[75411]: 2025-10-11 08:10:55.703580876 +0000 UTC m=+19.053035248 container died 9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b (image=quay.io/ceph/ceph:v18, name=boring_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: Found migration_current of "None". Setting to last migration.
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/mirror_snapshot_schedule"}]: dispatch
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hcsgrm/trash_purge_schedule"}]: dispatch
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-351864cb69915edb1595f8ddbeb062ded1add9fbca2629d854fe38e8a2e844c0-merged.mount: Deactivated successfully.
Oct 11 04:10:55 np0005481065 podman[75411]: 2025-10-11 08:10:55.774401156 +0000 UTC m=+19.123855498 container remove 9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b (image=quay.io/ceph/ceph:v18, name=boring_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:10:55 np0005481065 systemd[1]: libpod-conmon-9db5bc3d640c775101e5d096103268fe3f0f553c30f7496515f8abad4f5d568b.scope: Deactivated successfully.
Oct 11 04:10:55 np0005481065 podman[75589]: 2025-10-11 08:10:55.885604459 +0000 UTC m=+0.075661220 container create 4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815 (image=quay.io/ceph/ceph:v18, name=intelligent_greider, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:10:55 np0005481065 systemd[1]: Started libpod-conmon-4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815.scope.
Oct 11 04:10:55 np0005481065 podman[75589]: 2025-10-11 08:10:55.851371485 +0000 UTC m=+0.041428316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e707b398151080b497d0bdb8eb387210c7993a1d50c5c70ec1c1bb8e45d3b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e707b398151080b497d0bdb8eb387210c7993a1d50c5c70ec1c1bb8e45d3b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e707b398151080b497d0bdb8eb387210c7993a1d50c5c70ec1c1bb8e45d3b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:55 np0005481065 podman[75589]: 2025-10-11 08:10:55.991002633 +0000 UTC m=+0.181059454 container init 4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815 (image=quay.io/ceph/ceph:v18, name=intelligent_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:55 np0005481065 podman[75589]: 2025-10-11 08:10:55.997888555 +0000 UTC m=+0.187945296 container start 4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815 (image=quay.io/ceph/ceph:v18, name=intelligent_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:56 np0005481065 podman[75589]: 2025-10-11 08:10:56.001772195 +0000 UTC m=+0.191829016 container attach 4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815 (image=quay.io/ceph/ceph:v18, name=intelligent_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: [cephadm INFO cherrypy.error] [11/Oct/2025:08:10:56] ENGINE Bus STARTING
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : [11/Oct/2025:08:10:56] ENGINE Bus STARTING
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: [cephadm INFO cherrypy.error] [11/Oct/2025:08:10:56] ENGINE Serving on https://192.168.122.100:7150
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : [11/Oct/2025:08:10:56] ENGINE Serving on https://192.168.122.100:7150
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: [cephadm INFO cherrypy.error] [11/Oct/2025:08:10:56] ENGINE Client ('192.168.122.100', 34114) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : [11/Oct/2025:08:10:56] ENGINE Client ('192.168.122.100', 34114) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: [cephadm INFO cherrypy.error] [11/Oct/2025:08:10:56] ENGINE Serving on http://192.168.122.100:8765
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : [11/Oct/2025:08:10:56] ENGINE Serving on http://192.168.122.100:8765
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: [cephadm INFO cherrypy.error] [11/Oct/2025:08:10:56] ENGINE Bus STARTED
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : [11/Oct/2025:08:10:56] ENGINE Bus STARTED
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:10:56 np0005481065 systemd[1]: libpod-4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815.scope: Deactivated successfully.
Oct 11 04:10:56 np0005481065 podman[75655]: 2025-10-11 08:10:56.60818765 +0000 UTC m=+0.029144918 container died 4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815 (image=quay.io/ceph/ceph:v18, name=intelligent_greider, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:10:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b6e707b398151080b497d0bdb8eb387210c7993a1d50c5c70ec1c1bb8e45d3b3-merged.mount: Deactivated successfully.
Oct 11 04:10:56 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:10:56 np0005481065 podman[75655]: 2025-10-11 08:10:56.66342154 +0000 UTC m=+0.084378728 container remove 4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815 (image=quay.io/ceph/ceph:v18, name=intelligent_greider, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:56 np0005481065 systemd[1]: libpod-conmon-4af14d50473acbd90e836da2e68532bab3d66458a6436fee7f42ba7171657815.scope: Deactivated successfully.
Oct 11 04:10:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:56 np0005481065 podman[75670]: 2025-10-11 08:10:56.761280042 +0000 UTC m=+0.064283520 container create 2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc (image=quay.io/ceph/ceph:v18, name=laughing_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:56 np0005481065 systemd[1]: Started libpod-conmon-2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc.scope.
Oct 11 04:10:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3326225f0458eb50b611b1499188a5b921912f39014ce1f4f04b8634fa55955d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3326225f0458eb50b611b1499188a5b921912f39014ce1f4f04b8634fa55955d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3326225f0458eb50b611b1499188a5b921912f39014ce1f4f04b8634fa55955d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:56 np0005481065 podman[75670]: 2025-10-11 08:10:56.731383862 +0000 UTC m=+0.034387410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:56 np0005481065 podman[75670]: 2025-10-11 08:10:56.839999875 +0000 UTC m=+0.143003403 container init 2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc (image=quay.io/ceph/ceph:v18, name=laughing_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:56 np0005481065 podman[75670]: 2025-10-11 08:10:56.848746514 +0000 UTC m=+0.151749962 container start 2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc (image=quay.io/ceph/ceph:v18, name=laughing_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:56 np0005481065 podman[75670]: 2025-10-11 08:10:56.852803309 +0000 UTC m=+0.155806837 container attach 2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc (image=quay.io/ceph/ceph:v18, name=laughing_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Set ssh ssh_user
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Set ssh ssh_config
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 11 04:10:57 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 11 04:10:57 np0005481065 laughing_mccarthy[75686]: ssh user set to ceph-admin. sudo will be used
Oct 11 04:10:57 np0005481065 systemd[1]: libpod-2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc.scope: Deactivated successfully.
Oct 11 04:10:57 np0005481065 conmon[75686]: conmon 2fc2ea5deca38caedfc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc.scope/container/memory.events
Oct 11 04:10:57 np0005481065 podman[75670]: 2025-10-11 08:10:57.412227509 +0000 UTC m=+0.715230987 container died 2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc (image=quay.io/ceph/ceph:v18, name=laughing_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:10:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3326225f0458eb50b611b1499188a5b921912f39014ce1f4f04b8634fa55955d-merged.mount: Deactivated successfully.
Oct 11 04:10:57 np0005481065 podman[75670]: 2025-10-11 08:10:57.470337568 +0000 UTC m=+0.773341026 container remove 2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc (image=quay.io/ceph/ceph:v18, name=laughing_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:10:57 np0005481065 systemd[1]: libpod-conmon-2fc2ea5deca38caedfc3eedd08ddb863ea83ccee760496320b3340092cabe8bc.scope: Deactivated successfully.
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.hcsgrm(active, since 2s)
Oct 11 04:10:57 np0005481065 podman[75724]: 2025-10-11 08:10:57.573050139 +0000 UTC m=+0.070768189 container create 177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70 (image=quay.io/ceph/ceph:v18, name=modest_noyce, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922223 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:10:57 np0005481065 systemd[1]: Started libpod-conmon-177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70.scope.
Oct 11 04:10:57 np0005481065 podman[75724]: 2025-10-11 08:10:57.544511791 +0000 UTC m=+0.042229891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43498f55d05698f37f3d56da3cb05eea36e104d1bbfcbd41025f3d2d23bb13a7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43498f55d05698f37f3d56da3cb05eea36e104d1bbfcbd41025f3d2d23bb13a7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43498f55d05698f37f3d56da3cb05eea36e104d1bbfcbd41025f3d2d23bb13a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43498f55d05698f37f3d56da3cb05eea36e104d1bbfcbd41025f3d2d23bb13a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43498f55d05698f37f3d56da3cb05eea36e104d1bbfcbd41025f3d2d23bb13a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:57 np0005481065 podman[75724]: 2025-10-11 08:10:57.680322092 +0000 UTC m=+0.178040182 container init 177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70 (image=quay.io/ceph/ceph:v18, name=modest_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:10:57 np0005481065 podman[75724]: 2025-10-11 08:10:57.694627492 +0000 UTC m=+0.192345542 container start 177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70 (image=quay.io/ceph/ceph:v18, name=modest_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:57 np0005481065 podman[75724]: 2025-10-11 08:10:57.698657036 +0000 UTC m=+0.196375086 container attach 177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70 (image=quay.io/ceph/ceph:v18, name=modest_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: [11/Oct/2025:08:10:56] ENGINE Bus STARTING
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: [11/Oct/2025:08:10:56] ENGINE Serving on https://192.168.122.100:7150
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: [11/Oct/2025:08:10:56] ENGINE Client ('192.168.122.100', 34114) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: [11/Oct/2025:08:10:56] ENGINE Serving on http://192.168.122.100:8765
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: [11/Oct/2025:08:10:56] ENGINE Bus STARTED
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:58 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:10:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 11 04:10:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:58 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 11 04:10:58 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 11 04:10:58 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Set ssh private key
Oct 11 04:10:58 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 11 04:10:58 np0005481065 systemd[1]: libpod-177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70.scope: Deactivated successfully.
Oct 11 04:10:58 np0005481065 podman[75724]: 2025-10-11 08:10:58.273560342 +0000 UTC m=+0.771278352 container died 177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70 (image=quay.io/ceph/ceph:v18, name=modest_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:10:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-43498f55d05698f37f3d56da3cb05eea36e104d1bbfcbd41025f3d2d23bb13a7-merged.mount: Deactivated successfully.
Oct 11 04:10:58 np0005481065 podman[75724]: 2025-10-11 08:10:58.314785911 +0000 UTC m=+0.812503921 container remove 177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70 (image=quay.io/ceph/ceph:v18, name=modest_noyce, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:10:58 np0005481065 systemd[1]: libpod-conmon-177c81b63bfe7c95099c6300d83e6874ef9988b804a88a5887529e70bb99fa70.scope: Deactivated successfully.
Oct 11 04:10:58 np0005481065 podman[75779]: 2025-10-11 08:10:58.393015659 +0000 UTC m=+0.054689175 container create 9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6 (image=quay.io/ceph/ceph:v18, name=great_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:58 np0005481065 systemd[1]: Started libpod-conmon-9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6.scope.
Oct 11 04:10:58 np0005481065 podman[75779]: 2025-10-11 08:10:58.365168032 +0000 UTC m=+0.026841558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e2b6fbeb4dd433b64ea6e8235f0150fe2015b4efe4169ef7c0c037ab543acc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e2b6fbeb4dd433b64ea6e8235f0150fe2015b4efe4169ef7c0c037ab543acc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e2b6fbeb4dd433b64ea6e8235f0150fe2015b4efe4169ef7c0c037ab543acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e2b6fbeb4dd433b64ea6e8235f0150fe2015b4efe4169ef7c0c037ab543acc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e2b6fbeb4dd433b64ea6e8235f0150fe2015b4efe4169ef7c0c037ab543acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:58 np0005481065 podman[75779]: 2025-10-11 08:10:58.487324472 +0000 UTC m=+0.148997978 container init 9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6 (image=quay.io/ceph/ceph:v18, name=great_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:10:58 np0005481065 podman[75779]: 2025-10-11 08:10:58.49797342 +0000 UTC m=+0.159646926 container start 9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6 (image=quay.io/ceph/ceph:v18, name=great_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:10:58 np0005481065 podman[75779]: 2025-10-11 08:10:58.503078367 +0000 UTC m=+0.164751933 container attach 9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6 (image=quay.io/ceph/ceph:v18, name=great_tharp, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:10:58 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:10:58 np0005481065 ceph-mon[74313]: Set ssh ssh_user
Oct 11 04:10:58 np0005481065 ceph-mon[74313]: Set ssh ssh_config
Oct 11 04:10:58 np0005481065 ceph-mon[74313]: ssh user set to ceph-admin. sudo will be used
Oct 11 04:10:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:59 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:10:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 11 04:10:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:59 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 11 04:10:59 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 11 04:10:59 np0005481065 systemd[1]: libpod-9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6.scope: Deactivated successfully.
Oct 11 04:10:59 np0005481065 podman[75779]: 2025-10-11 08:10:59.064194418 +0000 UTC m=+0.725867934 container died 9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6 (image=quay.io/ceph/ceph:v18, name=great_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:10:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-59e2b6fbeb4dd433b64ea6e8235f0150fe2015b4efe4169ef7c0c037ab543acc-merged.mount: Deactivated successfully.
Oct 11 04:10:59 np0005481065 podman[75779]: 2025-10-11 08:10:59.123281817 +0000 UTC m=+0.784955333 container remove 9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6 (image=quay.io/ceph/ceph:v18, name=great_tharp, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:10:59 np0005481065 systemd[1]: libpod-conmon-9792ea2a7fd3278a78c31e5d4f13ec6daa4e8addb88d636e7d97d21e6f00bbf6.scope: Deactivated successfully.
Oct 11 04:10:59 np0005481065 podman[75831]: 2025-10-11 08:10:59.218764506 +0000 UTC m=+0.065716924 container create c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d (image=quay.io/ceph/ceph:v18, name=exciting_carver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:59 np0005481065 systemd[1]: Started libpod-conmon-c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d.scope.
Oct 11 04:10:59 np0005481065 podman[75831]: 2025-10-11 08:10:59.191202398 +0000 UTC m=+0.038154856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:10:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:10:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a3ed78ac25361deb42bec04ee5bc218374a962551f36725cef8de24feafbf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a3ed78ac25361deb42bec04ee5bc218374a962551f36725cef8de24feafbf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a3ed78ac25361deb42bec04ee5bc218374a962551f36725cef8de24feafbf3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:10:59 np0005481065 podman[75831]: 2025-10-11 08:10:59.31277612 +0000 UTC m=+0.159728588 container init c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d (image=quay.io/ceph/ceph:v18, name=exciting_carver, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:10:59 np0005481065 podman[75831]: 2025-10-11 08:10:59.322249641 +0000 UTC m=+0.169202059 container start c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d (image=quay.io/ceph/ceph:v18, name=exciting_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:59 np0005481065 podman[75831]: 2025-10-11 08:10:59.325926695 +0000 UTC m=+0.172879093 container attach c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d (image=quay.io/ceph/ceph:v18, name=exciting_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:10:59 np0005481065 ceph-mon[74313]: Set ssh ssh_identity_key
Oct 11 04:10:59 np0005481065 ceph-mon[74313]: Set ssh private key
Oct 11 04:10:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:10:59 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:10:59 np0005481065 exciting_carver[75847]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChKrbnvuslhwuegC9mGR0FSGng2OW9L7DO6IQY39o87SJUPCCNwbL++VeaycaWrL+Bgk8f53bq2tYt7hnM7HjsAFEnTa6MZ6eXqrWvd7LGO8KxxBPAgSnNF457n7jUAxPZgxl7sNTlGgkktTzKOCXzvP2oj8fpc/JIfIpMsMGv0XO0fcVOlKunSD9ZZsg4833qgH/LBvutVic7R4VzBLPom8iDoyfroAAwpC8iaaJpyaiezVL4QM+OUoyt1eXjpSs8FU742mLIWZckr2+IOFhDOZMEOXhRw7ec1Ib1iXdpwOUUP3Mj7OHGYaYgx0d24NF3ZJJRgBy9EdckS8ckrjVhTQdyytxNGAY271b46BlUkn8WBzQrT1OxpvGTXYb7bLJK9y2utvASoS/eud77+NErwKACtKhMcLPyEalj03V8us0V+eYck2FPjPCkrmzCpapWoORksHPo1ZOjqrVSlTOMYzzlFWxJuD0D5mIWAnkp5+UhR+C8vOekZ7tCrPS1oIM= zuul@controller
Oct 11 04:10:59 np0005481065 systemd[1]: libpod-c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d.scope: Deactivated successfully.
Oct 11 04:10:59 np0005481065 podman[75873]: 2025-10-11 08:10:59.938696697 +0000 UTC m=+0.030695876 container died c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d (image=quay.io/ceph/ceph:v18, name=exciting_carver, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:10:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-00a3ed78ac25361deb42bec04ee5bc218374a962551f36725cef8de24feafbf3-merged.mount: Deactivated successfully.
Oct 11 04:10:59 np0005481065 podman[75873]: 2025-10-11 08:10:59.991407509 +0000 UTC m=+0.083406628 container remove c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d (image=quay.io/ceph/ceph:v18, name=exciting_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:10:59 np0005481065 systemd[1]: libpod-conmon-c03db160556d4455c5f29cc992a1cc7728f42c50455bda1d73e4fbd02c41605d.scope: Deactivated successfully.
Oct 11 04:11:00 np0005481065 podman[75888]: 2025-10-11 08:11:00.098777923 +0000 UTC m=+0.062766933 container create e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c (image=quay.io/ceph/ceph:v18, name=zen_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:11:00 np0005481065 systemd[1]: Started libpod-conmon-e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c.scope.
Oct 11 04:11:00 np0005481065 podman[75888]: 2025-10-11 08:11:00.072806124 +0000 UTC m=+0.036795184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f020a15fde2732bd8bb6a7ffe229d8148c8c54ac0bf1da8dca5a8806739005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f020a15fde2732bd8bb6a7ffe229d8148c8c54ac0bf1da8dca5a8806739005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f020a15fde2732bd8bb6a7ffe229d8148c8c54ac0bf1da8dca5a8806739005/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:00 np0005481065 podman[75888]: 2025-10-11 08:11:00.199647218 +0000 UTC m=+0.163636278 container init e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c (image=quay.io/ceph/ceph:v18, name=zen_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:11:00 np0005481065 podman[75888]: 2025-10-11 08:11:00.211308637 +0000 UTC m=+0.175297647 container start e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c (image=quay.io/ceph/ceph:v18, name=zen_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:00 np0005481065 podman[75888]: 2025-10-11 08:11:00.215346931 +0000 UTC m=+0.179336001 container attach e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c (image=quay.io/ceph/ceph:v18, name=zen_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:11:00 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:00 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:00 np0005481065 ceph-mon[74313]: Set ssh ssh_identity_pub
Oct 11 04:11:00 np0005481065 systemd[1]: Created slice User Slice of UID 42477.
Oct 11 04:11:00 np0005481065 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 11 04:11:01 np0005481065 systemd-logind[819]: New session 22 of user ceph-admin.
Oct 11 04:11:01 np0005481065 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 11 04:11:01 np0005481065 systemd[1]: Starting User Manager for UID 42477...
Oct 11 04:11:01 np0005481065 systemd-logind[819]: New session 24 of user ceph-admin.
Oct 11 04:11:01 np0005481065 systemd[75935]: Queued start job for default target Main User Target.
Oct 11 04:11:01 np0005481065 systemd[75935]: Created slice User Application Slice.
Oct 11 04:11:01 np0005481065 systemd[75935]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 11 04:11:01 np0005481065 systemd[75935]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:11:01 np0005481065 systemd[75935]: Reached target Paths.
Oct 11 04:11:01 np0005481065 systemd[75935]: Reached target Timers.
Oct 11 04:11:01 np0005481065 systemd[75935]: Starting D-Bus User Message Bus Socket...
Oct 11 04:11:01 np0005481065 systemd[75935]: Starting Create User's Volatile Files and Directories...
Oct 11 04:11:01 np0005481065 systemd[75935]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:11:01 np0005481065 systemd[75935]: Reached target Sockets.
Oct 11 04:11:01 np0005481065 systemd[75935]: Finished Create User's Volatile Files and Directories.
Oct 11 04:11:01 np0005481065 systemd[75935]: Reached target Basic System.
Oct 11 04:11:01 np0005481065 systemd[75935]: Reached target Main User Target.
Oct 11 04:11:01 np0005481065 systemd[75935]: Startup finished in 153ms.
Oct 11 04:11:01 np0005481065 systemd[1]: Started User Manager for UID 42477.
Oct 11 04:11:01 np0005481065 systemd[1]: Started Session 22 of User ceph-admin.
Oct 11 04:11:01 np0005481065 systemd[1]: Started Session 24 of User ceph-admin.
Oct 11 04:11:01 np0005481065 systemd-logind[819]: New session 25 of user ceph-admin.
Oct 11 04:11:01 np0005481065 systemd[1]: Started Session 25 of User ceph-admin.
Oct 11 04:11:02 np0005481065 systemd-logind[819]: New session 26 of user ceph-admin.
Oct 11 04:11:02 np0005481065 systemd[1]: Started Session 26 of User ceph-admin.
Oct 11 04:11:02 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 11 04:11:02 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 11 04:11:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053049 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:02 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:02 np0005481065 systemd-logind[819]: New session 27 of user ceph-admin.
Oct 11 04:11:02 np0005481065 systemd[1]: Started Session 27 of User ceph-admin.
Oct 11 04:11:02 np0005481065 ceph-mon[74313]: Deploying cephadm binary to compute-0
Oct 11 04:11:03 np0005481065 systemd-logind[819]: New session 28 of user ceph-admin.
Oct 11 04:11:03 np0005481065 systemd[1]: Started Session 28 of User ceph-admin.
Oct 11 04:11:03 np0005481065 systemd-logind[819]: New session 29 of user ceph-admin.
Oct 11 04:11:03 np0005481065 systemd[1]: Started Session 29 of User ceph-admin.
Oct 11 04:11:04 np0005481065 systemd-logind[819]: New session 30 of user ceph-admin.
Oct 11 04:11:04 np0005481065 systemd[1]: Started Session 30 of User ceph-admin.
Oct 11 04:11:04 np0005481065 systemd-logind[819]: New session 31 of user ceph-admin.
Oct 11 04:11:04 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:04 np0005481065 systemd[1]: Started Session 31 of User ceph-admin.
Oct 11 04:11:05 np0005481065 systemd-logind[819]: New session 32 of user ceph-admin.
Oct 11 04:11:05 np0005481065 systemd[1]: Started Session 32 of User ceph-admin.
Oct 11 04:11:05 np0005481065 systemd-logind[819]: New session 33 of user ceph-admin.
Oct 11 04:11:05 np0005481065 systemd[1]: Started Session 33 of User ceph-admin.
Oct 11 04:11:06 np0005481065 systemd-logind[819]: New session 34 of user ceph-admin.
Oct 11 04:11:06 np0005481065 systemd[1]: Started Session 34 of User ceph-admin.
Oct 11 04:11:06 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:11:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:06 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Added host compute-0
Oct 11 04:11:06 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 11 04:11:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:11:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:11:06 np0005481065 zen_mahavira[75905]: Added host 'compute-0' with addr '192.168.122.100'
Oct 11 04:11:06 np0005481065 systemd[1]: libpod-e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c.scope: Deactivated successfully.
Oct 11 04:11:06 np0005481065 podman[75888]: 2025-10-11 08:11:06.786983912 +0000 UTC m=+6.750972902 container died e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c (image=quay.io/ceph/ceph:v18, name=zen_mahavira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:11:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e4f020a15fde2732bd8bb6a7ffe229d8148c8c54ac0bf1da8dca5a8806739005-merged.mount: Deactivated successfully.
Oct 11 04:11:06 np0005481065 podman[75888]: 2025-10-11 08:11:06.840968254 +0000 UTC m=+6.804957244 container remove e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c (image=quay.io/ceph/ceph:v18, name=zen_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:06 np0005481065 systemd[1]: libpod-conmon-e10d236a8f6a993b74d9bdf8f083d5566a17f0fed3b14af9e5c1b3cd3b8f505c.scope: Deactivated successfully.
Oct 11 04:11:06 np0005481065 podman[76577]: 2025-10-11 08:11:06.924028651 +0000 UTC m=+0.053536829 container create 96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961 (image=quay.io/ceph/ceph:v18, name=beautiful_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:06 np0005481065 systemd[1]: Started libpod-conmon-96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961.scope.
Oct 11 04:11:07 np0005481065 podman[76577]: 2025-10-11 08:11:06.90485667 +0000 UTC m=+0.034364858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f1ceeb0303e69086b412259e2d5b64f9cddeaacb15b7d8d1aa25e2cd922ec6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f1ceeb0303e69086b412259e2d5b64f9cddeaacb15b7d8d1aa25e2cd922ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f1ceeb0303e69086b412259e2d5b64f9cddeaacb15b7d8d1aa25e2cd922ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:07 np0005481065 podman[76577]: 2025-10-11 08:11:07.034950555 +0000 UTC m=+0.164458823 container init 96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961 (image=quay.io/ceph/ceph:v18, name=beautiful_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:11:07 np0005481065 podman[76577]: 2025-10-11 08:11:07.046187991 +0000 UTC m=+0.175696199 container start 96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961 (image=quay.io/ceph/ceph:v18, name=beautiful_curie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:11:07 np0005481065 podman[76577]: 2025-10-11 08:11:07.050220165 +0000 UTC m=+0.179728373 container attach 96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961 (image=quay.io/ceph/ceph:v18, name=beautiful_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.443685895 +0000 UTC m=+0.057448129 container create d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409 (image=quay.io/ceph/ceph:v18, name=sharp_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:11:07 np0005481065 systemd[1]: Started libpod-conmon-d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409.scope.
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.412914498 +0000 UTC m=+0.026676792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.536891974 +0000 UTC m=+0.150654268 container init d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409 (image=quay.io/ceph/ceph:v18, name=sharp_burnell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.54716103 +0000 UTC m=+0.160923224 container start d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409 (image=quay.io/ceph/ceph:v18, name=sharp_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.551138963 +0000 UTC m=+0.164901197 container attach d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409 (image=quay.io/ceph/ceph:v18, name=sharp_burnell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:07 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:07 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 11 04:11:07 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:07 np0005481065 beautiful_curie[76627]: Scheduled mon update...
Oct 11 04:11:07 np0005481065 systemd[1]: libpod-96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961.scope: Deactivated successfully.
Oct 11 04:11:07 np0005481065 podman[76577]: 2025-10-11 08:11:07.632759955 +0000 UTC m=+0.762268143 container died 96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961 (image=quay.io/ceph/ceph:v18, name=beautiful_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:11:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-29f1ceeb0303e69086b412259e2d5b64f9cddeaacb15b7d8d1aa25e2cd922ec6-merged.mount: Deactivated successfully.
Oct 11 04:11:07 np0005481065 podman[76577]: 2025-10-11 08:11:07.686174439 +0000 UTC m=+0.815682607 container remove 96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961 (image=quay.io/ceph/ceph:v18, name=beautiful_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:07 np0005481065 systemd[1]: libpod-conmon-96818f6720ba1e4801518aa01abf9231786c496d1cebcea0ab5dce850e0e3961.scope: Deactivated successfully.
Oct 11 04:11:07 np0005481065 podman[76755]: 2025-10-11 08:11:07.765059907 +0000 UTC m=+0.051279099 container create 4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4 (image=quay.io/ceph/ceph:v18, name=nostalgic_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: Added host compute-0
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:07 np0005481065 systemd[1]: Started libpod-conmon-4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4.scope.
Oct 11 04:11:07 np0005481065 podman[76755]: 2025-10-11 08:11:07.740339986 +0000 UTC m=+0.026559258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6add6e85e275bfbf0e46bce129ab00d04b4b5fa070b77d337f2f9f92648c56a9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6add6e85e275bfbf0e46bce129ab00d04b4b5fa070b77d337f2f9f92648c56a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6add6e85e275bfbf0e46bce129ab00d04b4b5fa070b77d337f2f9f92648c56a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:07 np0005481065 podman[76755]: 2025-10-11 08:11:07.864254201 +0000 UTC m=+0.150473413 container init 4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4 (image=quay.io/ceph/ceph:v18, name=nostalgic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:11:07 np0005481065 podman[76755]: 2025-10-11 08:11:07.874489916 +0000 UTC m=+0.160709098 container start 4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4 (image=quay.io/ceph/ceph:v18, name=nostalgic_sammet, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:11:07 np0005481065 sharp_burnell[76734]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 11 04:11:07 np0005481065 podman[76755]: 2025-10-11 08:11:07.878897461 +0000 UTC m=+0.165116643 container attach 4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4 (image=quay.io/ceph/ceph:v18, name=nostalgic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:07 np0005481065 systemd[1]: libpod-d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409.scope: Deactivated successfully.
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.887530517 +0000 UTC m=+0.501292711 container died d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409 (image=quay.io/ceph/ceph:v18, name=sharp_burnell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f0ccaa4f5f6f48237963169027e2081ddaa91c64552fcff23e12e7fa31ef1ba0-merged.mount: Deactivated successfully.
Oct 11 04:11:07 np0005481065 podman[76700]: 2025-10-11 08:11:07.947624627 +0000 UTC m=+0.561386861 container remove d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409 (image=quay.io/ceph/ceph:v18, name=sharp_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:11:07 np0005481065 systemd[1]: libpod-conmon-d44d0130414090a3dd1227d5782386c153010b4018ce1a6552affa68f299f409.scope: Deactivated successfully.
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 11 04:11:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:08 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:08 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 11 04:11:08 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:08 np0005481065 nostalgic_sammet[76772]: Scheduled mgr update...
Oct 11 04:11:08 np0005481065 systemd[1]: libpod-4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4.scope: Deactivated successfully.
Oct 11 04:11:08 np0005481065 podman[76755]: 2025-10-11 08:11:08.473246016 +0000 UTC m=+0.759465218 container died 4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4 (image=quay.io/ceph/ceph:v18, name=nostalgic_sammet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:11:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6add6e85e275bfbf0e46bce129ab00d04b4b5fa070b77d337f2f9f92648c56a9-merged.mount: Deactivated successfully.
Oct 11 04:11:08 np0005481065 podman[76755]: 2025-10-11 08:11:08.522040658 +0000 UTC m=+0.808259840 container remove 4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4 (image=quay.io/ceph/ceph:v18, name=nostalgic_sammet, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:11:08 np0005481065 systemd[1]: libpod-conmon-4a9844ca9a1c0850c956400ab6a2466143dac663a5a9e41ca3a328b55192b7d4.scope: Deactivated successfully.
Oct 11 04:11:08 np0005481065 podman[76928]: 2025-10-11 08:11:08.614615868 +0000 UTC m=+0.061078352 container create 9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a (image=quay.io/ceph/ceph:v18, name=festive_agnesi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:11:08 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:08 np0005481065 systemd[1]: Started libpod-conmon-9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a.scope.
Oct 11 04:11:08 np0005481065 podman[76928]: 2025-10-11 08:11:08.584975595 +0000 UTC m=+0.031438119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1622d9c6292afad584e13f42663bc70f3f655bcc427bd0f33a245618eb7b8a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1622d9c6292afad584e13f42663bc70f3f655bcc427bd0f33a245618eb7b8a39/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1622d9c6292afad584e13f42663bc70f3f655bcc427bd0f33a245618eb7b8a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:08 np0005481065 podman[76928]: 2025-10-11 08:11:08.719936069 +0000 UTC m=+0.166398603 container init 9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a (image=quay.io/ceph/ceph:v18, name=festive_agnesi, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:08 np0005481065 podman[76928]: 2025-10-11 08:11:08.734381754 +0000 UTC m=+0.180844208 container start 9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a (image=quay.io/ceph/ceph:v18, name=festive_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:11:08 np0005481065 podman[76928]: 2025-10-11 08:11:08.737881752 +0000 UTC m=+0.184344236 container attach 9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a (image=quay.io/ceph/ceph:v18, name=festive_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: Saving service mon spec with placement count:5
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:09 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:09 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service crash spec with placement *
Oct 11 04:11:09 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 11 04:11:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 04:11:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:09 np0005481065 festive_agnesi[76958]: Scheduled crash update...
Oct 11 04:11:09 np0005481065 systemd[1]: libpod-9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a.scope: Deactivated successfully.
Oct 11 04:11:09 np0005481065 podman[76928]: 2025-10-11 08:11:09.351654704 +0000 UTC m=+0.798117188 container died 9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a (image=quay.io/ceph/ceph:v18, name=festive_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:11:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1622d9c6292afad584e13f42663bc70f3f655bcc427bd0f33a245618eb7b8a39-merged.mount: Deactivated successfully.
Oct 11 04:11:09 np0005481065 podman[76928]: 2025-10-11 08:11:09.409408462 +0000 UTC m=+0.855870906 container remove 9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a (image=quay.io/ceph/ceph:v18, name=festive_agnesi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:09 np0005481065 systemd[1]: libpod-conmon-9f3c4a10ca13081aa7206313d9d241d53a3197647580a9e77adfcbe8ca52fb3a.scope: Deactivated successfully.
Oct 11 04:11:09 np0005481065 podman[77136]: 2025-10-11 08:11:09.480060617 +0000 UTC m=+0.046612296 container create 3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740 (image=quay.io/ceph/ceph:v18, name=quizzical_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:09 np0005481065 systemd[1]: Started libpod-conmon-3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740.scope.
Oct 11 04:11:09 np0005481065 podman[77136]: 2025-10-11 08:11:09.46066449 +0000 UTC m=+0.027216149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad30d6926be1c30b65a80b832131420351a694994bbb2f75cf3b865b043e70ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad30d6926be1c30b65a80b832131420351a694994bbb2f75cf3b865b043e70ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad30d6926be1c30b65a80b832131420351a694994bbb2f75cf3b865b043e70ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:09 np0005481065 podman[77136]: 2025-10-11 08:11:09.583438309 +0000 UTC m=+0.149989988 container init 3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740 (image=quay.io/ceph/ceph:v18, name=quizzical_pare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:11:09 np0005481065 podman[77136]: 2025-10-11 08:11:09.593790748 +0000 UTC m=+0.160342427 container start 3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740 (image=quay.io/ceph/ceph:v18, name=quizzical_pare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:09 np0005481065 podman[77136]: 2025-10-11 08:11:09.59776474 +0000 UTC m=+0.164316389 container attach 3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740 (image=quay.io/ceph/ceph:v18, name=quizzical_pare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:09 np0005481065 podman[77188]: 2025-10-11 08:11:09.727095951 +0000 UTC m=+0.084890864 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:09 np0005481065 ceph-mon[74313]: Saving service mgr spec with placement count:2
Oct 11 04:11:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:10 np0005481065 podman[77188]: 2025-10-11 08:11:10.023263477 +0000 UTC m=+0.381058380 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/203156899' entity='client.admin' 
Oct 11 04:11:10 np0005481065 systemd[1]: libpod-3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740.scope: Deactivated successfully.
Oct 11 04:11:10 np0005481065 podman[77136]: 2025-10-11 08:11:10.149231605 +0000 UTC m=+0.715783244 container died 3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740 (image=quay.io/ceph/ceph:v18, name=quizzical_pare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:11:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ad30d6926be1c30b65a80b832131420351a694994bbb2f75cf3b865b043e70ae-merged.mount: Deactivated successfully.
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:10 np0005481065 podman[77136]: 2025-10-11 08:11:10.209366856 +0000 UTC m=+0.775918515 container remove 3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740 (image=quay.io/ceph/ceph:v18, name=quizzical_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:10 np0005481065 systemd[1]: libpod-conmon-3af1b18ecd08eb19252e66f1ed2d03941dac5febdf7fc7f8756eade104c4f740.scope: Deactivated successfully.
Oct 11 04:11:10 np0005481065 podman[77276]: 2025-10-11 08:11:10.300326905 +0000 UTC m=+0.059104260 container create c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312 (image=quay.io/ceph/ceph:v18, name=wizardly_davinci, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:10 np0005481065 systemd[1]: Started libpod-conmon-c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312.scope.
Oct 11 04:11:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4a5f106cd7073db2f302c6f9fd40667589a34a331c4711d13766a7b3ccbb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4a5f106cd7073db2f302c6f9fd40667589a34a331c4711d13766a7b3ccbb9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4a5f106cd7073db2f302c6f9fd40667589a34a331c4711d13766a7b3ccbb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:10 np0005481065 podman[77276]: 2025-10-11 08:11:10.360731445 +0000 UTC m=+0.119508830 container init c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312 (image=quay.io/ceph/ceph:v18, name=wizardly_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:11:10 np0005481065 podman[77276]: 2025-10-11 08:11:10.271940922 +0000 UTC m=+0.030718387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:10 np0005481065 podman[77276]: 2025-10-11 08:11:10.36871453 +0000 UTC m=+0.127491905 container start c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312 (image=quay.io/ceph/ceph:v18, name=wizardly_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:10 np0005481065 podman[77276]: 2025-10-11 08:11:10.37196477 +0000 UTC m=+0.130742145 container attach c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312 (image=quay.io/ceph/ceph:v18, name=wizardly_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:10 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:10 np0005481065 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77423 (sysctl)
Oct 11 04:11:10 np0005481065 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: Saving service crash spec with placement *
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/203156899' entity='client.admin' 
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:10 np0005481065 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 11 04:11:10 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 11 04:11:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:10 np0005481065 systemd[1]: libpod-c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312.scope: Deactivated successfully.
Oct 11 04:11:10 np0005481065 podman[77276]: 2025-10-11 08:11:10.967294114 +0000 UTC m=+0.726071479 container died c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312 (image=quay.io/ceph/ceph:v18, name=wizardly_davinci, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ccf4a5f106cd7073db2f302c6f9fd40667589a34a331c4711d13766a7b3ccbb9-merged.mount: Deactivated successfully.
Oct 11 04:11:11 np0005481065 podman[77276]: 2025-10-11 08:11:11.024114133 +0000 UTC m=+0.782891508 container remove c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312 (image=quay.io/ceph/ceph:v18, name=wizardly_davinci, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:11:11 np0005481065 systemd[1]: libpod-conmon-c41d2312314d8b36e8630341157121a4b2bc93db18110c7b72b5e27df1eac312.scope: Deactivated successfully.
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.111362589 +0000 UTC m=+0.064788075 container create 6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33 (image=quay.io/ceph/ceph:v18, name=interesting_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:11:11 np0005481065 systemd[1]: Started libpod-conmon-6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33.scope.
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.078366923 +0000 UTC m=+0.031792469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cf63a01104aaf13e3a83f8e4e8f1c50c0bf9d4f2c0cbff3d73cf530ed973b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cf63a01104aaf13e3a83f8e4e8f1c50c0bf9d4f2c0cbff3d73cf530ed973b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cf63a01104aaf13e3a83f8e4e8f1c50c0bf9d4f2c0cbff3d73cf530ed973b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.216058051 +0000 UTC m=+0.169483587 container init 6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33 (image=quay.io/ceph/ceph:v18, name=interesting_bhabha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.226785072 +0000 UTC m=+0.180210558 container start 6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33 (image=quay.io/ceph/ceph:v18, name=interesting_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.230367232 +0000 UTC m=+0.183792708 container attach 6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33 (image=quay.io/ceph/ceph:v18, name=interesting_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:11 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:11 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Added label _admin to host compute-0
Oct 11 04:11:11 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 11 04:11:11 np0005481065 interesting_bhabha[77475]: Added label _admin to host compute-0
Oct 11 04:11:11 np0005481065 systemd[1]: libpod-6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33.scope: Deactivated successfully.
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.832596639 +0000 UTC m=+0.786022095 container died 6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33 (image=quay.io/ceph/ceph:v18, name=interesting_bhabha, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:11:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-94cf63a01104aaf13e3a83f8e4e8f1c50c0bf9d4f2c0cbff3d73cf530ed973b0-merged.mount: Deactivated successfully.
Oct 11 04:11:11 np0005481065 podman[77447]: 2025-10-11 08:11:11.880469673 +0000 UTC m=+0.833895139 container remove 6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33 (image=quay.io/ceph/ceph:v18, name=interesting_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:11 np0005481065 systemd[1]: libpod-conmon-6231af8720858a4dffd7a66a2a92af6df42ada6867906d6c14fefca33772bb33.scope: Deactivated successfully.
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:11 np0005481065 podman[77664]: 2025-10-11 08:11:11.987192868 +0000 UTC m=+0.071456201 container create 8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e (image=quay.io/ceph/ceph:v18, name=vigorous_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:12 np0005481065 systemd[1]: Started libpod-conmon-8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e.scope.
Oct 11 04:11:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:12 np0005481065 podman[77664]: 2025-10-11 08:11:11.958947938 +0000 UTC m=+0.043211271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a5755de299057714c298bff4c497455f8a38f03d25a8ad2b4522b9e308b2e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a5755de299057714c298bff4c497455f8a38f03d25a8ad2b4522b9e308b2e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a5755de299057714c298bff4c497455f8a38f03d25a8ad2b4522b9e308b2e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:12 np0005481065 podman[77664]: 2025-10-11 08:11:12.064787016 +0000 UTC m=+0.149050339 container init 8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e (image=quay.io/ceph/ceph:v18, name=vigorous_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:12 np0005481065 podman[77664]: 2025-10-11 08:11:12.07564582 +0000 UTC m=+0.159909163 container start 8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e (image=quay.io/ceph/ceph:v18, name=vigorous_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:11:12 np0005481065 podman[77664]: 2025-10-11 08:11:12.07952683 +0000 UTC m=+0.163790173 container attach 8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e (image=quay.io/ceph/ceph:v18, name=vigorous_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.501998414 +0000 UTC m=+0.067223080 container create 812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:12 np0005481065 systemd[1]: Started libpod-conmon-812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb.scope.
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.471511136 +0000 UTC m=+0.036735852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.596489743 +0000 UTC m=+0.161714469 container init 812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:11:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.606077178 +0000 UTC m=+0.171301844 container start 812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:11:12 np0005481065 mystifying_bassi[77827]: 167 167
Oct 11 04:11:12 np0005481065 systemd[1]: libpod-812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb.scope: Deactivated successfully.
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.610111792 +0000 UTC m=+0.175336448 container attach 812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:12 np0005481065 conmon[77827]: conmon 812a30e035de75c00586 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb.scope/container/memory.events
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.611950498 +0000 UTC m=+0.177175164 container died 812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-29b1265423ff92f1cad91d989741c4b3064dc5a3260e32036e1422201607a829-merged.mount: Deactivated successfully.
Oct 11 04:11:12 np0005481065 ceph-mgr[74605]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 11 04:11:12 np0005481065 podman[77809]: 2025-10-11 08:11:12.663048951 +0000 UTC m=+0.228273607 container remove 812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:11:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 11 04:11:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2335723249' entity='client.admin' 
Oct 11 04:11:12 np0005481065 systemd[1]: libpod-conmon-812a30e035de75c00586d30c680e68903d9b75747314b089f3547f3a084806bb.scope: Deactivated successfully.
Oct 11 04:11:12 np0005481065 systemd[1]: libpod-8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e.scope: Deactivated successfully.
Oct 11 04:11:12 np0005481065 podman[77664]: 2025-10-11 08:11:12.696520592 +0000 UTC m=+0.780783905 container died 8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e (image=quay.io/ceph/ceph:v18, name=vigorous_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:11:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-29a5755de299057714c298bff4c497455f8a38f03d25a8ad2b4522b9e308b2e3-merged.mount: Deactivated successfully.
Oct 11 04:11:12 np0005481065 podman[77664]: 2025-10-11 08:11:12.751829054 +0000 UTC m=+0.836092367 container remove 8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e (image=quay.io/ceph/ceph:v18, name=vigorous_mccarthy, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:12 np0005481065 systemd[1]: libpod-conmon-8ca8d8af398540a637436a2ba5806d8beced7e9bec656ee2d1cada8badd4b04e.scope: Deactivated successfully.
Oct 11 04:11:12 np0005481065 podman[77855]: 2025-10-11 08:11:12.831919259 +0000 UTC m=+0.056831610 container create b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e (image=quay.io/ceph/ceph:v18, name=mystifying_bartik, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:12 np0005481065 systemd[1]: Started libpod-conmon-b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e.scope.
Oct 11 04:11:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bba6a9e629dbb2b9d369b682e750cf1f0df6175fbd081258d6f9f6bf12851c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bba6a9e629dbb2b9d369b682e750cf1f0df6175fbd081258d6f9f6bf12851c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bba6a9e629dbb2b9d369b682e750cf1f0df6175fbd081258d6f9f6bf12851c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:12 np0005481065 podman[77855]: 2025-10-11 08:11:12.805049422 +0000 UTC m=+0.029961833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:12 np0005481065 podman[77855]: 2025-10-11 08:11:12.916695819 +0000 UTC m=+0.141608250 container init b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e (image=quay.io/ceph/ceph:v18, name=mystifying_bartik, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:11:12 np0005481065 podman[77855]: 2025-10-11 08:11:12.927533782 +0000 UTC m=+0.152446143 container start b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e (image=quay.io/ceph/ceph:v18, name=mystifying_bartik, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:11:12 np0005481065 podman[77855]: 2025-10-11 08:11:12.93199417 +0000 UTC m=+0.156906531 container attach b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e (image=quay.io/ceph/ceph:v18, name=mystifying_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 11 04:11:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2689232976' entity='client.admin' 
Oct 11 04:11:13 np0005481065 mystifying_bartik[77871]: set mgr/dashboard/cluster/status
Oct 11 04:11:13 np0005481065 systemd[1]: libpod-b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e.scope: Deactivated successfully.
Oct 11 04:11:13 np0005481065 podman[77855]: 2025-10-11 08:11:13.574074364 +0000 UTC m=+0.798986685 container died b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e (image=quay.io/ceph/ceph:v18, name=mystifying_bartik, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0bba6a9e629dbb2b9d369b682e750cf1f0df6175fbd081258d6f9f6bf12851c6-merged.mount: Deactivated successfully.
Oct 11 04:11:13 np0005481065 podman[77855]: 2025-10-11 08:11:13.623410022 +0000 UTC m=+0.848322383 container remove b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e (image=quay.io/ceph/ceph:v18, name=mystifying_bartik, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:11:13 np0005481065 systemd[1]: libpod-conmon-b23c16f0a87e3f632feccb7092c3501d70ea982591ab4b29f2ba8039247f452e.scope: Deactivated successfully.
Oct 11 04:11:13 np0005481065 ceph-mon[74313]: Added label _admin to host compute-0
Oct 11 04:11:13 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2335723249' entity='client.admin' 
Oct 11 04:11:13 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2689232976' entity='client.admin' 
Oct 11 04:11:13 np0005481065 podman[77918]: 2025-10-11 08:11:13.886183941 +0000 UTC m=+0.060559325 container create a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:11:13 np0005481065 systemd[1]: Started libpod-conmon-a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205.scope.
Oct 11 04:11:13 np0005481065 podman[77918]: 2025-10-11 08:11:13.864137102 +0000 UTC m=+0.038512486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958d434e8f12b2cbd906872f779b0e99b6ded06ff75e7602eaff272beecc2355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958d434e8f12b2cbd906872f779b0e99b6ded06ff75e7602eaff272beecc2355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958d434e8f12b2cbd906872f779b0e99b6ded06ff75e7602eaff272beecc2355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958d434e8f12b2cbd906872f779b0e99b6ded06ff75e7602eaff272beecc2355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:13 np0005481065 podman[77918]: 2025-10-11 08:11:13.996894238 +0000 UTC m=+0.171269632 container init a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:11:14 np0005481065 podman[77918]: 2025-10-11 08:11:14.011770156 +0000 UTC m=+0.186145530 container start a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatterjee, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:14 np0005481065 podman[77918]: 2025-10-11 08:11:14.016084459 +0000 UTC m=+0.190459893 container attach a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:11:14 np0005481065 python3[77964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:14 np0005481065 podman[77965]: 2025-10-11 08:11:14.382768485 +0000 UTC m=+0.071138280 container create 2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d (image=quay.io/ceph/ceph:v18, name=priceless_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:14 np0005481065 systemd[1]: Started libpod-conmon-2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d.scope.
Oct 11 04:11:14 np0005481065 podman[77965]: 2025-10-11 08:11:14.350496382 +0000 UTC m=+0.038866197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3e939bde2debb06ddf5a259ed7b775cea5317950afb2545490fd07c130e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3e939bde2debb06ddf5a259ed7b775cea5317950afb2545490fd07c130e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:14 np0005481065 podman[77965]: 2025-10-11 08:11:14.48039689 +0000 UTC m=+0.168766735 container init 2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d (image=quay.io/ceph/ceph:v18, name=priceless_thompson, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 11 04:11:14 np0005481065 podman[77965]: 2025-10-11 08:11:14.490200712 +0000 UTC m=+0.178570497 container start 2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d (image=quay.io/ceph/ceph:v18, name=priceless_thompson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:14 np0005481065 podman[77965]: 2025-10-11 08:11:14.495316169 +0000 UTC m=+0.183686024 container attach 2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d (image=quay.io/ceph/ceph:v18, name=priceless_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:11:14 np0005481065 ceph-mgr[74605]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 11 04:11:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:14 np0005481065 ceph-mon[74313]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 11 04:11:14 np0005481065 ceph-mon[74313]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239332463' entity='client.admin' 
Oct 11 04:11:15 np0005481065 systemd[1]: libpod-2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d.scope: Deactivated successfully.
Oct 11 04:11:15 np0005481065 podman[77965]: 2025-10-11 08:11:15.036950291 +0000 UTC m=+0.725320116 container died 2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d (image=quay.io/ceph/ceph:v18, name=priceless_thompson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:11:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ca0fb3e939bde2debb06ddf5a259ed7b775cea5317950afb2545490fd07c130e-merged.mount: Deactivated successfully.
Oct 11 04:11:15 np0005481065 podman[77965]: 2025-10-11 08:11:15.099499947 +0000 UTC m=+0.787869732 container remove 2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d (image=quay.io/ceph/ceph:v18, name=priceless_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:11:15 np0005481065 systemd[1]: libpod-conmon-2272b3b46a4f51b93821b41bbd251a2de3826b33207daa300253e58c54c6043d.scope: Deactivated successfully.
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]: [
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:    {
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "available": false,
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "ceph_device": false,
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "lsm_data": {},
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "lvs": [],
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "path": "/dev/sr0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "rejected_reasons": [
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "Has a FileSystem",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "Insufficient space (<5GB)"
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        ],
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        "sys_api": {
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "actuators": null,
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "device_nodes": "sr0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "devname": "sr0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "human_readable_size": "482.00 KB",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "id_bus": "ata",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "model": "QEMU DVD-ROM",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "nr_requests": "2",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "parent": "/dev/sr0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "partitions": {},
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "path": "/dev/sr0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "removable": "1",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "rev": "2.5+",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "ro": "0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "rotational": "0",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "sas_address": "",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "sas_device_handle": "",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "scheduler_mode": "mq-deadline",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "sectors": 0,
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "sectorsize": "2048",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "size": 493568.0,
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "support_discard": "2048",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "type": "disk",
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:            "vendor": "QEMU"
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:        }
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]:    }
Oct 11 04:11:15 np0005481065 keen_chatterjee[77934]: ]
Oct 11 04:11:15 np0005481065 systemd[1]: libpod-a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205.scope: Deactivated successfully.
Oct 11 04:11:15 np0005481065 podman[77918]: 2025-10-11 08:11:15.492920777 +0000 UTC m=+1.667296121 container died a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatterjee, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:11:15 np0005481065 systemd[1]: libpod-a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205.scope: Consumed 1.510s CPU time.
Oct 11 04:11:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-958d434e8f12b2cbd906872f779b0e99b6ded06ff75e7602eaff272beecc2355-merged.mount: Deactivated successfully.
Oct 11 04:11:15 np0005481065 podman[77918]: 2025-10-11 08:11:15.552870642 +0000 UTC m=+1.727245986 container remove a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:11:15 np0005481065 systemd[1]: libpod-conmon-a7186e0f0fa8495a3da00d3c026727d7a652df22d6f95235e8d8cc6b1fa6e205.scope: Deactivated successfully.
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:15 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 11 04:11:15 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3239332463' entity='client.admin' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:11:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:16 np0005481065 ansible-async_wrapper.py[80187]: Invoked with j502154280234 30 /home/zuul/.ansible/tmp/ansible-tmp-1760170275.5146103-33100-87346337050734/AnsiballZ_command.py _
Oct 11 04:11:16 np0005481065 ansible-async_wrapper.py[80240]: Starting module and watcher
Oct 11 04:11:16 np0005481065 ansible-async_wrapper.py[80240]: Start watching 80241 (30)
Oct 11 04:11:16 np0005481065 ansible-async_wrapper.py[80241]: Start module (80241)
Oct 11 04:11:16 np0005481065 ansible-async_wrapper.py[80187]: Return async_wrapper task started.
Oct 11 04:11:16 np0005481065 python3[80243]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:16 np0005481065 podman[80308]: 2025-10-11 08:11:16.491895766 +0000 UTC m=+0.060699699 container create fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434 (image=quay.io/ceph/ceph:v18, name=admiring_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:16 np0005481065 systemd[1]: Started libpod-conmon-fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434.scope.
Oct 11 04:11:16 np0005481065 podman[80308]: 2025-10-11 08:11:16.457016692 +0000 UTC m=+0.025820715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195cb0b5709a8293f96321669e4ae0e35aee8e54ef68a00f415702c70c5da04a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195cb0b5709a8293f96321669e4ae0e35aee8e54ef68a00f415702c70c5da04a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:16 np0005481065 podman[80308]: 2025-10-11 08:11:16.589265683 +0000 UTC m=+0.158069686 container init fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434 (image=quay.io/ceph/ceph:v18, name=admiring_booth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:16 np0005481065 podman[80308]: 2025-10-11 08:11:16.599187819 +0000 UTC m=+0.167991782 container start fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434 (image=quay.io/ceph/ceph:v18, name=admiring_booth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:11:16 np0005481065 podman[80308]: 2025-10-11 08:11:16.603393778 +0000 UTC m=+0.172197811 container attach fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434 (image=quay.io/ceph/ceph:v18, name=admiring_booth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:11:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:16 np0005481065 ceph-mon[74313]: Updating compute-0:/etc/ceph/ceph.conf
Oct 11 04:11:17 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/config/ceph.conf
Oct 11 04:11:17 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/config/ceph.conf
Oct 11 04:11:17 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:11:17 np0005481065 admiring_booth[80356]: 
Oct 11 04:11:17 np0005481065 admiring_booth[80356]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:11:17 np0005481065 systemd[1]: libpod-fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434.scope: Deactivated successfully.
Oct 11 04:11:17 np0005481065 podman[80308]: 2025-10-11 08:11:17.149694784 +0000 UTC m=+0.718498727 container died fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434 (image=quay.io/ceph/ceph:v18, name=admiring_booth, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-195cb0b5709a8293f96321669e4ae0e35aee8e54ef68a00f415702c70c5da04a-merged.mount: Deactivated successfully.
Oct 11 04:11:17 np0005481065 podman[80308]: 2025-10-11 08:11:17.204037657 +0000 UTC m=+0.772841590 container remove fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434 (image=quay.io/ceph/ceph:v18, name=admiring_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:11:17 np0005481065 systemd[1]: libpod-conmon-fb23fa1925457a0ea5a33feb6c9ba951ccb40daf63136bc7781219c30f143434.scope: Deactivated successfully.
Oct 11 04:11:17 np0005481065 ansible-async_wrapper.py[80241]: Module complete (80241)
Oct 11 04:11:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:17 np0005481065 python3[80743]: ansible-ansible.legacy.async_status Invoked with jid=j502154280234.80187 mode=status _async_dir=/root/.ansible_async
Oct 11 04:11:18 np0005481065 python3[80891]: ansible-ansible.legacy.async_status Invoked with jid=j502154280234.80187 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 04:11:18 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 04:11:18 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 04:11:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:18 np0005481065 python3[81067]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:11:18 np0005481065 ceph-mon[74313]: Updating compute-0:/var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/config/ceph.conf
Oct 11 04:11:19 np0005481065 python3[81253]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.232528905 +0000 UTC m=+0.046399560 container create e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30 (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:19 np0005481065 systemd[1]: Started libpod-conmon-e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30.scope.
Oct 11 04:11:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fae054ce3473f60590c6bed286da0c52308936d1c2e2c4ae5367bf51a079b96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fae054ce3473f60590c6bed286da0c52308936d1c2e2c4ae5367bf51a079b96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fae054ce3473f60590c6bed286da0c52308936d1c2e2c4ae5367bf51a079b96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.209058462 +0000 UTC m=+0.022929117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.319336737 +0000 UTC m=+0.133207412 container init e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30 (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.332907764 +0000 UTC m=+0.146778419 container start e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30 (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.337293959 +0000 UTC m=+0.151164614 container attach e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30 (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:19 np0005481065 ceph-mon[74313]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 11 04:11:19 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:11:19 np0005481065 crazy_matsumoto[81347]: 
Oct 11 04:11:19 np0005481065 crazy_matsumoto[81347]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:11:19 np0005481065 systemd[1]: libpod-e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30.scope: Deactivated successfully.
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.884644037 +0000 UTC m=+0.698514692 container died e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30 (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:11:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8fae054ce3473f60590c6bed286da0c52308936d1c2e2c4ae5367bf51a079b96-merged.mount: Deactivated successfully.
Oct 11 04:11:19 np0005481065 podman[81297]: 2025-10-11 08:11:19.939847387 +0000 UTC m=+0.753718002 container remove e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30 (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:11:19 np0005481065 systemd[1]: libpod-conmon-e9e44db8f31ba613c906e4629678e0551369c6c5007c4e5dd01037db57bcbb30.scope: Deactivated successfully.
Oct 11 04:11:20 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/config/ceph.client.admin.keyring
Oct 11 04:11:20 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/config/ceph.client.admin.keyring
Oct 11 04:11:20 np0005481065 python3[81714]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:20 np0005481065 podman[81752]: 2025-10-11 08:11:20.584091247 +0000 UTC m=+0.063762414 container create b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:20 np0005481065 systemd[1]: Started libpod-conmon-b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0.scope.
Oct 11 04:11:20 np0005481065 podman[81752]: 2025-10-11 08:11:20.557089276 +0000 UTC m=+0.036760483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5453abb4c4858ce7d6f8e51f241e1b5f4896aeb3a91caafba00be2c887a0bad4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5453abb4c4858ce7d6f8e51f241e1b5f4896aeb3a91caafba00be2c887a0bad4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5453abb4c4858ce7d6f8e51f241e1b5f4896aeb3a91caafba00be2c887a0bad4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:20 np0005481065 podman[81752]: 2025-10-11 08:11:20.685690304 +0000 UTC m=+0.165361461 container init b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:20 np0005481065 podman[81752]: 2025-10-11 08:11:20.695692422 +0000 UTC m=+0.175363579 container start b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:11:20 np0005481065 podman[81752]: 2025-10-11 08:11:20.699953423 +0000 UTC m=+0.179624570 container attach b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1568042009' entity='client.admin' 
Oct 11 04:11:21 np0005481065 ansible-async_wrapper.py[80240]: Done in kid B.
Oct 11 04:11:21 np0005481065 systemd[1]: libpod-b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0.scope: Deactivated successfully.
Oct 11 04:11:21 np0005481065 podman[81752]: 2025-10-11 08:11:21.26902197 +0000 UTC m=+0.748693117 container died b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5453abb4c4858ce7d6f8e51f241e1b5f4896aeb3a91caafba00be2c887a0bad4-merged.mount: Deactivated successfully.
Oct 11 04:11:21 np0005481065 podman[81752]: 2025-10-11 08:11:21.316638026 +0000 UTC m=+0.796309143 container remove b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:21 np0005481065 systemd[1]: libpod-conmon-b9165d2b9d2236b68be14734b11ca09adcf09ad0a917eedac0d8e80356cc0eb0.scope: Deactivated successfully.
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:21 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev e976f935-f712-44cc-a193-4d19ff3d799c (Updating crash deployment (+1 -> 1))
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:21 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 11 04:11:21 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 11 04:11:21 np0005481065 python3[82144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: Updating compute-0:/var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/config/ceph.client.admin.keyring
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1568042009' entity='client.admin' 
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 11 04:11:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 11 04:11:21 np0005481065 podman[82193]: 2025-10-11 08:11:21.758948209 +0000 UTC m=+0.064740393 container create 1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5 (image=quay.io/ceph/ceph:v18, name=goofy_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:11:21 np0005481065 systemd[1]: Started libpod-conmon-1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5.scope.
Oct 11 04:11:21 np0005481065 podman[82193]: 2025-10-11 08:11:21.733577048 +0000 UTC m=+0.039369292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2058b5b13939ad529584adc5370d7f3a3cd26325ae836fb63a1abe0b87132e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2058b5b13939ad529584adc5370d7f3a3cd26325ae836fb63a1abe0b87132e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2058b5b13939ad529584adc5370d7f3a3cd26325ae836fb63a1abe0b87132e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:21 np0005481065 podman[82193]: 2025-10-11 08:11:21.852656994 +0000 UTC m=+0.158449228 container init 1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5 (image=quay.io/ceph/ceph:v18, name=goofy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:21 np0005481065 podman[82193]: 2025-10-11 08:11:21.862728524 +0000 UTC m=+0.168520718 container start 1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5 (image=quay.io/ceph/ceph:v18, name=goofy_heisenberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:11:21 np0005481065 podman[82193]: 2025-10-11 08:11:21.866453658 +0000 UTC m=+0.172245852 container attach 1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5 (image=quay.io/ceph/ceph:v18, name=goofy_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.225276703 +0000 UTC m=+0.084894284 container create 6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:11:22 np0005481065 systemd[1]: Started libpod-conmon-6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242.scope.
Oct 11 04:11:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.19493729 +0000 UTC m=+0.054554931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.287892031 +0000 UTC m=+0.147509592 container init 6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.293870105 +0000 UTC m=+0.153487656 container start 6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:22 np0005481065 hungry_fermat[82313]: 167 167
Oct 11 04:11:22 np0005481065 systemd[1]: libpod-6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242.scope: Deactivated successfully.
Oct 11 04:11:22 np0005481065 conmon[82313]: conmon 6f7407d5bd6f1b52a4df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242.scope/container/memory.events
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.298645622 +0000 UTC m=+0.158263203 container attach 6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.299373934 +0000 UTC m=+0.158991475 container died 6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:11:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e563c4f03e717cdd954365783304f84d23400f7b0a1ac71c1339a810acf4882b-merged.mount: Deactivated successfully.
Oct 11 04:11:22 np0005481065 podman[82281]: 2025-10-11 08:11:22.334454134 +0000 UTC m=+0.194071685 container remove 6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:11:22 np0005481065 systemd[1]: libpod-conmon-6f7407d5bd6f1b52a4df88c6bd716a8bbfce7c01efcbd65b48db884c3d435242.scope: Deactivated successfully.
Oct 11 04:11:22 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 11 04:11:22 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:22 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2342370446' entity='client.admin' 
Oct 11 04:11:22 np0005481065 podman[82193]: 2025-10-11 08:11:22.508872993 +0000 UTC m=+0.814665167 container died 1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5 (image=quay.io/ceph/ceph:v18, name=goofy_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:22 np0005481065 systemd[1]: libpod-1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5.scope: Deactivated successfully.
Oct 11 04:11:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b2058b5b13939ad529584adc5370d7f3a3cd26325ae836fb63a1abe0b87132e4-merged.mount: Deactivated successfully.
Oct 11 04:11:22 np0005481065 podman[82193]: 2025-10-11 08:11:22.677362929 +0000 UTC m=+0.983155083 container remove 1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5 (image=quay.io/ceph/ceph:v18, name=goofy_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:22 np0005481065 systemd[1]: libpod-conmon-1f170421cb240cba59841c70a5384082ee465513ef6c6b81fe7a88c9ada809d5.scope: Deactivated successfully.
Oct 11 04:11:22 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:22 np0005481065 ceph-mon[74313]: Deploying daemon crash.compute-0 on compute-0
Oct 11 04:11:22 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2342370446' entity='client.admin' 
Oct 11 04:11:22 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:22 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:22 np0005481065 systemd[1]: Starting Ceph crash.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:11:23 np0005481065 python3[82452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:23 np0005481065 podman[82489]: 2025-10-11 08:11:23.238337196 +0000 UTC m=+0.058206782 container create f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2 (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:11:23 np0005481065 podman[82511]: 2025-10-11 08:11:23.284278361 +0000 UTC m=+0.050998571 container create d685bb9538c2d54a199d5b36dac8cb88c703ade3531e2506569ffb1afed55aa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:11:23 np0005481065 systemd[1]: Started libpod-conmon-f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2.scope.
Oct 11 04:11:23 np0005481065 podman[82489]: 2025-10-11 08:11:23.214853604 +0000 UTC m=+0.034723200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cebc2f6bd01845961cef23d47ea2187d0cbbca0f85e3494a5389f51f24e6f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cebc2f6bd01845961cef23d47ea2187d0cbbca0f85e3494a5389f51f24e6f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cebc2f6bd01845961cef23d47ea2187d0cbbca0f85e3494a5389f51f24e6f2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35993f085c0fcec0e564deb2b2e07764b43b22d6b47e14cb6eabf8374ef96c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35993f085c0fcec0e564deb2b2e07764b43b22d6b47e14cb6eabf8374ef96c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35993f085c0fcec0e564deb2b2e07764b43b22d6b47e14cb6eabf8374ef96c2e/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35993f085c0fcec0e564deb2b2e07764b43b22d6b47e14cb6eabf8374ef96c2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:23 np0005481065 podman[82489]: 2025-10-11 08:11:23.352325765 +0000 UTC m=+0.172195341 container init f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2 (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:23 np0005481065 podman[82511]: 2025-10-11 08:11:23.260425276 +0000 UTC m=+0.027145566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:23 np0005481065 podman[82489]: 2025-10-11 08:11:23.366260334 +0000 UTC m=+0.186129890 container start f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2 (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 04:11:23 np0005481065 podman[82511]: 2025-10-11 08:11:23.367173472 +0000 UTC m=+0.133893712 container init d685bb9538c2d54a199d5b36dac8cb88c703ade3531e2506569ffb1afed55aa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:23 np0005481065 podman[82489]: 2025-10-11 08:11:23.370247587 +0000 UTC m=+0.190117153 container attach f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2 (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:11:23 np0005481065 podman[82511]: 2025-10-11 08:11:23.373554929 +0000 UTC m=+0.140275139 container start d685bb9538c2d54a199d5b36dac8cb88c703ade3531e2506569ffb1afed55aa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:23 np0005481065 bash[82511]: d685bb9538c2d54a199d5b36dac8cb88c703ade3531e2506569ffb1afed55aa1
Oct 11 04:11:23 np0005481065 systemd[1]: Started Ceph crash.compute-0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev e976f935-f712-44cc-a193-4d19ff3d799c (Updating crash deployment (+1 -> 1))
Oct 11 04:11:23 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event e976f935-f712-44cc-a193-4d19ff3d799c (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 976bc712-4efc-41d0-9dcf-0b54bff2b860 does not exist
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev 99c2304b-5c68-4dd1-81f5-95a76996d644 (Updating mgr deployment (+1 -> 2))
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.elpwhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.elpwhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.elpwhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:23 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.elpwhl on compute-0
Oct 11 04:11:23 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.elpwhl on compute-0
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.elpwhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.elpwhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: 2025-10-11T08:11:23.785+0000 7fb3caa34640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: 2025-10-11T08:11:23.785+0000 7fb3caa34640 -1 AuthRegistry(0x7fb3c4066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: 2025-10-11T08:11:23.787+0000 7fb3caa34640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: 2025-10-11T08:11:23.787+0000 7fb3caa34640 -1 AuthRegistry(0x7fb3caa33000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: 2025-10-11T08:11:23.788+0000 7fb3c3fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: 2025-10-11T08:11:23.788+0000 7fb3caa34640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 11 04:11:23 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-crash-compute-0[82534]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 11 04:11:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2078872399' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.136927056 +0000 UTC m=+0.051819076 container create 8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:11:24 np0005481065 systemd[1]: Started libpod-conmon-8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da.scope.
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.118585731 +0000 UTC m=+0.033477731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.237006027 +0000 UTC m=+0.151898057 container init 8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.248031636 +0000 UTC m=+0.162923656 container start 8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.252013798 +0000 UTC m=+0.166905818 container attach 8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:24 np0005481065 jovial_sinoussi[82729]: 167 167
Oct 11 04:11:24 np0005481065 systemd[1]: libpod-8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da.scope: Deactivated successfully.
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.257758235 +0000 UTC m=+0.172650265 container died 8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4ce2257068d742996612dde6be395feb53cff3650d09303e62fdfedab03a7307-merged.mount: Deactivated successfully.
Oct 11 04:11:24 np0005481065 podman[82713]: 2025-10-11 08:11:24.308324312 +0000 UTC m=+0.223216332 container remove 8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:24 np0005481065 systemd[1]: libpod-conmon-8fb99490b0ed08a6a4c63dfdaf4c4acc6ad6e7735f7f18e94d09fe5e3a3494da.scope: Deactivated successfully.
Oct 11 04:11:24 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:24 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:24 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 1 completed events
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2078872399' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:24 np0005481065 loving_proskuriakova[82530]: set require_min_compat_client to mimic
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: Deploying daemon mgr.compute-0.elpwhl on compute-0
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2078872399' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 11 04:11:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:24 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:24 np0005481065 podman[82489]: 2025-10-11 08:11:24.769857268 +0000 UTC m=+1.589726844 container died f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2 (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:11:24 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:24 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:25 np0005481065 systemd[1]: libpod-f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2.scope: Deactivated successfully.
Oct 11 04:11:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b3cebc2f6bd01845961cef23d47ea2187d0cbbca0f85e3494a5389f51f24e6f2-merged.mount: Deactivated successfully.
Oct 11 04:11:25 np0005481065 systemd[1]: Starting Ceph mgr.compute-0.elpwhl for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:11:25 np0005481065 podman[82489]: 2025-10-11 08:11:25.097539734 +0000 UTC m=+1.917409280 container remove f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2 (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:25 np0005481065 systemd[1]: libpod-conmon-f52dfbcc7a1513f0401ade1e08afedd1b9cba2113444313ca15d6ba482c527d2.scope: Deactivated successfully.
Oct 11 04:11:25 np0005481065 podman[82887]: 2025-10-11 08:11:25.375704276 +0000 UTC m=+0.058403389 container create 6f33ce08af3dd835a4ce5c11c8c1534e7bd27c4b8dfcb009bd11683867e88582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:11:25 np0005481065 podman[82887]: 2025-10-11 08:11:25.347558819 +0000 UTC m=+0.030257982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b68017e0c49508c4f65ffbbda0a2a1c87ffa9ea9c8590e935bba4a97a6d2ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b68017e0c49508c4f65ffbbda0a2a1c87ffa9ea9c8590e935bba4a97a6d2ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b68017e0c49508c4f65ffbbda0a2a1c87ffa9ea9c8590e935bba4a97a6d2ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b68017e0c49508c4f65ffbbda0a2a1c87ffa9ea9c8590e935bba4a97a6d2ffc/merged/var/lib/ceph/mgr/ceph-compute-0.elpwhl supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 podman[82887]: 2025-10-11 08:11:25.478923133 +0000 UTC m=+0.161622256 container init 6f33ce08af3dd835a4ce5c11c8c1534e7bd27c4b8dfcb009bd11683867e88582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:11:25 np0005481065 podman[82887]: 2025-10-11 08:11:25.492755639 +0000 UTC m=+0.175454762 container start 6f33ce08af3dd835a4ce5c11c8c1534e7bd27c4b8dfcb009bd11683867e88582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:25 np0005481065 bash[82887]: 6f33ce08af3dd835a4ce5c11c8c1534e7bd27c4b8dfcb009bd11683867e88582
Oct 11 04:11:25 np0005481065 systemd[1]: Started Ceph mgr.compute-0.elpwhl for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:11:25 np0005481065 ceph-mgr[82906]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:11:25 np0005481065 ceph-mgr[82906]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 11 04:11:25 np0005481065 ceph-mgr[82906]: pidfile_write: ignore empty --pid-file
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev 99c2304b-5c68-4dd1-81f5-95a76996d644 (Updating mgr deployment (+1 -> 2))
Oct 11 04:11:25 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 99c2304b-5c68-4dd1-81f5-95a76996d644 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'alerts'
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2078872399' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:25 np0005481065 python3[82980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:25 np0005481065 podman[83038]: 2025-10-11 08:11:25.860763746 +0000 UTC m=+0.050583968 container create 3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90 (image=quay.io/ceph/ceph:v18, name=eager_heyrovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:11:25 np0005481065 systemd[1]: Started libpod-conmon-3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90.scope.
Oct 11 04:11:25 np0005481065 podman[83038]: 2025-10-11 08:11:25.834040484 +0000 UTC m=+0.023860766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140df98632af478f7d1bcc8030255a13fdac267397b2d01d64a45aca501cf932/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140df98632af478f7d1bcc8030255a13fdac267397b2d01d64a45aca501cf932/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140df98632af478f7d1bcc8030255a13fdac267397b2d01d64a45aca501cf932/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:25 np0005481065 podman[83038]: 2025-10-11 08:11:25.952690376 +0000 UTC m=+0.142510668 container init 3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90 (image=quay.io/ceph/ceph:v18, name=eager_heyrovsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:11:25 np0005481065 podman[83038]: 2025-10-11 08:11:25.966416989 +0000 UTC m=+0.156237251 container start 3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90 (image=quay.io/ceph/ceph:v18, name=eager_heyrovsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:25 np0005481065 podman[83038]: 2025-10-11 08:11:25.970444583 +0000 UTC m=+0.160264845 container attach 3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90 (image=quay.io/ceph/ceph:v18, name=eager_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:25 np0005481065 ceph-mgr[82906]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:11:25 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'balancer'
Oct 11 04:11:25 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:25.974+0000 7f36f50f8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 11 04:11:26 np0005481065 ceph-mgr[82906]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:11:26 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:26.218+0000 7f36f50f8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 11 04:11:26 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'cephadm'
Oct 11 04:11:26 np0005481065 podman[83215]: 2025-10-11 08:11:26.493239925 +0000 UTC m=+0.078402065 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:11:26 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:11:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:26 np0005481065 podman[83215]: 2025-10-11 08:11:26.664198777 +0000 UTC m=+0.249360877 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9df607cb-93ce-4611-b27b-7951d8dc4c53 does not exist
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 221c74ad-097f-4007-96e2-8c38ab8041bd does not exist
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 72e1d8dc-43ff-4fff-b937-c4f1fff9e64e does not exist
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Added host compute-0
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 eager_heyrovsky[83094]: Added host 'compute-0' with addr '192.168.122.100'
Oct 11 04:11:27 np0005481065 eager_heyrovsky[83094]: Scheduled mon update...
Oct 11 04:11:27 np0005481065 eager_heyrovsky[83094]: Scheduled mgr update...
Oct 11 04:11:27 np0005481065 eager_heyrovsky[83094]: Scheduled osd.default_drive_group update...
Oct 11 04:11:27 np0005481065 systemd[1]: libpod-3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90.scope: Deactivated successfully.
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 11 04:11:27 np0005481065 podman[83469]: 2025-10-11 08:11:27.359112337 +0000 UTC m=+0.022145413 container died 3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90 (image=quay.io/ceph/ceph:v18, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 04:11:27 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 04:11:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-140df98632af478f7d1bcc8030255a13fdac267397b2d01d64a45aca501cf932-merged.mount: Deactivated successfully.
Oct 11 04:11:27 np0005481065 podman[83469]: 2025-10-11 08:11:27.395847538 +0000 UTC m=+0.058880574 container remove 3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90 (image=quay.io/ceph/ceph:v18, name=eager_heyrovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:11:27 np0005481065 systemd[1]: libpod-conmon-3df87698c79140ec9cd1ab5fd0580b93724f5a60d2964442ae438c015ce3de90.scope: Deactivated successfully.
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 11 04:11:27 np0005481065 python3[83620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:27 np0005481065 podman[83637]: 2025-10-11 08:11:27.908104016 +0000 UTC m=+0.048596487 container create 5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:11:27 np0005481065 systemd[1]: Started libpod-conmon-5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb.scope.
Oct 11 04:11:27 np0005481065 podman[83650]: 2025-10-11 08:11:27.967115962 +0000 UTC m=+0.063391602 container create 09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928 (image=quay.io/ceph/ceph:v18, name=goofy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:11:27 np0005481065 podman[83637]: 2025-10-11 08:11:27.880264819 +0000 UTC m=+0.020757370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:27 np0005481065 podman[83637]: 2025-10-11 08:11:27.996508677 +0000 UTC m=+0.137001238 container init 5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:28 np0005481065 podman[83637]: 2025-10-11 08:11:28.002187822 +0000 UTC m=+0.142680323 container start 5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:11:28 np0005481065 podman[83637]: 2025-10-11 08:11:28.005743071 +0000 UTC m=+0.146235562 container attach 5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:28 np0005481065 keen_bassi[83667]: 167 167
Oct 11 04:11:28 np0005481065 podman[83637]: 2025-10-11 08:11:28.009580859 +0000 UTC m=+0.150073350 container died 5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:11:28 np0005481065 systemd[1]: Started libpod-conmon-09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928.scope.
Oct 11 04:11:28 np0005481065 systemd[1]: libpod-5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb.scope: Deactivated successfully.
Oct 11 04:11:28 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'crash'
Oct 11 04:11:28 np0005481065 podman[83650]: 2025-10-11 08:11:27.949536971 +0000 UTC m=+0.045812641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3c147469cc08dd025555a59ce370cf3a29d30c02c00377d476f03dfb743d0ece-merged.mount: Deactivated successfully.
Oct 11 04:11:28 np0005481065 podman[83637]: 2025-10-11 08:11:28.049733105 +0000 UTC m=+0.190225576 container remove 5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6c861499485bb6fb45e52d9f126cd85d2f49b0cc1ca0a2dae793abd90599b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6c861499485bb6fb45e52d9f126cd85d2f49b0cc1ca0a2dae793abd90599b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6c861499485bb6fb45e52d9f126cd85d2f49b0cc1ca0a2dae793abd90599b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:28 np0005481065 podman[83650]: 2025-10-11 08:11:28.070556826 +0000 UTC m=+0.166832506 container init 09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928 (image=quay.io/ceph/ceph:v18, name=goofy_stonebraker, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:11:28 np0005481065 podman[83650]: 2025-10-11 08:11:28.080295816 +0000 UTC m=+0.176571506 container start 09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928 (image=quay.io/ceph/ceph:v18, name=goofy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:11:28 np0005481065 systemd[1]: libpod-conmon-5f0030e6c67e10612cacef2c24eb421f37fd12fdc93f60995231c8a4f323a4bb.scope: Deactivated successfully.
Oct 11 04:11:28 np0005481065 podman[83650]: 2025-10-11 08:11:28.087953082 +0000 UTC m=+0.184228752 container attach 09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928 (image=quay.io/ceph/ceph:v18, name=goofy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:28 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hcsgrm (unknown last config time)...
Oct 11 04:11:28 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hcsgrm (unknown last config time)...
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hcsgrm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hcsgrm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:28 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hcsgrm on compute-0
Oct 11 04:11:28 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hcsgrm on compute-0
Oct 11 04:11:28 np0005481065 ceph-mgr[82906]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:11:28 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'dashboard'
Oct 11 04:11:28 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:28.325+0000 7f36f50f8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 11 04:11:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391893711' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:11:28 np0005481065 goofy_stonebraker[83678]: 
Oct 11 04:11:28 np0005481065 goofy_stonebraker[83678]: {"fsid":"33219f8b-dc38-5a8f-a577-8ccc4b37190a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-11T08:10:04.514161+0000","services":{}},"progress_events":{}}
Oct 11 04:11:28 np0005481065 systemd[1]: libpod-09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928.scope: Deactivated successfully.
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.738072522 +0000 UTC m=+0.044551422 container create c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:11:28 np0005481065 podman[83836]: 2025-10-11 08:11:28.764364201 +0000 UTC m=+0.047435251 container died 09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928 (image=quay.io/ceph/ceph:v18, name=goofy_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:11:28 np0005481065 systemd[1]: Started libpod-conmon-c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb.scope.
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Added host compute-0
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Saving service mon spec with placement compute-0
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Saving service mgr spec with placement compute-0
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Saving service osd.default_drive_group spec with placement compute-0
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hcsgrm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 11 04:11:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.71365713 +0000 UTC m=+0.020136050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7b6c861499485bb6fb45e52d9f126cd85d2f49b0cc1ca0a2dae793abd90599b1-merged.mount: Deactivated successfully.
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.824370358 +0000 UTC m=+0.130849268 container init c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.831267931 +0000 UTC m=+0.137746811 container start c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:11:28 np0005481065 loving_sutherland[83858]: 167 167
Oct 11 04:11:28 np0005481065 podman[83836]: 2025-10-11 08:11:28.836300186 +0000 UTC m=+0.119371206 container remove 09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928 (image=quay.io/ceph/ceph:v18, name=goofy_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:11:28 np0005481065 systemd[1]: libpod-c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb.scope: Deactivated successfully.
Oct 11 04:11:28 np0005481065 conmon[83858]: conmon c448cc7f0fe650e03a3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb.scope/container/memory.events
Oct 11 04:11:28 np0005481065 systemd[1]: libpod-conmon-09dbeb3ca2c7c6497acc87291f3c84629a26cb5da0e55eae00165eb942abd928.scope: Deactivated successfully.
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.84328042 +0000 UTC m=+0.149759300 container attach c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.843960031 +0000 UTC m=+0.150438911 container died c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-18dcb9de9c9a31c36969f8eefa36131b4019317bf4130e0c8370e838e709ef95-merged.mount: Deactivated successfully.
Oct 11 04:11:28 np0005481065 podman[83829]: 2025-10-11 08:11:28.887736309 +0000 UTC m=+0.194215189 container remove c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:28 np0005481065 systemd[1]: libpod-conmon-c448cc7f0fe650e03a3ca1b9630120fb466acc125f6b3d8a62158203435168cb.scope: Deactivated successfully.
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:29 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'devicehealth'
Oct 11 04:11:29 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 2 completed events
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: Reconfiguring mgr.compute-0.hcsgrm (unknown last config time)...
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: Reconfiguring daemon mgr.compute-0.hcsgrm on compute-0
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:29 np0005481065 podman[84050]: 2025-10-11 08:11:29.860486831 +0000 UTC m=+0.083042637 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:29 np0005481065 ceph-mgr[82906]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:11:29 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'diskprediction_local'
Oct 11 04:11:29 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:29.940+0000 7f36f50f8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 11 04:11:29 np0005481065 podman[84050]: 2025-10-11 08:11:29.985556431 +0000 UTC m=+0.208112217 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:30 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 11 04:11:30 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 11 04:11:30 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]:  from numpy import show_config as show_numpy_config
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dc301dc3-3e8b-4400-a3cd-3301901391ef does not exist
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 11 04:11:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:30 np0005481065 ceph-mgr[82906]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:11:30 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'influx'
Oct 11 04:11:30 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:30.460+0000 7f36f50f8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 11 04:11:30 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev bea91b04-4e58-4331-893c-552301287aee (Updating mgr deployment (-1 -> 1))
Oct 11 04:11:30 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.elpwhl from compute-0 -- ports [8765]
Oct 11 04:11:30 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.elpwhl from compute-0 -- ports [8765]
Oct 11 04:11:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:30 np0005481065 ceph-mgr[82906]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:11:30 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'insights'
Oct 11 04:11:30 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:30.689+0000 7f36f50f8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 11 04:11:30 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'iostat'
Oct 11 04:11:31 np0005481065 ceph-mgr[82906]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:11:31 np0005481065 ceph-mgr[82906]: mgr[py] Loading python module 'k8sevents'
Oct 11 04:11:31 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl[82902]: 2025-10-11T08:11:31.160+0000 7f36f50f8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 11 04:11:31 np0005481065 systemd[1]: Stopping Ceph mgr.compute-0.elpwhl for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:31 np0005481065 ceph-mon[74313]: Removing daemon mgr.compute-0.elpwhl from compute-0 -- ports [8765]
Oct 11 04:11:31 np0005481065 podman[84311]: 2025-10-11 08:11:31.441004941 +0000 UTC m=+0.099347769 container died 6f33ce08af3dd835a4ce5c11c8c1534e7bd27c4b8dfcb009bd11683867e88582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1b68017e0c49508c4f65ffbbda0a2a1c87ffa9ea9c8590e935bba4a97a6d2ffc-merged.mount: Deactivated successfully.
Oct 11 04:11:31 np0005481065 podman[84311]: 2025-10-11 08:11:31.493182527 +0000 UTC m=+0.151525285 container remove 6f33ce08af3dd835a4ce5c11c8c1534e7bd27c4b8dfcb009bd11683867e88582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:31 np0005481065 bash[84311]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-elpwhl
Oct 11 04:11:31 np0005481065 systemd[1]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mgr.compute-0.elpwhl.service: Main process exited, code=exited, status=143/n/a
Oct 11 04:11:31 np0005481065 systemd[1]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mgr.compute-0.elpwhl.service: Failed with result 'exit-code'.
Oct 11 04:11:31 np0005481065 systemd[1]: Stopped Ceph mgr.compute-0.elpwhl for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:11:31 np0005481065 systemd[1]: ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mgr.compute-0.elpwhl.service: Consumed 6.946s CPU time.
Oct 11 04:11:31 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:31 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:31 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:32 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.elpwhl
Oct 11 04:11:32 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.elpwhl
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.elpwhl"} v 0) v1
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.elpwhl"}]: dispatch
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.elpwhl"}]': finished
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:32 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev bea91b04-4e58-4331-893c-552301287aee (Updating mgr deployment (-1 -> 1))
Oct 11 04:11:32 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event bea91b04-4e58-4331-893c-552301287aee (Updating mgr deployment (-1 -> 1)) in 2 seconds
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:32 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2fe9c786-7140-4d3e-aee1-317dbb0fbca4 does not exist
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.elpwhl"}]: dispatch
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.elpwhl"}]': finished
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:11:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.834541835 +0000 UTC m=+0.041299873 container create 85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:32 np0005481065 systemd[1]: Started libpod-conmon-85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc.scope.
Oct 11 04:11:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.817756618 +0000 UTC m=+0.024514676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.928726053 +0000 UTC m=+0.135484161 container init 85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_engelbart, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.941350842 +0000 UTC m=+0.148108910 container start 85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_engelbart, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.94518319 +0000 UTC m=+0.151941308 container attach 85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:11:32 np0005481065 systemd[1]: libpod-85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc.scope: Deactivated successfully.
Oct 11 04:11:32 np0005481065 gallant_engelbart[84565]: 167 167
Oct 11 04:11:32 np0005481065 conmon[84565]: conmon 85692ea24528d3107d60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc.scope/container/memory.events
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.950196044 +0000 UTC m=+0.156954112 container died 85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_engelbart, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-536ddca6d625395a4ddd72659e4a9409f522efd4d5d0b33ba81f3f05ce74ba0d-merged.mount: Deactivated successfully.
Oct 11 04:11:32 np0005481065 podman[84549]: 2025-10-11 08:11:32.988898586 +0000 UTC m=+0.195656654 container remove 85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:11:33 np0005481065 systemd[1]: libpod-conmon-85692ea24528d3107d6033cbd8100846fa9423acf658819abc0e66729087c9cc.scope: Deactivated successfully.
Oct 11 04:11:33 np0005481065 podman[84588]: 2025-10-11 08:11:33.192143002 +0000 UTC m=+0.060425731 container create 765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:33 np0005481065 systemd[1]: Started libpod-conmon-765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f.scope.
Oct 11 04:11:33 np0005481065 podman[84588]: 2025-10-11 08:11:33.170622769 +0000 UTC m=+0.038905518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af0362b54198f98536812186fe2d7f25ae9cc1632f8936d27e0a20ddc324dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af0362b54198f98536812186fe2d7f25ae9cc1632f8936d27e0a20ddc324dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af0362b54198f98536812186fe2d7f25ae9cc1632f8936d27e0a20ddc324dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af0362b54198f98536812186fe2d7f25ae9cc1632f8936d27e0a20ddc324dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af0362b54198f98536812186fe2d7f25ae9cc1632f8936d27e0a20ddc324dd6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:33 np0005481065 podman[84588]: 2025-10-11 08:11:33.304091238 +0000 UTC m=+0.172373997 container init 765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:33 np0005481065 podman[84588]: 2025-10-11 08:11:33.319267805 +0000 UTC m=+0.187550514 container start 765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:11:33 np0005481065 podman[84588]: 2025-10-11 08:11:33.336894417 +0000 UTC m=+0.205177156 container attach 765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:33 np0005481065 ceph-mon[74313]: Removing key for mgr.compute-0.elpwhl
Oct 11 04:11:34 np0005481065 condescending_curran[84604]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:11:34 np0005481065 condescending_curran[84604]: --> relative data size: 1.0
Oct 11 04:11:34 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:11:34 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bd31cfe9-266a-4479-a7f9-659fff14b3e1
Oct 11 04:11:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:34 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 3 completed events
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1"} v 0) v1
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2613779221' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1"}]: dispatch
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2613779221' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1"}]': finished
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:34 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 11 04:11:35 np0005481065 lvm[84666]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:11:35 np0005481065 lvm[84666]: VG ceph_vg0 finished
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 11 04:11:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 04:11:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566740397' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: stderr: got monmap epoch 1
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: --> Creating keyring file for osd.0
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 11 04:11:35 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bd31cfe9-266a-4479-a7f9-659fff14b3e1 --setuser ceph --setgroup ceph
Oct 11 04:11:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:35 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2613779221' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1"}]: dispatch
Oct 11 04:11:35 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2613779221' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1"}]': finished
Oct 11 04:11:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 11 04:11:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 04:11:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:37 np0005481065 ceph-mon[74313]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 11 04:11:37 np0005481065 ceph-mon[74313]: Cluster is now healthy
Oct 11 04:11:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:35.676+0000 7f80d026c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:35.676+0000 7f80d026c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:35.676+0000 7f80d026c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:35.677+0000 7f80d026c740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:11:38 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c6e730c8-7075-45e5-bf72-061b73088f5b
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b"} v 0) v1
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2802819823' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b"}]: dispatch
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2802819823' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b"}]': finished
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:11:39 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:11:39 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:11:39 np0005481065 lvm[85596]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:11:39 np0005481065 lvm[85596]: VG ceph_vg1 finished
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:39 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2802819823' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b"}]: dispatch
Oct 11 04:11:39 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2802819823' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b"}]': finished
Oct 11 04:11:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 04:11:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/340759144' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:11:40 np0005481065 condescending_curran[84604]: stderr: got monmap epoch 1
Oct 11 04:11:40 np0005481065 condescending_curran[84604]: --> Creating keyring file for osd.1
Oct 11 04:11:40 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 11 04:11:40 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 11 04:11:40 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid c6e730c8-7075-45e5-bf72-061b73088f5b --setuser ceph --setgroup ceph
Oct 11 04:11:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:40.371+0000 7fa936bda740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:40.371+0000 7fa936bda740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:40.371+0000 7fa936bda740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:40.372+0000 7fa936bda740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 11 04:11:42 np0005481065 condescending_curran[84604]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9a8d87fe-940e-4960-94fb-405e22e6e9ee
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee"} v 0) v1
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/499405981' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee"}]: dispatch
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/499405981' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee"}]': finished
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:11:43 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:11:43 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:11:43 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:11:43 np0005481065 lvm[86530]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:11:43 np0005481065 lvm[86530]: VG ceph_vg2 finished
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:11:43 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/499405981' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee"}]: dispatch
Oct 11 04:11:43 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/499405981' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee"}]': finished
Oct 11 04:11:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 11 04:11:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962508159' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 11 04:11:44 np0005481065 condescending_curran[84604]: stderr: got monmap epoch 1
Oct 11 04:11:44 np0005481065 condescending_curran[84604]: --> Creating keyring file for osd.2
Oct 11 04:11:44 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 11 04:11:44 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 11 04:11:44 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9a8d87fe-940e-4960-94fb-405e22e6e9ee --setuser ceph --setgroup ceph
Oct 11 04:11:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:44.232+0000 7f918c1cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:44.232+0000 7f918c1cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:44.232+0000 7f918c1cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: stderr: 2025-10-11T08:11:44.233+0000 7f918c1cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 11 04:11:46 np0005481065 condescending_curran[84604]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 11 04:11:46 np0005481065 systemd[1]: libpod-765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f.scope: Deactivated successfully.
Oct 11 04:11:46 np0005481065 systemd[1]: libpod-765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f.scope: Consumed 6.696s CPU time.
Oct 11 04:11:46 np0005481065 podman[84588]: 2025-10-11 08:11:46.609245464 +0000 UTC m=+13.477528263 container died 765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4af0362b54198f98536812186fe2d7f25ae9cc1632f8936d27e0a20ddc324dd6-merged.mount: Deactivated successfully.
Oct 11 04:11:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:46 np0005481065 podman[84588]: 2025-10-11 08:11:46.681904919 +0000 UTC m=+13.550187618 container remove 765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_curran, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:11:46 np0005481065 systemd[1]: libpod-conmon-765e1980b8a6cf2b1145e0d1f273ef466001210448f8a69ad52ec4c258e7e54f.scope: Deactivated successfully.
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.418358814 +0000 UTC m=+0.067477968 container create f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:11:47 np0005481065 systemd[1]: Started libpod-conmon-f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113.scope.
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.388854191 +0000 UTC m=+0.037973335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.515252512 +0000 UTC m=+0.164371676 container init f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.528462389 +0000 UTC m=+0.177581553 container start f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.532587857 +0000 UTC m=+0.181707011 container attach f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:11:47 np0005481065 pensive_bouman[87605]: 167 167
Oct 11 04:11:47 np0005481065 systemd[1]: libpod-f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113.scope: Deactivated successfully.
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.535449258 +0000 UTC m=+0.184568442 container died f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-804fb6f707d320e8ed811381ff81849b5cf19fce39ae5b9749edbe6abfb951da-merged.mount: Deactivated successfully.
Oct 11 04:11:47 np0005481065 podman[87588]: 2025-10-11 08:11:47.584674944 +0000 UTC m=+0.233794088 container remove f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:11:47 np0005481065 systemd[1]: libpod-conmon-f4383de75090abb10b22728a8a0a17fd470bf73fd5558cce322a4c5603706113.scope: Deactivated successfully.
Oct 11 04:11:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:47 np0005481065 podman[87630]: 2025-10-11 08:11:47.845417112 +0000 UTC m=+0.097194587 container create bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:47 np0005481065 podman[87630]: 2025-10-11 08:11:47.788875987 +0000 UTC m=+0.040653442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:47 np0005481065 systemd[1]: Started libpod-conmon-bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34.scope.
Oct 11 04:11:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823775387b6d8b6d10fcb60362507d5ecb2a2e0adcec42b7f6b9b241f9c8477/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823775387b6d8b6d10fcb60362507d5ecb2a2e0adcec42b7f6b9b241f9c8477/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823775387b6d8b6d10fcb60362507d5ecb2a2e0adcec42b7f6b9b241f9c8477/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823775387b6d8b6d10fcb60362507d5ecb2a2e0adcec42b7f6b9b241f9c8477/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:47 np0005481065 podman[87630]: 2025-10-11 08:11:47.953985663 +0000 UTC m=+0.205763198 container init bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:47 np0005481065 podman[87630]: 2025-10-11 08:11:47.966448919 +0000 UTC m=+0.218226394 container start bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:11:47 np0005481065 podman[87630]: 2025-10-11 08:11:47.974623852 +0000 UTC m=+0.226401297 container attach bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:48 np0005481065 elastic_carver[87646]: {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:    "0": [
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:        {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "devices": [
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "/dev/loop3"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            ],
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_name": "ceph_lv0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_size": "21470642176",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "name": "ceph_lv0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "tags": {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cluster_name": "ceph",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.crush_device_class": "",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.encrypted": "0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osd_id": "0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.type": "block",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.vdo": "0"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            },
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "type": "block",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "vg_name": "ceph_vg0"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:        }
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:    ],
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:    "1": [
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:        {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "devices": [
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "/dev/loop4"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            ],
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_name": "ceph_lv1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_size": "21470642176",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "name": "ceph_lv1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "tags": {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cluster_name": "ceph",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.crush_device_class": "",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.encrypted": "0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osd_id": "1",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.type": "block",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.vdo": "0"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            },
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "type": "block",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "vg_name": "ceph_vg1"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:        }
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:    ],
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:    "2": [
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:        {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "devices": [
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "/dev/loop5"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            ],
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_name": "ceph_lv2",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_size": "21470642176",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "name": "ceph_lv2",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "tags": {
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.cluster_name": "ceph",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.crush_device_class": "",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.encrypted": "0",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osd_id": "2",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.type": "block",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:                "ceph.vdo": "0"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            },
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "type": "block",
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:            "vg_name": "ceph_vg2"
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:        }
Oct 11 04:11:48 np0005481065 elastic_carver[87646]:    ]
Oct 11 04:11:48 np0005481065 elastic_carver[87646]: }
Oct 11 04:11:48 np0005481065 systemd[1]: libpod-bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34.scope: Deactivated successfully.
Oct 11 04:11:48 np0005481065 podman[87630]: 2025-10-11 08:11:48.707250088 +0000 UTC m=+0.959027533 container died bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:11:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5823775387b6d8b6d10fcb60362507d5ecb2a2e0adcec42b7f6b9b241f9c8477-merged.mount: Deactivated successfully.
Oct 11 04:11:48 np0005481065 podman[87630]: 2025-10-11 08:11:48.754638871 +0000 UTC m=+1.006416306 container remove bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:11:48 np0005481065 systemd[1]: libpod-conmon-bf634759d7d60b78fdc33ae9430699ec8302433b7de68944446d43d0a5dd4a34.scope: Deactivated successfully.
Oct 11 04:11:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 11 04:11:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 11 04:11:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:48 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 11 04:11:48 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.442868039 +0000 UTC m=+0.047919030 container create f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:49 np0005481065 systemd[1]: Started libpod-conmon-f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46.scope.
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.422354303 +0000 UTC m=+0.027405314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.535556846 +0000 UTC m=+0.140607907 container init f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.542664509 +0000 UTC m=+0.147715500 container start f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.546337664 +0000 UTC m=+0.151388675 container attach f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:11:49 np0005481065 unruffled_swirles[87826]: 167 167
Oct 11 04:11:49 np0005481065 systemd[1]: libpod-f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46.scope: Deactivated successfully.
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.548270179 +0000 UTC m=+0.153321160 container died f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:11:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-495d6960068f564fc4d6bc48f8f9081ebb9a93b39d422a995ebbd37e9252fb6d-merged.mount: Deactivated successfully.
Oct 11 04:11:49 np0005481065 podman[87809]: 2025-10-11 08:11:49.580311184 +0000 UTC m=+0.185362165 container remove f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:49 np0005481065 systemd[1]: libpod-conmon-f1985531a164f02a38ca70ac34028e6391276b9e21ee6462066cb4128d571b46.scope: Deactivated successfully.
Oct 11 04:11:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 11 04:11:49 np0005481065 ceph-mon[74313]: Deploying daemon osd.0 on compute-0
Oct 11 04:11:49 np0005481065 podman[87857]: 2025-10-11 08:11:49.878722657 +0000 UTC m=+0.037537223 container create bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:49 np0005481065 systemd[1]: Started libpod-conmon-bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad.scope.
Oct 11 04:11:49 np0005481065 podman[87857]: 2025-10-11 08:11:49.861450434 +0000 UTC m=+0.020265030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd98e9e116db7d652bdf5c9beeeca9c6a3c45db212705b8316b32e2d0c13a31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd98e9e116db7d652bdf5c9beeeca9c6a3c45db212705b8316b32e2d0c13a31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd98e9e116db7d652bdf5c9beeeca9c6a3c45db212705b8316b32e2d0c13a31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd98e9e116db7d652bdf5c9beeeca9c6a3c45db212705b8316b32e2d0c13a31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd98e9e116db7d652bdf5c9beeeca9c6a3c45db212705b8316b32e2d0c13a31/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:49 np0005481065 podman[87857]: 2025-10-11 08:11:49.997538721 +0000 UTC m=+0.156353317 container init bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:11:50 np0005481065 podman[87857]: 2025-10-11 08:11:50.01256733 +0000 UTC m=+0.171381926 container start bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:11:50 np0005481065 podman[87857]: 2025-10-11 08:11:50.017251604 +0000 UTC m=+0.176066180 container attach bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:50 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test[87874]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 04:11:50 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test[87874]:                            [--no-systemd] [--no-tmpfs]
Oct 11 04:11:50 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test[87874]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 04:11:50 np0005481065 systemd[1]: libpod-bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad.scope: Deactivated successfully.
Oct 11 04:11:50 np0005481065 podman[87857]: 2025-10-11 08:11:50.619544256 +0000 UTC m=+0.778358922 container died bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:11:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0bd98e9e116db7d652bdf5c9beeeca9c6a3c45db212705b8316b32e2d0c13a31-merged.mount: Deactivated successfully.
Oct 11 04:11:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:50 np0005481065 podman[87857]: 2025-10-11 08:11:50.675570176 +0000 UTC m=+0.834384742 container remove bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:11:50 np0005481065 systemd[1]: libpod-conmon-bd7e212a2d2f70b05687d96c64ffa5e4deed1f4643bc8b9eb2d74eb3c40881ad.scope: Deactivated successfully.
Oct 11 04:11:50 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:51 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:51 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:51 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:51 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:51 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:51 np0005481065 systemd[1]: Starting Ceph osd.0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:11:51 np0005481065 podman[88036]: 2025-10-11 08:11:51.900982696 +0000 UTC m=+0.048434954 container create 2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:11:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bd23c7f73e1d6e872ffc8d952964a0aedf1065d9253f1db328fbc51a0735d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bd23c7f73e1d6e872ffc8d952964a0aedf1065d9253f1db328fbc51a0735d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bd23c7f73e1d6e872ffc8d952964a0aedf1065d9253f1db328fbc51a0735d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bd23c7f73e1d6e872ffc8d952964a0aedf1065d9253f1db328fbc51a0735d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bd23c7f73e1d6e872ffc8d952964a0aedf1065d9253f1db328fbc51a0735d1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:51 np0005481065 podman[88036]: 2025-10-11 08:11:51.877672221 +0000 UTC m=+0.025124509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:51 np0005481065 podman[88036]: 2025-10-11 08:11:51.987109406 +0000 UTC m=+0.134561714 container init 2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:51 np0005481065 podman[88036]: 2025-10-11 08:11:51.995148286 +0000 UTC m=+0.142600524 container start 2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:11:51 np0005481065 podman[88036]: 2025-10-11 08:11:51.998373598 +0000 UTC m=+0.145825896 container attach 2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:11:53 np0005481065 bash[88036]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:11:53 np0005481065 bash[88036]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:11:53 np0005481065 bash[88036]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 11 04:11:53 np0005481065 bash[88036]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:11:53 np0005481065 bash[88036]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:53 np0005481065 bash[88036]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:11:53 np0005481065 bash[88036]: --> ceph-volume raw activate successful for osd ID: 0
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 11 04:11:53 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate[88052]: --> ceph-volume raw activate successful for osd ID: 0
Oct 11 04:11:53 np0005481065 systemd[1]: libpod-2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04.scope: Deactivated successfully.
Oct 11 04:11:53 np0005481065 podman[88036]: 2025-10-11 08:11:53.177808915 +0000 UTC m=+1.325261173 container died 2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:53 np0005481065 systemd[1]: libpod-2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04.scope: Consumed 1.203s CPU time.
Oct 11 04:11:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d4bd23c7f73e1d6e872ffc8d952964a0aedf1065d9253f1db328fbc51a0735d1-merged.mount: Deactivated successfully.
Oct 11 04:11:53 np0005481065 podman[88036]: 2025-10-11 08:11:53.226683171 +0000 UTC m=+1.374135409 container remove 2176c8c3744593ec64ef71a8ab240b7bd0539880fdae49caf23faa0c74c13e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:53 np0005481065 podman[88229]: 2025-10-11 08:11:53.451870193 +0000 UTC m=+0.049693910 container create 854c41823058c15b371a45da74217f9d1bfd1347f4b5a2745fc6da14d1aa5c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:11:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6226151a1481dfb42becb452c3188058111bf93ec6fc50b3380da1025a238f6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6226151a1481dfb42becb452c3188058111bf93ec6fc50b3380da1025a238f6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6226151a1481dfb42becb452c3188058111bf93ec6fc50b3380da1025a238f6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6226151a1481dfb42becb452c3188058111bf93ec6fc50b3380da1025a238f6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6226151a1481dfb42becb452c3188058111bf93ec6fc50b3380da1025a238f6d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:53 np0005481065 podman[88229]: 2025-10-11 08:11:53.426943091 +0000 UTC m=+0.024766858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:53 np0005481065 podman[88229]: 2025-10-11 08:11:53.529097769 +0000 UTC m=+0.126921466 container init 854c41823058c15b371a45da74217f9d1bfd1347f4b5a2745fc6da14d1aa5c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:11:53 np0005481065 podman[88229]: 2025-10-11 08:11:53.534901155 +0000 UTC m=+0.132724842 container start 854c41823058c15b371a45da74217f9d1bfd1347f4b5a2745fc6da14d1aa5c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:11:53 np0005481065 bash[88229]: 854c41823058c15b371a45da74217f9d1bfd1347f4b5a2745fc6da14d1aa5c73
Oct 11 04:11:53 np0005481065 systemd[1]: Started Ceph osd.0 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: pidfile_write: ignore empty --pid-file
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f92669d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f92669d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f92669d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f9274d5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f9274d5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f9274d5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f9274d5800 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:53 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 11 04:11:53 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 11 04:11:53 np0005481065 ceph-osd[88249]: bdev(0x55f92669d800 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: load: jerasure load: lrc 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.358094876 +0000 UTC m=+0.060623573 container create ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:11:54 np0005481065 systemd[1]: Started libpod-conmon-ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707.scope.
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.334282016 +0000 UTC m=+0.036810743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.456074024 +0000 UTC m=+0.158602761 container init ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.466689197 +0000 UTC m=+0.169217864 container start ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.47236476 +0000 UTC m=+0.174893517 container attach ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:54 np0005481065 sharp_margulis[88431]: 167 167
Oct 11 04:11:54 np0005481065 systemd[1]: libpod-ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707.scope: Deactivated successfully.
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.474557852 +0000 UTC m=+0.177086599 container died ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aa794bef2c8cf004bcf4df9216eb4ca76086459fa364ad0ffde8d06baf84317d-merged.mount: Deactivated successfully.
Oct 11 04:11:54 np0005481065 podman[88411]: 2025-10-11 08:11:54.529155472 +0000 UTC m=+0.231684169 container remove ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:11:54 np0005481065 systemd[1]: libpod-conmon-ca3f35d88921de67784cc197801bf2c32070ff186e74d095bffd5b6c187e9707.scope: Deactivated successfully.
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:11:54
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] No pools available
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927556c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs mount
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs mount shared_bdev_used = 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Git sha 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: DB SUMMARY
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: DB Session ID:  CBZEO2LTX64ECN9QOLB2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                     Options.env: 0x55f927527c70
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                Options.info_log: 0x55f9267248a0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.write_buffer_manager: 0x55f927630460
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.row_cache: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                              Options.wal_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.wal_compression: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Compression algorithms supported:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kZSTD supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kXpressCompression supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kBZip2Compression supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kLZ4Compression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kZlibCompression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: 	kSnappyCompression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9267242c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f926711090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f926711090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f926711090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56e8c095-7953-4e0a-997c-543470a48e92
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170314722749, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170314723118, "job": 1, "event": "recovery_finished"}
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: freelist init
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: freelist _read_cfg
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs umount
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) close
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:11:54 np0005481065 ceph-mon[74313]: Deploying daemon osd.1 on compute-0
Oct 11 04:11:54 np0005481065 podman[88658]: 2025-10-11 08:11:54.877270685 +0000 UTC m=+0.061608301 container create e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:11:54 np0005481065 systemd[1]: Started libpod-conmon-e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878.scope.
Oct 11 04:11:54 np0005481065 podman[88658]: 2025-10-11 08:11:54.852901229 +0000 UTC m=+0.037238905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bdev(0x55f927557400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs mount
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluefs mount shared_bdev_used = 4718592
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Git sha 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: DB SUMMARY
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: DB Session ID:  CBZEO2LTX64ECN9QOLB3
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                     Options.env: 0x55f9276d83f0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                Options.info_log: 0x55f926724620
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:11:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.write_buffer_manager: 0x55f927630460
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.row_cache: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                              Options.wal_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.wal_compression: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:11:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc6d927b1a8df6520e01ef0c7dcbd8155c1a4c43f471295187f15b124d18437/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Compression algorithms supported:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kZSTD supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kXpressCompression supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kBZip2Compression supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kLZ4Compression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kZlibCompression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: #011kSnappyCompression supported: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc6d927b1a8df6520e01ef0c7dcbd8155c1a4c43f471295187f15b124d18437/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc6d927b1a8df6520e01ef0c7dcbd8155c1a4c43f471295187f15b124d18437/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc6d927b1a8df6520e01ef0c7dcbd8155c1a4c43f471295187f15b124d18437/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc6d927b1a8df6520e01ef0c7dcbd8155c1a4c43f471295187f15b124d18437/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9267111f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:54 np0005481065 podman[88658]: 2025-10-11 08:11:54.988983545 +0000 UTC m=+0.173321211 container init e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:54 np0005481065 podman[88658]: 2025-10-11 08:11:54.999152376 +0000 UTC m=+0.183489972 container start e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:54 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 podman[88658]: 2025-10-11 08:11:55.00279661 +0000 UTC m=+0.187134256 container attach e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f9267111f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f926711090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f926711090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:           Options.merge_operator: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f926724380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f926711090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.compression: LZ4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.num_levels: 7
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56e8c095-7953-4e0a-997c-543470a48e92
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170315005386, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170315009108, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56e8c095-7953-4e0a-997c-543470a48e92", "db_session_id": "CBZEO2LTX64ECN9QOLB3", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170315012180, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 472, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56e8c095-7953-4e0a-997c-543470a48e92", "db_session_id": "CBZEO2LTX64ECN9QOLB3", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170315015163, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56e8c095-7953-4e0a-997c-543470a48e92", "db_session_id": "CBZEO2LTX64ECN9QOLB3", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170315016623, "job": 1, "event": "recovery_finished"}
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f9276e4000
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: DB pointer 0x55f927619a00
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 460.80 MB usag
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: _get_class not permitted to load lua
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: _get_class not permitted to load sdk
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: _get_class not permitted to load test_remote_reads
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 load_pgs
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 load_pgs opened 0 pgs
Oct 11 04:11:55 np0005481065 ceph-osd[88249]: osd.0 0 log_to_monitors true
Oct 11 04:11:55 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0[88245]: 2025-10-11T08:11:55.063+0000 7fa8297e6740 -1 osd.0 0 log_to_monitors true
Oct 11 04:11:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 11 04:11:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 11 04:11:55 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test[88674]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 04:11:55 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test[88674]:                            [--no-systemd] [--no-tmpfs]
Oct 11 04:11:55 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test[88674]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 04:11:55 np0005481065 podman[88658]: 2025-10-11 08:11:55.61165922 +0000 UTC m=+0.795996896 container died e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:11:55 np0005481065 systemd[1]: libpod-e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878.scope: Deactivated successfully.
Oct 11 04:11:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6fc6d927b1a8df6520e01ef0c7dcbd8155c1a4c43f471295187f15b124d18437-merged.mount: Deactivated successfully.
Oct 11 04:11:55 np0005481065 podman[88658]: 2025-10-11 08:11:55.703271067 +0000 UTC m=+0.887608673 container remove e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:11:55 np0005481065 systemd[1]: libpod-conmon-e8da85e2087bb55bd81fcde899a5e0f7ada9ed2bc9e2419596aadb03c2876878.scope: Deactivated successfully.
Oct 11 04:11:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 11 04:11:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:11:55 np0005481065 ceph-mon[74313]: from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:11:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:11:56 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:56 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:11:56 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:11:56 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 04:11:56 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 04:11:56 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:56 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:56 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:56 np0005481065 systemd[1]: Reloading.
Oct 11 04:11:56 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:11:56 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:11:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:56 np0005481065 systemd[1]: Starting Ceph osd.1 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:11:57 np0005481065 podman[89051]: 2025-10-11 08:11:57.026366367 +0000 UTC m=+0.070570256 container create 5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0 done with init, starting boot process
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0 start_boot
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 04:11:57 np0005481065 ceph-osd[88249]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:11:57 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:57 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:11:57 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:11:57 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2967606985; not ready for session (expect reconnect)
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:57 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:11:57 np0005481065 podman[89051]: 2025-10-11 08:11:56.996449993 +0000 UTC m=+0.040653922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f740f7c568e28d8315677fe2814403517ed916b3b5b7ea0140d0bbe21bf2fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f740f7c568e28d8315677fe2814403517ed916b3b5b7ea0140d0bbe21bf2fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f740f7c568e28d8315677fe2814403517ed916b3b5b7ea0140d0bbe21bf2fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f740f7c568e28d8315677fe2814403517ed916b3b5b7ea0140d0bbe21bf2fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f740f7c568e28d8315677fe2814403517ed916b3b5b7ea0140d0bbe21bf2fc/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:57 np0005481065 podman[89051]: 2025-10-11 08:11:57.143890194 +0000 UTC m=+0.188094133 container init 5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:57 np0005481065 podman[89051]: 2025-10-11 08:11:57.158893343 +0000 UTC m=+0.203097232 container start 5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:11:57 np0005481065 podman[89051]: 2025-10-11 08:11:57.166317745 +0000 UTC m=+0.210521624 container attach 5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:11:58 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2967606985; not ready for session (expect reconnect)
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:58 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: from='osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:11:58 np0005481065 bash[89051]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:11:58 np0005481065 bash[89051]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:11:58 np0005481065 bash[89051]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:11:58 np0005481065 bash[89051]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:58 np0005481065 bash[89051]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:11:58 np0005481065 bash[89051]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 11 04:11:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate[89066]: --> ceph-volume raw activate successful for osd ID: 1
Oct 11 04:11:58 np0005481065 bash[89051]: --> ceph-volume raw activate successful for osd ID: 1
Oct 11 04:11:58 np0005481065 systemd[1]: libpod-5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008.scope: Deactivated successfully.
Oct 11 04:11:58 np0005481065 systemd[1]: libpod-5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008.scope: Consumed 1.162s CPU time.
Oct 11 04:11:58 np0005481065 podman[89051]: 2025-10-11 08:11:58.348469438 +0000 UTC m=+1.392673317 container died 5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-17f740f7c568e28d8315677fe2814403517ed916b3b5b7ea0140d0bbe21bf2fc-merged.mount: Deactivated successfully.
Oct 11 04:11:58 np0005481065 podman[89051]: 2025-10-11 08:11:58.462220197 +0000 UTC m=+1.506424086 container remove 5d9d54f85d745b2e72407d35d6a3fb942ac4bda7fe89d90fdd0ee84c14a4e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1-activate, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:11:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:11:58 np0005481065 podman[89259]: 2025-10-11 08:11:58.767081485 +0000 UTC m=+0.068027754 container create 86b0c168475228276e1af06f9e3cf92ac5396eec25b5e613ac630bf23d6a8b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:11:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490d55af348669aaec3ec24835418c1ca9507b23931e54c329e105cca9abe5b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:58 np0005481065 podman[89259]: 2025-10-11 08:11:58.743783399 +0000 UTC m=+0.044729708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490d55af348669aaec3ec24835418c1ca9507b23931e54c329e105cca9abe5b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490d55af348669aaec3ec24835418c1ca9507b23931e54c329e105cca9abe5b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490d55af348669aaec3ec24835418c1ca9507b23931e54c329e105cca9abe5b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490d55af348669aaec3ec24835418c1ca9507b23931e54c329e105cca9abe5b9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:58 np0005481065 podman[89259]: 2025-10-11 08:11:58.853548504 +0000 UTC m=+0.154494803 container init 86b0c168475228276e1af06f9e3cf92ac5396eec25b5e613ac630bf23d6a8b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:11:58 np0005481065 podman[89259]: 2025-10-11 08:11:58.861938624 +0000 UTC m=+0.162884893 container start 86b0c168475228276e1af06f9e3cf92ac5396eec25b5e613ac630bf23d6a8b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:11:58 np0005481065 bash[89259]: 86b0c168475228276e1af06f9e3cf92ac5396eec25b5e613ac630bf23d6a8b85
Oct 11 04:11:58 np0005481065 systemd[1]: Started Ceph osd.1 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: pidfile_write: ignore empty --pid-file
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab308f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab308f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab308f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab3ed1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab3ed1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab3ed1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 04:11:58 np0005481065 ceph-osd[89278]: bdev(0x557ab3ed1800 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:11:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:11:58 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 11 04:11:58 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 11 04:11:59 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2967606985; not ready for session (expect reconnect)
Oct 11 04:11:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:11:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:11:59 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:11:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:11:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 11 04:11:59 np0005481065 python3[89316]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab308f800 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:11:59 np0005481065 podman[89366]: 2025-10-11 08:11:59.248252918 +0000 UTC m=+0.080100439 container create 26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:59 np0005481065 systemd[1]: Started libpod-conmon-26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18.scope.
Oct 11 04:11:59 np0005481065 podman[89366]: 2025-10-11 08:11:59.222036349 +0000 UTC m=+0.053883930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:11:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc5621101b9bbda590959b000a02ec8b669fad68f4744926e876f872e802c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc5621101b9bbda590959b000a02ec8b669fad68f4744926e876f872e802c3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc5621101b9bbda590959b000a02ec8b669fad68f4744926e876f872e802c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:11:59 np0005481065 podman[89366]: 2025-10-11 08:11:59.375253345 +0000 UTC m=+0.207100906 container init 26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:11:59 np0005481065 podman[89366]: 2025-10-11 08:11:59.387846765 +0000 UTC m=+0.219694326 container start 26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:11:59 np0005481065 podman[89366]: 2025-10-11 08:11:59.394484845 +0000 UTC m=+0.226332406 container attach 26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: load: jerasure load: lrc 
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:11:59 np0005481065 podman[89507]: 2025-10-11 08:11:59.876590154 +0000 UTC m=+0.047827166 container create bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:11:59 np0005481065 systemd[1]: Started libpod-conmon-bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf.scope.
Oct 11 04:11:59 np0005481065 podman[89507]: 2025-10-11 08:11:59.850714585 +0000 UTC m=+0.021951577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:11:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:11:59 np0005481065 podman[89507]: 2025-10-11 08:11:59.968639394 +0000 UTC m=+0.139876386 container init bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:11:59 np0005481065 podman[89507]: 2025-10-11 08:11:59.976619482 +0000 UTC m=+0.147856494 container start bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:11:59 np0005481065 podman[89507]: 2025-10-11 08:11:59.979871554 +0000 UTC m=+0.151108556 container attach bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:11:59 np0005481065 quirky_noyce[89522]: 167 167
Oct 11 04:11:59 np0005481065 systemd[1]: libpod-bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf.scope: Deactivated successfully.
Oct 11 04:11:59 np0005481065 podman[89507]: 2025-10-11 08:11:59.983803067 +0000 UTC m=+0.155040079 container died bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 04:11:59 np0005481065 ceph-osd[89278]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f52c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs mount
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs mount shared_bdev_used = 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Git sha 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DB SUMMARY
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DB Session ID:  Q9RRH25LJTKTWF7OH2WI
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                     Options.env: 0x557ab3f23d50
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                Options.info_log: 0x557ab31167e0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.write_buffer_manager: 0x557ab402c460
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.row_cache: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                              Options.wal_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.wal_compression: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Compression algorithms supported:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kZSTD supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kXpressCompression supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kBZip2Compression supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kLZ4Compression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kZlibCompression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kLZ4HCCompression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         kSnappyCompression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   cache_index_and_filter_blocks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   pin_top_level_index_and_filter: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   index_type: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   data_block_index_type: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   index_shortening: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   checksum: 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   no_block_cache: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache: 0x557ab31031f0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache_name: BinnedLRUCache
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache_options:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     capacity : 483183820
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     num_shard_bits : 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     strict_capacity_limit : 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     high_pri_pool_ratio: 0.000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache_compressed: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   persistent_cache: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_size: 4096
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_size_deviation: 10
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_restart_interval: 16
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   index_block_restart_interval: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   metadata_block_size: 4096
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   partition_filters: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   use_delta_encoding: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   filter_policy: bloomfilter
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   whole_key_filtering: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   verify_compression: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   read_amp_bytes_per_bit: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   format_version: 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   enable_index_compression: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_align: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   max_auto_readahead_size: 262144
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   prepopulate_block_cache: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   initial_auto_readahead_size: 8192
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   cache_index_and_filter_blocks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   pin_top_level_index_and_filter: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   index_type: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   data_block_index_type: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   index_shortening: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   checksum: 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   no_block_cache: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache: 0x557ab31031f0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache_name: BinnedLRUCache
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache_options:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     capacity : 483183820
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     num_shard_bits : 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     strict_capacity_limit : 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     high_pri_pool_ratio: 0.000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_cache_compressed: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   persistent_cache: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_size: 4096
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_size_deviation: 10
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_restart_interval: 16
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   index_block_restart_interval: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   metadata_block_size: 4096
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   partition_filters: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   use_delta_encoding: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   filter_policy: bloomfilter
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   whole_key_filtering: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   verify_compression: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   read_amp_bytes_per_bit: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   format_version: 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   enable_index_compression: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   block_align: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   max_auto_readahead_size: 262144
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   prepopulate_block_cache: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   initial_auto_readahead_size: 8192
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4c38019c6fea236022af01d19c0b57fb6b6d2ce12d70cf032e1b0001c8ca5506-merged.mount: Deactivated successfully.
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132707145' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 intelligent_engelbart[89428]: 
Oct 11 04:12:00 np0005481065 intelligent_engelbart[89428]: {"fsid":"33219f8b-dc38-5a8f-a577-8ccc4b37190a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1760170303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T08:11:56.659586+0000","services":{}},"progress_events":{}}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 podman[89507]: 2025-10-11 08:12:00.048050452 +0000 UTC m=+0.219287434 container remove bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557ab3103090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557ab3103090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2967606985; not ready for session (expect reconnect)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557ab3103090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:12:00 np0005481065 systemd[1]: libpod-conmon-bb4d1b35c87f993f75dd8ca95e40bb423bb1d7863051cde98c09d13736b157cf.scope: Deactivated successfully.
Oct 11 04:12:00 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:12:00 np0005481065 podman[89366]: 2025-10-11 08:12:00.060498597 +0000 UTC m=+0.892346128 container died 26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:12:00 np0005481065 systemd[1]: libpod-26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18.scope: Deactivated successfully.
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 75c4c961-633c-44bf-9bb6-36a95f9387ea
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320050671, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320050911, "job": 1, "event": "recovery_finished"}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: freelist init
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: freelist _read_cfg
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs umount
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) close
Oct 11 04:12:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e1bc5621101b9bbda590959b000a02ec8b669fad68f4744926e876f872e802c3-merged.mount: Deactivated successfully.
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: Deploying daemon osd.2 on compute-0
Oct 11 04:12:00 np0005481065 podman[89366]: 2025-10-11 08:12:00.117244448 +0000 UTC m=+0.949091979 container remove 26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:12:00 np0005481065 systemd[1]: libpod-conmon-26ff8993aa64496334be1910c37ef09b5702eeb759e3f597d06ee9f13cf31e18.scope: Deactivated successfully.
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.369 iops: 7262.442 elapsed_sec: 0.413
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: log_channel(cluster) log [WRN] : OSD bench result of 7262.442105 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
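The warning above says the built-in OSD bench result (7262 IOPS) fell outside the accepted 50–500 IOPS window, so the mClock scheduler kept the 315 IOPS default, and it recommends measuring capacity with an external tool and overriding `osd_mclock_max_capacity_iops_[hdd|ssd]`. A hedged sketch of that workflow follows — the device path `/dev/sdX` and reuse of the 7262 figure are illustrative assumptions; substitute your own measured value:

```shell
# Benchmark the OSD's backing device with fio (4 KiB random writes;
# mClock capacity is expressed in small-block IOPS).
# /dev/sdX is a placeholder -- point it at the actual OSD device.
fio --name=osd-bench --filename=/dev/sdX --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --runtime=60 --time_based --ioengine=libaio

# Pin the measured capacity for this rotational OSD so mClock stops
# using the 315 IOPS fallback noted in the log (7262 is illustrative).
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 7262

# Verify the override took effect.
ceph config get osd.0 osd_mclock_max_capacity_iops_hdd
```

These commands require a live cluster and admin keyring; they are shown as a configuration sketch, not output from this log.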
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 0 waiting for initial osdmap
Oct 11 04:12:00 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0[88245]: 2025-10-11T08:12:00.162+0000 7fa825766640 -1 osd.0 0 waiting for initial osdmap
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 11 04:12:00 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-0[88245]: 2025-10-11T08:12:00.197+0000 7fa820d8e640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:12:00 np0005481065 ceph-osd[88249]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bdev(0x557ab3f53400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs mount
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluefs mount shared_bdev_used = 4718592
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Git sha 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DB SUMMARY
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DB Session ID:  Q9RRH25LJTKTWF7OH2WJ
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                     Options.env: 0x557ab40d4460
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                Options.info_log: 0x557ab3116560
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.write_buffer_manager: 0x557ab402c460
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.row_cache: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                              Options.wal_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.wal_compression: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Compression algorithms supported:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kZSTD supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kXpressCompression supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kBZip2Compression supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kLZ4Compression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kZlibCompression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: 	kSnappyCompression supported: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557ab31031f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab31031f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab3103090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab3103090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ab3116300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557ab3103090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 75c4c961-633c-44bf-9bb6-36a95f9387ea
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320343477, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320348692, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170320, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75c4c961-633c-44bf-9bb6-36a95f9387ea", "db_session_id": "Q9RRH25LJTKTWF7OH2WJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320352459, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170320, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75c4c961-633c-44bf-9bb6-36a95f9387ea", "db_session_id": "Q9RRH25LJTKTWF7OH2WJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320359844, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170320, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75c4c961-633c-44bf-9bb6-36a95f9387ea", "db_session_id": "Q9RRH25LJTKTWF7OH2WJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170320362734, "job": 1, "event": "recovery_finished"}
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 04:12:00 np0005481065 podman[89762]: 2025-10-11 08:12:00.382277218 +0000 UTC m=+0.083099895 container create 9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557ab3270000
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: DB pointer 0x557ab4015a00
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 460.80 MB usage: 0
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: _get_class not permitted to load lua
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: _get_class not permitted to load sdk
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: _get_class not permitted to load test_remote_reads
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 load_pgs
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 load_pgs opened 0 pgs
Oct 11 04:12:00 np0005481065 ceph-osd[89278]: osd.1 0 log_to_monitors true
Oct 11 04:12:00 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1[89274]: 2025-10-11T08:12:00.403+0000 7f3ec0746740 -1 osd.1 0 log_to_monitors true
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 11 04:12:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 11 04:12:00 np0005481065 systemd[1]: Started libpod-conmon-9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd.scope.
Oct 11 04:12:00 np0005481065 podman[89762]: 2025-10-11 08:12:00.355544254 +0000 UTC m=+0.056367001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cd0155a131026e77bf4e52477452fffe6a1b4e88e6ec05666e1942e927d0e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cd0155a131026e77bf4e52477452fffe6a1b4e88e6ec05666e1942e927d0e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cd0155a131026e77bf4e52477452fffe6a1b4e88e6ec05666e1942e927d0e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cd0155a131026e77bf4e52477452fffe6a1b4e88e6ec05666e1942e927d0e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cd0155a131026e77bf4e52477452fffe6a1b4e88e6ec05666e1942e927d0e3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:00 np0005481065 podman[89762]: 2025-10-11 08:12:00.515869694 +0000 UTC m=+0.216692431 container init 9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:12:00 np0005481065 podman[89762]: 2025-10-11 08:12:00.525965152 +0000 UTC m=+0.226787839 container start 9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:12:00 np0005481065 podman[89762]: 2025-10-11 08:12:00.529963846 +0000 UTC m=+0.230786583 container attach 9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:12:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 11 04:12:01 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2967606985; not ready for session (expect reconnect)
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:12:01 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: OSD bench result of 7262.442105 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985] boot
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:01 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:01 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:01 np0005481065 ceph-osd[88249]: osd.0 9 state: booting -> active
Oct 11 04:12:01 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test[89995]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 11 04:12:01 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test[89995]:                            [--no-systemd] [--no-tmpfs]
Oct 11 04:12:01 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test[89995]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 11 04:12:01 np0005481065 systemd[1]: libpod-9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd.scope: Deactivated successfully.
Oct 11 04:12:01 np0005481065 podman[89762]: 2025-10-11 08:12:01.184610833 +0000 UTC m=+0.885433520 container died 9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:12:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f7cd0155a131026e77bf4e52477452fffe6a1b4e88e6ec05666e1942e927d0e3-merged.mount: Deactivated successfully.
Oct 11 04:12:01 np0005481065 podman[89762]: 2025-10-11 08:12:01.250346451 +0000 UTC m=+0.951169098 container remove 9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:12:01 np0005481065 systemd[1]: libpod-conmon-9d05ed2f4699b16292fac221ce68c3f12718ac1145e421b423763033cbacb8fd.scope: Deactivated successfully.
Oct 11 04:12:01 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 04:12:01 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 04:12:01 np0005481065 systemd[1]: Reloading.
Oct 11 04:12:01 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:12:01 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:12:01 np0005481065 systemd[1]: Reloading.
Oct 11 04:12:01 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:12:01 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0 done with init, starting boot process
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0 start_boot
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 04:12:02 np0005481065 ceph-osd[89278]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:02 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:02 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: osd.0 [v2:192.168.122.100:6802/2967606985,v1:192.168.122.100:6803/2967606985] boot
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:12:02 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4142297906; not ready for session (expect reconnect)
Oct 11 04:12:02 np0005481065 systemd[1]: Starting Ceph osd.2 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:02 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:02 np0005481065 podman[90155]: 2025-10-11 08:12:02.533551822 +0000 UTC m=+0.101980714 container create 1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:12:02 np0005481065 podman[90155]: 2025-10-11 08:12:02.469452801 +0000 UTC m=+0.037881733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:12:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bf45bcb11e130af53e3501b5cd4ce7bcd61abd9490a59dec79cea87ae153c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bf45bcb11e130af53e3501b5cd4ce7bcd61abd9490a59dec79cea87ae153c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bf45bcb11e130af53e3501b5cd4ce7bcd61abd9490a59dec79cea87ae153c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bf45bcb11e130af53e3501b5cd4ce7bcd61abd9490a59dec79cea87ae153c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bf45bcb11e130af53e3501b5cd4ce7bcd61abd9490a59dec79cea87ae153c6/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:02 np0005481065 podman[90155]: 2025-10-11 08:12:02.709480937 +0000 UTC m=+0.277909879 container init 1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:02 np0005481065 podman[90155]: 2025-10-11 08:12:02.723689463 +0000 UTC m=+0.292118395 container start 1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:02 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] creating mgr pool
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 11 04:12:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:12:02 np0005481065 podman[90155]: 2025-10-11 08:12:02.76244561 +0000 UTC m=+0.330874592 container attach 1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:12:03 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4142297906; not ready for session (expect reconnect)
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: from='osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:12:03 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:03 np0005481065 ceph-osd[88249]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 04:12:03 np0005481065 ceph-osd[88249]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 11 04:12:03 np0005481065 ceph-osd[88249]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:03 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 11 04:12:03 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:12:03 np0005481065 bash[90155]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:12:03 np0005481065 bash[90155]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:12:03 np0005481065 bash[90155]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:12:03 np0005481065 bash[90155]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:03 np0005481065 bash[90155]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:12:03 np0005481065 bash[90155]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 11 04:12:03 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate[90170]: --> ceph-volume raw activate successful for osd ID: 2
Oct 11 04:12:03 np0005481065 bash[90155]: --> ceph-volume raw activate successful for osd ID: 2
Oct 11 04:12:03 np0005481065 systemd[1]: libpod-1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b.scope: Deactivated successfully.
Oct 11 04:12:03 np0005481065 podman[90155]: 2025-10-11 08:12:03.955543867 +0000 UTC m=+1.523972799 container died 1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:03 np0005481065 systemd[1]: libpod-1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b.scope: Consumed 1.246s CPU time.
Oct 11 04:12:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-59bf45bcb11e130af53e3501b5cd4ce7bcd61abd9490a59dec79cea87ae153c6-merged.mount: Deactivated successfully.
Oct 11 04:12:04 np0005481065 podman[90155]: 2025-10-11 08:12:04.049428489 +0000 UTC m=+1.617857421 container remove 1486d2d7f1d7c5b494eec7f340739701eb73d50f8a7bde1b88f5aaab47682a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 11 04:12:04 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4142297906; not ready for session (expect reconnect)
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:04 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:04 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:04 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:04 np0005481065 podman[90345]: 2025-10-11 08:12:04.398097607 +0000 UTC m=+0.100404518 container create 24098f39a1562affd547eec97ef43db43656b5c87c3b721a835c945fe04008f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:12:04 np0005481065 podman[90345]: 2025-10-11 08:12:04.327665316 +0000 UTC m=+0.029972307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ca9d1b3d00c542f5118531655587a0244a1ecfd7944830fa95caf420cc6c1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ca9d1b3d00c542f5118531655587a0244a1ecfd7944830fa95caf420cc6c1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ca9d1b3d00c542f5118531655587a0244a1ecfd7944830fa95caf420cc6c1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ca9d1b3d00c542f5118531655587a0244a1ecfd7944830fa95caf420cc6c1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ca9d1b3d00c542f5118531655587a0244a1ecfd7944830fa95caf420cc6c1d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:04 np0005481065 podman[90345]: 2025-10-11 08:12:04.537389265 +0000 UTC m=+0.239696326 container init 24098f39a1562affd547eec97ef43db43656b5c87c3b721a835c945fe04008f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 04:12:04 np0005481065 podman[90345]: 2025-10-11 08:12:04.548800081 +0000 UTC m=+0.251106992 container start 24098f39a1562affd547eec97ef43db43656b5c87c3b721a835c945fe04008f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: pidfile_write: ignore empty --pid-file
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4d9a03800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4d9a03800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4d9a03800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4da83b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4da83b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4da83b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4da83b800 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:12:04 np0005481065 bash[90345]: 24098f39a1562affd547eec97ef43db43656b5c87c3b721a835c945fe04008f1
Oct 11 04:12:04 np0005481065 systemd[1]: Started Ceph osd.2 for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:12:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:04 np0005481065 ceph-osd[90364]: bdev(0x55c4d9a03800 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 11 04:12:05 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4142297906; not ready for session (expect reconnect)
Oct 11 04:12:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:05 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: load: jerasure load: lrc 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:12:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 11 04:12:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.623598369 +0000 UTC m=+0.073344846 container create 6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.591593805 +0000 UTC m=+0.041340312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:05 np0005481065 systemd[1]: Started libpod-conmon-6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812.scope.
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bcc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs mount
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs mount shared_bdev_used = 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Git sha 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: DB SUMMARY
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: DB Session ID:  S4NNT8C34X1NX7F6D2RG
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                     Options.env: 0x55c4da88dc70
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                Options.info_log: 0x55c4d9a8a8a0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.write_buffer_manager: 0x55c4da996460
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.row_cache: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                              Options.wal_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.wal_compression: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Compression algorithms supported:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kZSTD supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kXpressCompression supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kBZip2Compression supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kLZ4Compression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kZlibCompression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kSnappyCompression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a771f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a771f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.740335994 +0000 UTC m=+0.190082481 container init 6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a771f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.750543615 +0000 UTC m=+0.200290052 container start 6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a77090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a77090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a8a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a77090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.754879639 +0000 UTC m=+0.204626116 container attach 6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 093139c8-2040-4917-a1fc-76f152cba803
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170325739503, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170325739927, "job": 1, "event": "recovery_finished"}
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: freelist init
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: freelist _read_cfg
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs umount
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) close
Oct 11 04:12:05 np0005481065 fervent_clarke[90546]: 167 167
Oct 11 04:12:05 np0005481065 systemd[1]: libpod-6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812.scope: Deactivated successfully.
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.75839882 +0000 UTC m=+0.208145257 container died 6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-acac0eddad0e04019b26bfc6fcabde903ce0c58049d5e40cac4469db2dd6b683-merged.mount: Deactivated successfully.
Oct 11 04:12:05 np0005481065 podman[90530]: 2025-10-11 08:12:05.79972824 +0000 UTC m=+0.249474677 container remove 6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:05 np0005481065 systemd[1]: libpod-conmon-6edc1a52b877afba469d256ee20f3fe972f26c407ce5ae00ea1ddf1e83dd0812.scope: Deactivated successfully.
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.643 iops: 4260.707 elapsed_sec: 0.704
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: log_channel(cluster) log [WRN] : OSD bench result of 4260.706610 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 0 waiting for initial osdmap
Oct 11 04:12:05 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1[89274]: 2025-10-11T08:12:05.871+0000 7f3ebc6c6640 -1 osd.1 0 waiting for initial osdmap
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:12:05 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-1[89274]: 2025-10-11T08:12:05.897+0000 7f3eb7cee640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 set_numa_affinity not setting numa affinity
Oct 11 04:12:05 np0005481065 ceph-osd[89278]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bdev(0x55c4da8bd400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs mount
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluefs mount shared_bdev_used = 4718592
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: RocksDB version: 7.9.2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Git sha 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: DB SUMMARY
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: DB Session ID:  S4NNT8C34X1NX7F6D2RH
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: CURRENT file:  CURRENT
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: IDENTITY file:  IDENTITY
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.error_if_exists: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.create_if_missing: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.paranoid_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                     Options.env: 0x55c4da88df10
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                Options.info_log: 0x55c4d9a5dc80
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_file_opening_threads: 16
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                              Options.statistics: (nil)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.use_fsync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.max_log_file_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.allow_fallocate: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.use_direct_reads: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.create_missing_column_families: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                              Options.db_log_dir: 
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                                 Options.wal_dir: db.wal
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.advise_random_on_open: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.write_buffer_manager: 0x55c4da9966e0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                            Options.rate_limiter: (nil)
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.unordered_write: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.row_cache: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                              Options.wal_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.allow_ingest_behind: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.two_write_queues: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.manual_wal_flush: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.wal_compression: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.atomic_flush: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.log_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.allow_data_in_errors: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.db_host_id: __hostname__
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_background_jobs: 4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_background_compactions: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_subcompactions: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.max_open_files: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.bytes_per_sync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.max_background_flushes: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Compression algorithms supported:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kZSTD supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kXpressCompression supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kBZip2Compression supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kLZ4Compression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kZlibCompression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: 	kSnappyCompression supported: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a81040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a771f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a80f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4d9a77090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a80f60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a77090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:           Options.merge_operator: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.compaction_filter_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.sst_partitioner_factory: None
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4d9a80f60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4d9a77090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.write_buffer_size: 16777216
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:  Options.max_write_buffer_number: 64
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:          Options.compression: LZ4
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:       Options.prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:             Options.num_levels: 7
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.level: 32767
Oct 11 04:12:05 np0005481065 ceph-osd[90364]: rocksdb:               Options.compression_opts.strategy: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                  Options.compression_opts.enabled: false
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                        Options.arena_block_size: 1048576
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.disable_auto_compactions: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                   Options.inplace_update_support: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                           Options.bloom_locality: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                    Options.max_successive_merges: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.paranoid_file_checks: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.force_consistency_checks: 1
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.report_bg_io_stats: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                               Options.ttl: 2592000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                       Options.enable_blob_files: false
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                           Options.min_blob_size: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                          Options.blob_file_size: 268435456
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb:                Options.blob_file_starting_level: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 093139c8-2040-4917-a1fc-76f152cba803
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170325996882, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 11 04:12:06 np0005481065 podman[90765]: 2025-10-11 08:12:06.021802043 +0000 UTC m=+0.091051152 container create 2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170326044472, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170325, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "093139c8-2040-4917-a1fc-76f152cba803", "db_session_id": "S4NNT8C34X1NX7F6D2RH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:06 np0005481065 podman[90765]: 2025-10-11 08:12:05.961101759 +0000 UTC m=+0.030350858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170326107067, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "093139c8-2040-4917-a1fc-76f152cba803", "db_session_id": "S4NNT8C34X1NX7F6D2RH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:06 np0005481065 systemd[1]: Started libpod-conmon-2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d.scope.
Oct 11 04:12:06 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4142297906; not ready for session (expect reconnect)
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:06 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 11 04:12:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9731aa316748f7e06696d15a13940d8ac3df272cd6066d18a0c45ed470730ee8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9731aa316748f7e06696d15a13940d8ac3df272cd6066d18a0c45ed470730ee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9731aa316748f7e06696d15a13940d8ac3df272cd6066d18a0c45ed470730ee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9731aa316748f7e06696d15a13940d8ac3df272cd6066d18a0c45ed470730ee8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170326189057, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "093139c8-2040-4917-a1fc-76f152cba803", "db_session_id": "S4NNT8C34X1NX7F6D2RH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170326201461, "job": 1, "event": "recovery_finished"}
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 11 04:12:06 np0005481065 podman[90765]: 2025-10-11 08:12:06.212295194 +0000 UTC m=+0.281544323 container init 2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:12:06 np0005481065 podman[90765]: 2025-10-11 08:12:06.223550385 +0000 UTC m=+0.292799464 container start 2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:06 np0005481065 podman[90765]: 2025-10-11 08:12:06.228417334 +0000 UTC m=+0.297666433 container attach 2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c4daa5dc00
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: DB pointer 0x55c4da97fa00
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 460.80 MB usage: 0
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: _get_class not permitted to load lua
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: _get_class not permitted to load sdk
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: _get_class not permitted to load test_remote_reads
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 load_pgs
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 load_pgs opened 0 pgs
Oct 11 04:12:06 np0005481065 ceph-osd[90364]: osd.2 0 log_to_monitors true
Oct 11 04:12:06 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2[90360]: 2025-10-11T08:12:06.257+0000 7fed37213740 -1 osd.2 0 log_to_monitors true
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906] boot
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:06 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:06 np0005481065 ceph-osd[89278]: osd.1 13 state: booting -> active
Oct 11 04:12:06 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:06 np0005481065 ceph-mon[74313]: from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 11 04:12:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]: {
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "osd_id": 2,
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "type": "bluestore"
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:    },
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "osd_id": 0,
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "type": "bluestore"
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:    },
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "osd_id": 1,
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:        "type": "bluestore"
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]:    }
Oct 11 04:12:07 np0005481065 agitated_murdock[90964]: }
Oct 11 04:12:07 np0005481065 podman[90765]: 2025-10-11 08:12:07.304632683 +0000 UTC m=+1.373881782 container died 2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:12:07 np0005481065 systemd[1]: libpod-2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d.scope: Deactivated successfully.
Oct 11 04:12:07 np0005481065 systemd[1]: libpod-2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d.scope: Consumed 1.092s CPU time.
Oct 11 04:12:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9731aa316748f7e06696d15a13940d8ac3df272cd6066d18a0c45ed470730ee8-merged.mount: Deactivated successfully.
Oct 11 04:12:07 np0005481065 podman[90765]: 2025-10-11 08:12:07.383912017 +0000 UTC m=+1.453161096 container remove 2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:07 np0005481065 systemd[1]: libpod-conmon-2ace9454d5bc8ac9517cff2f9f10f0a991b2b4702a22827e02006def3a38ff6d.scope: Deactivated successfully.
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0 done with init, starting boot process
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0 start_boot
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 11 04:12:07 np0005481065 ceph-osd[90364]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:07 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:07 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4201335666; not ready for session (expect reconnect)
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:07 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: OSD bench result of 4260.706610 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: osd.1 [v2:192.168.122.100:6806/4142297906,v1:192.168.122.100:6807/4142297906] boot
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: from='osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:07 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] creating main.db for devicehealth
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 04:12:07 np0005481065 ceph-mgr[74605]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 11 04:12:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 11 04:12:08 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4201335666; not ready for session (expect reconnect)
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:08 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:08 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 11 04:12:08 np0005481065 ceph-mon[74313]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 11 04:12:08 np0005481065 podman[91279]: 2025-10-11 08:12:08.637793009 +0000 UTC m=+0.093439710 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:12:08 np0005481065 podman[91279]: 2025-10-11 08:12:08.760112773 +0000 UTC m=+0.215759424 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:09 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4201335666; not ready for session (expect reconnect)
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:09 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hcsgrm(active, since 74s)
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.295776564 +0000 UTC m=+0.049272728 container create 9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:12:10 np0005481065 systemd[1]: Started libpod-conmon-9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523.scope.
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.272882821 +0000 UTC m=+0.026378995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.405593711 +0000 UTC m=+0.159089935 container init 9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.414004681 +0000 UTC m=+0.167500855 container start 9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:10 np0005481065 elastic_easley[91550]: 167 167
Oct 11 04:12:10 np0005481065 systemd[1]: libpod-9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523.scope: Deactivated successfully.
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.418325305 +0000 UTC m=+0.171821469 container attach 9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.419490598 +0000 UTC m=+0.172986742 container died 9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:10 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4201335666; not ready for session (expect reconnect)
Oct 11 04:12:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:10 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-976d12198233490e5698c657bed518e939c37e1f81a06696c4e8b4ce3d3b24c9-merged.mount: Deactivated successfully.
Oct 11 04:12:10 np0005481065 podman[91533]: 2025-10-11 08:12:10.489618291 +0000 UTC m=+0.243114435 container remove 9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:10 np0005481065 systemd[1]: libpod-conmon-9c5b7d6d5103f9ec19eefdf3f8208c557beead260f1ec5d454553fb4aabbf523.scope: Deactivated successfully.
Oct 11 04:12:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 11 04:12:10 np0005481065 podman[91574]: 2025-10-11 08:12:10.683653443 +0000 UTC m=+0.048520867 container create 03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:10 np0005481065 systemd[1]: Started libpod-conmon-03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841.scope.
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.647 iops: 7333.754 elapsed_sec: 0.409
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: log_channel(cluster) log [WRN] : OSD bench result of 7333.754350 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 0 waiting for initial osdmap
Oct 11 04:12:10 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2[90360]: 2025-10-11T08:12:10.725+0000 7fed33193640 -1 osd.2 0 waiting for initial osdmap
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Oct 11 04:12:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 set_numa_affinity not setting numa affinity
Oct 11 04:12:10 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-osd-2[90360]: 2025-10-11T08:12:10.749+0000 7fed2e7bb640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 11 04:12:10 np0005481065 ceph-osd[90364]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 11 04:12:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8188191fec09d7bec0dc60c449a8eaa61278f700558e52f01cd43b637285c21c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8188191fec09d7bec0dc60c449a8eaa61278f700558e52f01cd43b637285c21c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:10 np0005481065 podman[91574]: 2025-10-11 08:12:10.662976792 +0000 UTC m=+0.027844236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8188191fec09d7bec0dc60c449a8eaa61278f700558e52f01cd43b637285c21c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8188191fec09d7bec0dc60c449a8eaa61278f700558e52f01cd43b637285c21c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:10 np0005481065 podman[91574]: 2025-10-11 08:12:10.772855111 +0000 UTC m=+0.137722555 container init 03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:12:10 np0005481065 podman[91574]: 2025-10-11 08:12:10.785828151 +0000 UTC m=+0.150695575 container start 03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:10 np0005481065 podman[91574]: 2025-10-11 08:12:10.78997161 +0000 UTC m=+0.154839034 container attach 03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:11 np0005481065 ceph-mgr[74605]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4201335666; not ready for session (expect reconnect)
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:11 np0005481065 ceph-mgr[74605]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: OSD bench result of 7333.754350 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666] boot
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 11 04:12:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 11 04:12:11 np0005481065 ceph-osd[90364]: osd.2 16 state: booting -> active
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]: [
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:    {
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "available": false,
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "ceph_device": false,
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "lsm_data": {},
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "lvs": [],
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "path": "/dev/sr0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "rejected_reasons": [
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "Insufficient space (<5GB)",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "Has a FileSystem"
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        ],
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        "sys_api": {
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "actuators": null,
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "device_nodes": "sr0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "devname": "sr0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "human_readable_size": "482.00 KB",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "id_bus": "ata",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "model": "QEMU DVD-ROM",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "nr_requests": "2",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "parent": "/dev/sr0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "partitions": {},
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "path": "/dev/sr0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "removable": "1",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "rev": "2.5+",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "ro": "0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "rotational": "0",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "sas_address": "",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "sas_device_handle": "",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "scheduler_mode": "mq-deadline",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "sectors": 0,
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "sectorsize": "2048",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "size": 493568.0,
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "support_discard": "2048",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "type": "disk",
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:            "vendor": "QEMU"
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:        }
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]:    }
Oct 11 04:12:12 np0005481065 ecstatic_bell[91591]: ]
Oct 11 04:12:12 np0005481065 systemd[1]: libpod-03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841.scope: Deactivated successfully.
Oct 11 04:12:12 np0005481065 podman[91574]: 2025-10-11 08:12:12.314943545 +0000 UTC m=+1.679810969 container died 03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:12:12 np0005481065 systemd[1]: libpod-03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841.scope: Consumed 1.566s CPU time.
Oct 11 04:12:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8188191fec09d7bec0dc60c449a8eaa61278f700558e52f01cd43b637285c21c-merged.mount: Deactivated successfully.
Oct 11 04:12:12 np0005481065 podman[91574]: 2025-10-11 08:12:12.382941688 +0000 UTC m=+1.747809122 container remove 03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:12 np0005481065 systemd[1]: libpod-conmon-03a83bd9e1cfd9cf3b3e7a6a95fa302f87f70660675be2acf1d4ce7b722f9841.scope: Deactivated successfully.
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43700k
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43700k
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44748800: error parsing value: Value '44748800' is below minimum 939524096
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44748800: error parsing value: Value '44748800' is below minimum 939524096
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c9c93d24-7131-4705-9134-0b8bfbf59f78 does not exist
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a640c779-be0e-4bdd-b8e2-7bc5e7175ee8 does not exist
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 73642b18-a5b3-4c3e-b128-c74a2db7f8d4 does not exist
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: osd.2 [v2:192.168.122.100:6810/4201335666,v1:192.168.122.100:6811/4201335666] boot
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct 11 04:12:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.202219478 +0000 UTC m=+0.042696391 container create e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:12:13 np0005481065 systemd[1]: Started libpod-conmon-e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4.scope.
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.182557946 +0000 UTC m=+0.023034889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.326809836 +0000 UTC m=+0.167286779 container init e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.33780426 +0000 UTC m=+0.178281203 container start e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.341831335 +0000 UTC m=+0.182308298 container attach e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:12:13 np0005481065 naughty_shamir[93392]: 167 167
Oct 11 04:12:13 np0005481065 systemd[1]: libpod-e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4.scope: Deactivated successfully.
Oct 11 04:12:13 np0005481065 conmon[93392]: conmon e61bcf81dc939d8c206f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4.scope/container/memory.events
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.344507682 +0000 UTC m=+0.184984625 container died e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:12:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dd496cfa5e7f5b3dcd19caaa2dc7a7ee5e3445c3ac2c5467b5c281443db49bd8-merged.mount: Deactivated successfully.
Oct 11 04:12:13 np0005481065 podman[93376]: 2025-10-11 08:12:13.398774472 +0000 UTC m=+0.239251415 container remove e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:13 np0005481065 systemd[1]: libpod-conmon-e61bcf81dc939d8c206f467cd18afa4dd28e23204496da7516510b9a73ded2e4.scope: Deactivated successfully.
Oct 11 04:12:13 np0005481065 ceph-mon[74313]: Adjusting osd_memory_target on compute-0 to 43700k
Oct 11 04:12:13 np0005481065 ceph-mon[74313]: Unable to set osd_memory_target on compute-0 to 44748800: error parsing value: Value '44748800' is below minimum 939524096
Oct 11 04:12:13 np0005481065 podman[93417]: 2025-10-11 08:12:13.63327832 +0000 UTC m=+0.064080081 container create 2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:13 np0005481065 systemd[1]: Started libpod-conmon-2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f.scope.
Oct 11 04:12:13 np0005481065 podman[93417]: 2025-10-11 08:12:13.607588856 +0000 UTC m=+0.038390617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de8e340b190b0a1d6f753db6fc996e05f4623ddb26f0b7f276d06a5064437a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de8e340b190b0a1d6f753db6fc996e05f4623ddb26f0b7f276d06a5064437a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de8e340b190b0a1d6f753db6fc996e05f4623ddb26f0b7f276d06a5064437a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de8e340b190b0a1d6f753db6fc996e05f4623ddb26f0b7f276d06a5064437a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de8e340b190b0a1d6f753db6fc996e05f4623ddb26f0b7f276d06a5064437a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:13 np0005481065 podman[93417]: 2025-10-11 08:12:13.735763787 +0000 UTC m=+0.166565588 container init 2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:13 np0005481065 podman[93417]: 2025-10-11 08:12:13.752084633 +0000 UTC m=+0.182886394 container start 2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:13 np0005481065 podman[93417]: 2025-10-11 08:12:13.756211691 +0000 UTC m=+0.187013452 container attach 2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:12:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:14 np0005481065 friendly_wright[93434]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:12:14 np0005481065 friendly_wright[93434]: --> relative data size: 1.0
Oct 11 04:12:14 np0005481065 friendly_wright[93434]: --> All data devices are unavailable
Oct 11 04:12:14 np0005481065 systemd[1]: libpod-2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f.scope: Deactivated successfully.
Oct 11 04:12:14 np0005481065 systemd[1]: libpod-2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f.scope: Consumed 1.113s CPU time.
Oct 11 04:12:14 np0005481065 podman[93463]: 2025-10-11 08:12:14.967532739 +0000 UTC m=+0.033989582 container died 2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:12:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-63de8e340b190b0a1d6f753db6fc996e05f4623ddb26f0b7f276d06a5064437a-merged.mount: Deactivated successfully.
Oct 11 04:12:15 np0005481065 podman[93463]: 2025-10-11 08:12:15.149215408 +0000 UTC m=+0.215672221 container remove 2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:12:15 np0005481065 systemd[1]: libpod-conmon-2c69d361025276aa47ed277bd1a94791d56fdcf42149acb3aec8fff7d3426e4f.scope: Deactivated successfully.
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:16.020672328 +0000 UTC m=+0.063926857 container create 87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:16 np0005481065 systemd[1]: Started libpod-conmon-87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a.scope.
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:15.991566027 +0000 UTC m=+0.034820596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:16.125476711 +0000 UTC m=+0.168731310 container init 87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:16.132859962 +0000 UTC m=+0.176114461 container start 87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:12:16 np0005481065 epic_shtern[93635]: 167 167
Oct 11 04:12:16 np0005481065 systemd[1]: libpod-87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a.scope: Deactivated successfully.
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:16.139517702 +0000 UTC m=+0.182772241 container attach 87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shtern, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:16.140004486 +0000 UTC m=+0.183259025 container died 87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shtern, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:12:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b74902717b713da52c5253bdf93bc8f2b7409fd8016b11d5ede476ff348de892-merged.mount: Deactivated successfully.
Oct 11 04:12:16 np0005481065 podman[93619]: 2025-10-11 08:12:16.183582001 +0000 UTC m=+0.226836550 container remove 87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:16 np0005481065 systemd[1]: libpod-conmon-87826f53b44573da6254c544ecd5fb4c38536872072b56ab39e5df8bbc48d79a.scope: Deactivated successfully.
Oct 11 04:12:16 np0005481065 podman[93659]: 2025-10-11 08:12:16.397776369 +0000 UTC m=+0.062709342 container create 7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:12:16 np0005481065 systemd[1]: Started libpod-conmon-7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a.scope.
Oct 11 04:12:16 np0005481065 podman[93659]: 2025-10-11 08:12:16.372998371 +0000 UTC m=+0.037931424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af244e4f176ca2edb4ac0891208e3cd9e1397791d5ee5f42f619691ab1b27324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af244e4f176ca2edb4ac0891208e3cd9e1397791d5ee5f42f619691ab1b27324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af244e4f176ca2edb4ac0891208e3cd9e1397791d5ee5f42f619691ab1b27324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af244e4f176ca2edb4ac0891208e3cd9e1397791d5ee5f42f619691ab1b27324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:16 np0005481065 podman[93659]: 2025-10-11 08:12:16.495652694 +0000 UTC m=+0.160585697 container init 7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:12:16 np0005481065 podman[93659]: 2025-10-11 08:12:16.507602836 +0000 UTC m=+0.172535829 container start 7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:12:16 np0005481065 podman[93659]: 2025-10-11 08:12:16.511637271 +0000 UTC m=+0.176570314 container attach 7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:17 np0005481065 agitated_germain[93676]: {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:    "0": [
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:        {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "devices": [
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "/dev/loop3"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            ],
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_name": "ceph_lv0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_size": "21470642176",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "name": "ceph_lv0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "tags": {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cluster_name": "ceph",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.crush_device_class": "",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.encrypted": "0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osd_id": "0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.type": "block",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.vdo": "0"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            },
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "type": "block",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "vg_name": "ceph_vg0"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:        }
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:    ],
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:    "1": [
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:        {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "devices": [
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "/dev/loop4"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            ],
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_name": "ceph_lv1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_size": "21470642176",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "name": "ceph_lv1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "tags": {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cluster_name": "ceph",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.crush_device_class": "",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.encrypted": "0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osd_id": "1",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.type": "block",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.vdo": "0"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            },
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "type": "block",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "vg_name": "ceph_vg1"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:        }
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:    ],
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:    "2": [
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:        {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "devices": [
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "/dev/loop5"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            ],
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_name": "ceph_lv2",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_size": "21470642176",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "name": "ceph_lv2",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "tags": {
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.cluster_name": "ceph",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.crush_device_class": "",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.encrypted": "0",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osd_id": "2",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.type": "block",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:                "ceph.vdo": "0"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            },
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "type": "block",
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:            "vg_name": "ceph_vg2"
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:        }
Oct 11 04:12:17 np0005481065 agitated_germain[93676]:    ]
Oct 11 04:12:17 np0005481065 agitated_germain[93676]: }
Oct 11 04:12:17 np0005481065 systemd[1]: libpod-7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a.scope: Deactivated successfully.
Oct 11 04:12:17 np0005481065 podman[93659]: 2025-10-11 08:12:17.277281009 +0000 UTC m=+0.942214042 container died 7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-af244e4f176ca2edb4ac0891208e3cd9e1397791d5ee5f42f619691ab1b27324-merged.mount: Deactivated successfully.
Oct 11 04:12:17 np0005481065 podman[93659]: 2025-10-11 08:12:17.357969404 +0000 UTC m=+1.022902417 container remove 7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:12:17 np0005481065 systemd[1]: libpod-conmon-7e81503b7e19f60df51a1c9677e52e6a6b63116f28fc075449eb7dc316c55b6a.scope: Deactivated successfully.
Oct 11 04:12:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.162000559 +0000 UTC m=+0.059589043 container create bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:12:18 np0005481065 systemd[1]: Started libpod-conmon-bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038.scope.
Oct 11 04:12:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.140110524 +0000 UTC m=+0.037699008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.243168077 +0000 UTC m=+0.140756581 container init bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.250332222 +0000 UTC m=+0.147920676 container start bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.254932873 +0000 UTC m=+0.152521377 container attach bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nobel, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:18 np0005481065 modest_nobel[93853]: 167 167
Oct 11 04:12:18 np0005481065 systemd[1]: libpod-bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038.scope: Deactivated successfully.
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.259616397 +0000 UTC m=+0.157204881 container died bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:12:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f6dacabd3d6b424961ac813f71d773eb2759461dcbdb6e138eb792f9de49a463-merged.mount: Deactivated successfully.
Oct 11 04:12:18 np0005481065 podman[93837]: 2025-10-11 08:12:18.316409009 +0000 UTC m=+0.213997493 container remove bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:18 np0005481065 systemd[1]: libpod-conmon-bfe583f876c9e0b440c5248581412243677d700c9b48b304b0d2e41daa8b5038.scope: Deactivated successfully.
Oct 11 04:12:18 np0005481065 podman[93877]: 2025-10-11 08:12:18.53735926 +0000 UTC m=+0.056099293 container create ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:18 np0005481065 systemd[1]: Started libpod-conmon-ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953.scope.
Oct 11 04:12:18 np0005481065 podman[93877]: 2025-10-11 08:12:18.514283991 +0000 UTC m=+0.033024034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0efa8665933d25b05493d97c1328798b5774827d3f9e7e4d3d25f9e5b4b5140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0efa8665933d25b05493d97c1328798b5774827d3f9e7e4d3d25f9e5b4b5140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0efa8665933d25b05493d97c1328798b5774827d3f9e7e4d3d25f9e5b4b5140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0efa8665933d25b05493d97c1328798b5774827d3f9e7e4d3d25f9e5b4b5140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:18 np0005481065 podman[93877]: 2025-10-11 08:12:18.639185608 +0000 UTC m=+0.157925641 container init ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:18 np0005481065 podman[93877]: 2025-10-11 08:12:18.654566838 +0000 UTC m=+0.173306871 container start ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:18 np0005481065 podman[93877]: 2025-10-11 08:12:18.659286862 +0000 UTC m=+0.178026895 container attach ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]: {
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "osd_id": 2,
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "type": "bluestore"
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:    },
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "osd_id": 0,
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "type": "bluestore"
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:    },
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "osd_id": 1,
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:        "type": "bluestore"
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]:    }
Oct 11 04:12:19 np0005481065 quizzical_varahamihira[93893]: }
Oct 11 04:12:19 np0005481065 systemd[1]: libpod-ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953.scope: Deactivated successfully.
Oct 11 04:12:19 np0005481065 systemd[1]: libpod-ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953.scope: Consumed 1.156s CPU time.
Oct 11 04:12:19 np0005481065 podman[93877]: 2025-10-11 08:12:19.80306837 +0000 UTC m=+1.321808373 container died ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a0efa8665933d25b05493d97c1328798b5774827d3f9e7e4d3d25f9e5b4b5140-merged.mount: Deactivated successfully.
Oct 11 04:12:19 np0005481065 podman[93877]: 2025-10-11 08:12:19.886462402 +0000 UTC m=+1.405202425 container remove ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:19 np0005481065 systemd[1]: libpod-conmon-ddbb02ed4aa0ae4365bcbd8e748ae9f9a0930fca57c7f459e021060974667953.scope: Deactivated successfully.
Oct 11 04:12:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:21 np0005481065 podman[94159]: 2025-10-11 08:12:21.220096664 +0000 UTC m=+0.076311511 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:12:21 np0005481065 podman[94159]: 2025-10-11 08:12:21.335351275 +0000 UTC m=+0.191566082 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:12:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3603fb6b-ce49-4d1a-8ec7-5b0fbc6c0309 does not exist
Oct 11 04:12:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1428d652-b061-46fa-9807-0426a0ec40e9 does not exist
Oct 11 04:12:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 574a984d-8751-46ae-bd04-4204a2491e53 does not exist
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:12:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:12:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:12:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.66624413 +0000 UTC m=+0.057804562 container create f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_buck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:12:23 np0005481065 systemd[1]: Started libpod-conmon-f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0.scope.
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.644290003 +0000 UTC m=+0.035850455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.771223408 +0000 UTC m=+0.162783880 container init f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_buck, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.783362915 +0000 UTC m=+0.174923327 container start f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_buck, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.786779582 +0000 UTC m=+0.178339994 container attach f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:23 np0005481065 festive_buck[94572]: 167 167
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.790052956 +0000 UTC m=+0.181613378 container died f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_buck, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:23 np0005481065 systemd[1]: libpod-f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0.scope: Deactivated successfully.
Oct 11 04:12:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-564fb30e052b19346450382d890df76bc736b4f71b91a62d94dbe585d3a5d6c8-merged.mount: Deactivated successfully.
Oct 11 04:12:23 np0005481065 podman[94556]: 2025-10-11 08:12:23.831132119 +0000 UTC m=+0.222692501 container remove f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 04:12:23 np0005481065 systemd[1]: libpod-conmon-f42533bdd23cebafe571d7fbc61a0120c4515abe28aa46fe65b513b44634fdf0.scope: Deactivated successfully.
Oct 11 04:12:24 np0005481065 podman[94595]: 2025-10-11 08:12:24.045054239 +0000 UTC m=+0.066261293 container create ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:24 np0005481065 systemd[1]: Started libpod-conmon-ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a.scope.
Oct 11 04:12:24 np0005481065 podman[94595]: 2025-10-11 08:12:24.016980657 +0000 UTC m=+0.038187751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228b8b584d6bedf3b99e931e8598022b4c99614c044c0ac2913bb74bcfdbb9f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228b8b584d6bedf3b99e931e8598022b4c99614c044c0ac2913bb74bcfdbb9f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228b8b584d6bedf3b99e931e8598022b4c99614c044c0ac2913bb74bcfdbb9f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228b8b584d6bedf3b99e931e8598022b4c99614c044c0ac2913bb74bcfdbb9f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228b8b584d6bedf3b99e931e8598022b4c99614c044c0ac2913bb74bcfdbb9f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:24 np0005481065 podman[94595]: 2025-10-11 08:12:24.136888982 +0000 UTC m=+0.158096096 container init ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:12:24 np0005481065 podman[94595]: 2025-10-11 08:12:24.150907923 +0000 UTC m=+0.172114967 container start ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:12:24 np0005481065 podman[94595]: 2025-10-11 08:12:24.155315379 +0000 UTC m=+0.176522453 container attach ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:25 np0005481065 sleepy_meitner[94612]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:12:25 np0005481065 sleepy_meitner[94612]: --> relative data size: 1.0
Oct 11 04:12:25 np0005481065 sleepy_meitner[94612]: --> All data devices are unavailable
Oct 11 04:12:25 np0005481065 systemd[1]: libpod-ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a.scope: Deactivated successfully.
Oct 11 04:12:25 np0005481065 podman[94595]: 2025-10-11 08:12:25.295714861 +0000 UTC m=+1.316921915 container died ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:25 np0005481065 systemd[1]: libpod-ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a.scope: Consumed 1.109s CPU time.
Oct 11 04:12:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-228b8b584d6bedf3b99e931e8598022b4c99614c044c0ac2913bb74bcfdbb9f6-merged.mount: Deactivated successfully.
Oct 11 04:12:25 np0005481065 podman[94595]: 2025-10-11 08:12:25.381960884 +0000 UTC m=+1.403167908 container remove ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:12:25 np0005481065 systemd[1]: libpod-conmon-ff3280440a759bc7a8a7fb93caf71b57f48c43f4c5a24765486ffbf6feeb340a.scope: Deactivated successfully.
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.106695373 +0000 UTC m=+0.036756951 container create 7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:12:26 np0005481065 systemd[1]: Started libpod-conmon-7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d.scope.
Oct 11 04:12:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.181574462 +0000 UTC m=+0.111636110 container init 7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.090700776 +0000 UTC m=+0.020762374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.189962851 +0000 UTC m=+0.120024439 container start 7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:12:26 np0005481065 compassionate_turing[94808]: 167 167
Oct 11 04:12:26 np0005481065 systemd[1]: libpod-7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d.scope: Deactivated successfully.
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.193782451 +0000 UTC m=+0.123844109 container attach 7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:26 np0005481065 conmon[94808]: conmon 7286ce34cfc5a16eb31f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d.scope/container/memory.events
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.194915503 +0000 UTC m=+0.124977091 container died 7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:12:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-49c51736f0aa9f50aa8f126cd7c77337d1f3ffdfbdede08ba95917102fdca048-merged.mount: Deactivated successfully.
Oct 11 04:12:26 np0005481065 podman[94792]: 2025-10-11 08:12:26.235057649 +0000 UTC m=+0.165119267 container remove 7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:12:26 np0005481065 systemd[1]: libpod-conmon-7286ce34cfc5a16eb31f1f9b48ed1b5621bf14fe6209d9a0ca95fd5d1738d40d.scope: Deactivated successfully.
Oct 11 04:12:26 np0005481065 podman[94832]: 2025-10-11 08:12:26.487402257 +0000 UTC m=+0.071283417 container create 2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:26 np0005481065 systemd[1]: Started libpod-conmon-2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd.scope.
Oct 11 04:12:26 np0005481065 podman[94832]: 2025-10-11 08:12:26.462894917 +0000 UTC m=+0.046775997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d277408c93c017e051605cd4b7c88aa02f2fe47a992d770163084d6a0af2e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d277408c93c017e051605cd4b7c88aa02f2fe47a992d770163084d6a0af2e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d277408c93c017e051605cd4b7c88aa02f2fe47a992d770163084d6a0af2e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d277408c93c017e051605cd4b7c88aa02f2fe47a992d770163084d6a0af2e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:26 np0005481065 podman[94832]: 2025-10-11 08:12:26.595510795 +0000 UTC m=+0.179391905 container init 2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:12:26 np0005481065 podman[94832]: 2025-10-11 08:12:26.606431367 +0000 UTC m=+0.190312437 container start 2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:12:26 np0005481065 podman[94832]: 2025-10-11 08:12:26.616068722 +0000 UTC m=+0.199949862 container attach 2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]: {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:    "0": [
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:        {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "devices": [
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "/dev/loop3"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            ],
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_name": "ceph_lv0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_size": "21470642176",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "name": "ceph_lv0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "tags": {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cluster_name": "ceph",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.crush_device_class": "",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.encrypted": "0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osd_id": "0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.type": "block",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.vdo": "0"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            },
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "type": "block",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "vg_name": "ceph_vg0"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:        }
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:    ],
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:    "1": [
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:        {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "devices": [
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "/dev/loop4"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            ],
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_name": "ceph_lv1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_size": "21470642176",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "name": "ceph_lv1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "tags": {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cluster_name": "ceph",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.crush_device_class": "",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.encrypted": "0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osd_id": "1",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.type": "block",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.vdo": "0"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            },
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "type": "block",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "vg_name": "ceph_vg1"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:        }
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:    ],
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:    "2": [
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:        {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "devices": [
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "/dev/loop5"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            ],
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_name": "ceph_lv2",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_size": "21470642176",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "name": "ceph_lv2",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "tags": {
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.cluster_name": "ceph",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.crush_device_class": "",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.encrypted": "0",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osd_id": "2",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.type": "block",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:                "ceph.vdo": "0"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            },
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "type": "block",
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:            "vg_name": "ceph_vg2"
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:        }
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]:    ]
Oct 11 04:12:27 np0005481065 mystifying_williamson[94848]: }
Oct 11 04:12:27 np0005481065 systemd[1]: libpod-2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd.scope: Deactivated successfully.
Oct 11 04:12:27 np0005481065 podman[94832]: 2025-10-11 08:12:27.397967635 +0000 UTC m=+0.981848685 container died 2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:12:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-49d277408c93c017e051605cd4b7c88aa02f2fe47a992d770163084d6a0af2e5-merged.mount: Deactivated successfully.
Oct 11 04:12:27 np0005481065 podman[94832]: 2025-10-11 08:12:27.463055114 +0000 UTC m=+1.046936164 container remove 2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:12:27 np0005481065 systemd[1]: libpod-conmon-2516c09656e224996a7e4fee4f8f585af8d92a7205d549249392f16c04d711dd.scope: Deactivated successfully.
Oct 11 04:12:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.192785836 +0000 UTC m=+0.042861265 container create 8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:28 np0005481065 systemd[1]: Started libpod-conmon-8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d.scope.
Oct 11 04:12:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.172984721 +0000 UTC m=+0.023060200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.281066228 +0000 UTC m=+0.131141727 container init 8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.29338478 +0000 UTC m=+0.143460229 container start 8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.299618128 +0000 UTC m=+0.149693587 container attach 8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:28 np0005481065 crazy_kare[95027]: 167 167
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.302715826 +0000 UTC m=+0.152791295 container died 8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:28 np0005481065 systemd[1]: libpod-8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d.scope: Deactivated successfully.
Oct 11 04:12:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ee21b604126a8b4cc1ecd6bdc009d9edc3fe6451afaedb5ac045777b146b9433-merged.mount: Deactivated successfully.
Oct 11 04:12:28 np0005481065 podman[95011]: 2025-10-11 08:12:28.348878875 +0000 UTC m=+0.198954344 container remove 8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:28 np0005481065 systemd[1]: libpod-conmon-8abb3e9c22623f5d809cf41ccb4c11f85c1cc6f7d84ae1e3e54f26727766f50d.scope: Deactivated successfully.
Oct 11 04:12:28 np0005481065 podman[95052]: 2025-10-11 08:12:28.58644936 +0000 UTC m=+0.069161416 container create 2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:28 np0005481065 systemd[1]: Started libpod-conmon-2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311.scope.
Oct 11 04:12:28 np0005481065 podman[95052]: 2025-10-11 08:12:28.55564755 +0000 UTC m=+0.038359676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:12:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7dd3c4096a40329b60d9aaed1378726142e445091ba93038024268df02533fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7dd3c4096a40329b60d9aaed1378726142e445091ba93038024268df02533fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7dd3c4096a40329b60d9aaed1378726142e445091ba93038024268df02533fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7dd3c4096a40329b60d9aaed1378726142e445091ba93038024268df02533fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:28 np0005481065 podman[95052]: 2025-10-11 08:12:28.693429196 +0000 UTC m=+0.176141292 container init 2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:28 np0005481065 podman[95052]: 2025-10-11 08:12:28.707127867 +0000 UTC m=+0.189839883 container start 2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:12:28 np0005481065 podman[95052]: 2025-10-11 08:12:28.713315314 +0000 UTC m=+0.196027420 container attach 2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:12:29 np0005481065 youthful_gates[95069]: {
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "osd_id": 2,
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "type": "bluestore"
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:    },
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "osd_id": 0,
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "type": "bluestore"
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:    },
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "osd_id": 1,
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:        "type": "bluestore"
Oct 11 04:12:29 np0005481065 youthful_gates[95069]:    }
Oct 11 04:12:29 np0005481065 youthful_gates[95069]: }
Oct 11 04:12:29 np0005481065 systemd[1]: libpod-2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311.scope: Deactivated successfully.
Oct 11 04:12:29 np0005481065 podman[95052]: 2025-10-11 08:12:29.82972485 +0000 UTC m=+1.312436906 container died 2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:29 np0005481065 systemd[1]: libpod-2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311.scope: Consumed 1.136s CPU time.
Oct 11 04:12:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f7dd3c4096a40329b60d9aaed1378726142e445091ba93038024268df02533fd-merged.mount: Deactivated successfully.
Oct 11 04:12:29 np0005481065 podman[95052]: 2025-10-11 08:12:29.909666173 +0000 UTC m=+1.392378229 container remove 2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gates, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:29 np0005481065 systemd[1]: libpod-conmon-2b399d0577373324cbfd7aaeef51c1e50cbd200bea738787b607e6d5b94ec311.scope: Deactivated successfully.
Oct 11 04:12:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:12:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:12:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:30 np0005481065 python3[95191]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:30 np0005481065 podman[95193]: 2025-10-11 08:12:30.586959938 +0000 UTC m=+0.076179457 container create 5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec (image=quay.io/ceph/ceph:v18, name=sweet_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:30 np0005481065 systemd[1]: Started libpod-conmon-5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec.scope.
Oct 11 04:12:30 np0005481065 podman[95193]: 2025-10-11 08:12:30.558415463 +0000 UTC m=+0.047635022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46372c05e4525c5f4cf1da9e15a6f4b08d9af8f1f5ee9f74093904a2babc3171/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46372c05e4525c5f4cf1da9e15a6f4b08d9af8f1f5ee9f74093904a2babc3171/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46372c05e4525c5f4cf1da9e15a6f4b08d9af8f1f5ee9f74093904a2babc3171/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:30 np0005481065 podman[95193]: 2025-10-11 08:12:30.696827426 +0000 UTC m=+0.186046975 container init 5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec (image=quay.io/ceph/ceph:v18, name=sweet_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:30 np0005481065 podman[95193]: 2025-10-11 08:12:30.711516406 +0000 UTC m=+0.200735925 container start 5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec (image=quay.io/ceph/ceph:v18, name=sweet_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:30 np0005481065 podman[95193]: 2025-10-11 08:12:30.71586781 +0000 UTC m=+0.205087369 container attach 5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec (image=quay.io/ceph/ceph:v18, name=sweet_babbage, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:12:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:12:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487578869' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:12:31 np0005481065 sweet_babbage[95209]: 
Oct 11 04:12:31 np0005481065 sweet_babbage[95209]: {"fsid":"33219f8b-dc38-5a8f-a577-8ccc4b37190a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":143,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1760170331,"num_in_osds":3,"osd_in_since":1760170303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":922337280,"bytes_avail":63489589248,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T08:11:56.659586+0000","services":{}},"progress_events":{}}
Oct 11 04:12:31 np0005481065 systemd[1]: libpod-5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec.scope: Deactivated successfully.
Oct 11 04:12:31 np0005481065 podman[95234]: 2025-10-11 08:12:31.387934315 +0000 UTC m=+0.028061302 container died 5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec (image=quay.io/ceph/ceph:v18, name=sweet_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:12:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-46372c05e4525c5f4cf1da9e15a6f4b08d9af8f1f5ee9f74093904a2babc3171-merged.mount: Deactivated successfully.
Oct 11 04:12:31 np0005481065 podman[95234]: 2025-10-11 08:12:31.444261514 +0000 UTC m=+0.084388431 container remove 5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec (image=quay.io/ceph/ceph:v18, name=sweet_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:31 np0005481065 systemd[1]: libpod-conmon-5817c04aab3b6ba675f47e53cc099b4fc21b029b5fec7b6c9c8a04a54421bdec.scope: Deactivated successfully.
Oct 11 04:12:31 np0005481065 python3[95275]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:32 np0005481065 podman[95276]: 2025-10-11 08:12:32.022406287 +0000 UTC m=+0.062333001 container create 4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8 (image=quay.io/ceph/ceph:v18, name=elated_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:32 np0005481065 systemd[1]: Started libpod-conmon-4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8.scope.
Oct 11 04:12:32 np0005481065 podman[95276]: 2025-10-11 08:12:31.99939166 +0000 UTC m=+0.039318374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58089793a51e2fe7409311ee55f3c11dace1dce525bd32d7341b84293360fe0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58089793a51e2fe7409311ee55f3c11dace1dce525bd32d7341b84293360fe0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:32 np0005481065 podman[95276]: 2025-10-11 08:12:32.122095855 +0000 UTC m=+0.162022619 container init 4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8 (image=quay.io/ceph/ceph:v18, name=elated_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:12:32 np0005481065 podman[95276]: 2025-10-11 08:12:32.132969365 +0000 UTC m=+0.172896069 container start 4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8 (image=quay.io/ceph/ceph:v18, name=elated_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:12:32 np0005481065 podman[95276]: 2025-10-11 08:12:32.137660719 +0000 UTC m=+0.177587433 container attach 4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8 (image=quay.io/ceph/ceph:v18, name=elated_ardinghelli, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:12:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1704947569' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 11 04:12:33 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1704947569' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1704947569' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 11 04:12:33 np0005481065 elated_ardinghelli[95291]: pool 'vms' created
Oct 11 04:12:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 11 04:12:33 np0005481065 systemd[1]: libpod-4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8.scope: Deactivated successfully.
Oct 11 04:12:33 np0005481065 podman[95276]: 2025-10-11 08:12:33.092933453 +0000 UTC m=+1.132860227 container died 4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8 (image=quay.io/ceph/ceph:v18, name=elated_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e58089793a51e2fe7409311ee55f3c11dace1dce525bd32d7341b84293360fe0-merged.mount: Deactivated successfully.
Oct 11 04:12:33 np0005481065 podman[95276]: 2025-10-11 08:12:33.152625208 +0000 UTC m=+1.192551922 container remove 4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8 (image=quay.io/ceph/ceph:v18, name=elated_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:12:33 np0005481065 systemd[1]: libpod-conmon-4fff84fc4e537c28bb64cd5353354f9d1cdd7e964b06ac978bff3a4623c4bca8.scope: Deactivated successfully.
Oct 11 04:12:33 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:33 np0005481065 python3[95355]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:33 np0005481065 podman[95356]: 2025-10-11 08:12:33.606761049 +0000 UTC m=+0.052804920 container create 133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69 (image=quay.io/ceph/ceph:v18, name=recursing_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:33 np0005481065 systemd[1]: Started libpod-conmon-133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69.scope.
Oct 11 04:12:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f3eeb7380c139bdc6e3a4236d779bef340c8bab7742b939a9522daf35f3e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f3eeb7380c139bdc6e3a4236d779bef340c8bab7742b939a9522daf35f3e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:33 np0005481065 podman[95356]: 2025-10-11 08:12:33.580947181 +0000 UTC m=+0.026991102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:33 np0005481065 podman[95356]: 2025-10-11 08:12:33.695305738 +0000 UTC m=+0.141349619 container init 133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69 (image=quay.io/ceph/ceph:v18, name=recursing_hellman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:33 np0005481065 podman[95356]: 2025-10-11 08:12:33.701384841 +0000 UTC m=+0.147428722 container start 133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69 (image=quay.io/ceph/ceph:v18, name=recursing_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:33 np0005481065 podman[95356]: 2025-10-11 08:12:33.704893562 +0000 UTC m=+0.150937523 container attach 133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69 (image=quay.io/ceph/ceph:v18, name=recursing_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 11 04:12:34 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1704947569' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 11 04:12:34 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 11 04:12:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:12:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523088299' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 11 04:12:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523088299' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 11 04:12:35 np0005481065 recursing_hellman[95372]: pool 'volumes' created
Oct 11 04:12:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 11 04:12:35 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:35 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/523088299' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:35 np0005481065 systemd[1]: libpod-133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69.scope: Deactivated successfully.
Oct 11 04:12:35 np0005481065 conmon[95372]: conmon 133f24c1a5da39aa8f2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69.scope/container/memory.events
Oct 11 04:12:35 np0005481065 podman[95356]: 2025-10-11 08:12:35.115202713 +0000 UTC m=+1.561246624 container died 133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69 (image=quay.io/ceph/ceph:v18, name=recursing_hellman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:12:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-878f3eeb7380c139bdc6e3a4236d779bef340c8bab7742b939a9522daf35f3e7-merged.mount: Deactivated successfully.
Oct 11 04:12:35 np0005481065 podman[95356]: 2025-10-11 08:12:35.182360361 +0000 UTC m=+1.628404262 container remove 133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69 (image=quay.io/ceph/ceph:v18, name=recursing_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:12:35 np0005481065 systemd[1]: libpod-conmon-133f24c1a5da39aa8f2b4ec5a5f0188d57144518d2419d76c01865bcba061d69.scope: Deactivated successfully.
Oct 11 04:12:35 np0005481065 python3[95438]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:35 np0005481065 podman[95439]: 2025-10-11 08:12:35.627340121 +0000 UTC m=+0.054810587 container create 7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f (image=quay.io/ceph/ceph:v18, name=beautiful_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:12:35 np0005481065 systemd[1]: Started libpod-conmon-7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f.scope.
Oct 11 04:12:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:35 np0005481065 podman[95439]: 2025-10-11 08:12:35.608434281 +0000 UTC m=+0.035904757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5db6cd733431eaae78edf69707b08ee71700fc441b4b3638b2c12803c999bb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5db6cd733431eaae78edf69707b08ee71700fc441b4b3638b2c12803c999bb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:35 np0005481065 podman[95439]: 2025-10-11 08:12:35.725068852 +0000 UTC m=+0.152539388 container init 7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f (image=quay.io/ceph/ceph:v18, name=beautiful_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:35 np0005481065 podman[95439]: 2025-10-11 08:12:35.734155302 +0000 UTC m=+0.161625768 container start 7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f (image=quay.io/ceph/ceph:v18, name=beautiful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:12:35 np0005481065 podman[95439]: 2025-10-11 08:12:35.737907729 +0000 UTC m=+0.165378255 container attach 7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f (image=quay.io/ceph/ceph:v18, name=beautiful_burnell, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/523088299' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:12:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3767672100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v63: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 11 04:12:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 11 04:12:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3767672100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 11 04:12:37 np0005481065 beautiful_burnell[95455]: pool 'backups' created
Oct 11 04:12:37 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3767672100' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:37 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 11 04:12:37 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:37 np0005481065 systemd[1]: libpod-7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f.scope: Deactivated successfully.
Oct 11 04:12:37 np0005481065 podman[95439]: 2025-10-11 08:12:37.153257823 +0000 UTC m=+1.580728259 container died 7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f (image=quay.io/ceph/ceph:v18, name=beautiful_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:12:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b5db6cd733431eaae78edf69707b08ee71700fc441b4b3638b2c12803c999bb3-merged.mount: Deactivated successfully.
Oct 11 04:12:37 np0005481065 podman[95439]: 2025-10-11 08:12:37.211542348 +0000 UTC m=+1.639012824 container remove 7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f (image=quay.io/ceph/ceph:v18, name=beautiful_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:37 np0005481065 systemd[1]: libpod-conmon-7cdbbe11592b85a684423b61a76f672678bfbda5e362ec27eacb36099ef5d69f.scope: Deactivated successfully.
Oct 11 04:12:37 np0005481065 python3[95519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:37 np0005481065 podman[95520]: 2025-10-11 08:12:37.651254547 +0000 UTC m=+0.058341047 container create a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9 (image=quay.io/ceph/ceph:v18, name=gracious_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:12:37 np0005481065 systemd[1]: Started libpod-conmon-a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9.scope.
Oct 11 04:12:37 np0005481065 podman[95520]: 2025-10-11 08:12:37.621638981 +0000 UTC m=+0.028725541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9ddbd72e126b30a2b761183a69109466fa06fe7b45ae6b7f8d277d8057b51c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9ddbd72e126b30a2b761183a69109466fa06fe7b45ae6b7f8d277d8057b51c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:37 np0005481065 podman[95520]: 2025-10-11 08:12:37.750772969 +0000 UTC m=+0.157859509 container init a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9 (image=quay.io/ceph/ceph:v18, name=gracious_banzai, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:12:37 np0005481065 podman[95520]: 2025-10-11 08:12:37.762069212 +0000 UTC m=+0.169155702 container start a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9 (image=quay.io/ceph/ceph:v18, name=gracious_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:37 np0005481065 podman[95520]: 2025-10-11 08:12:37.765913182 +0000 UTC m=+0.172999742 container attach a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9 (image=quay.io/ceph/ceph:v18, name=gracious_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:12:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 11 04:12:38 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3767672100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 11 04:12:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 11 04:12:38 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:12:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343089119' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 11 04:12:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343089119' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 11 04:12:39 np0005481065 gracious_banzai[95535]: pool 'images' created
Oct 11 04:12:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 11 04:12:39 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/343089119' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:39 np0005481065 systemd[1]: libpod-a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9.scope: Deactivated successfully.
Oct 11 04:12:39 np0005481065 podman[95520]: 2025-10-11 08:12:39.164701846 +0000 UTC m=+1.571788356 container died a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9 (image=quay.io/ceph/ceph:v18, name=gracious_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:12:39 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0c9ddbd72e126b30a2b761183a69109466fa06fe7b45ae6b7f8d277d8057b51c-merged.mount: Deactivated successfully.
Oct 11 04:12:39 np0005481065 podman[95520]: 2025-10-11 08:12:39.220149282 +0000 UTC m=+1.627235792 container remove a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9 (image=quay.io/ceph/ceph:v18, name=gracious_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:39 np0005481065 systemd[1]: libpod-conmon-a9a16da9426be1a853ab16a053b63964f231c13bf9cf2e30a891294ba4565ed9.scope: Deactivated successfully.
Oct 11 04:12:39 np0005481065 python3[95599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:39 np0005481065 podman[95600]: 2025-10-11 08:12:39.718502689 +0000 UTC m=+0.065242227 container create 3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd (image=quay.io/ceph/ceph:v18, name=nifty_neumann, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:12:39 np0005481065 systemd[1]: Started libpod-conmon-3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd.scope.
Oct 11 04:12:39 np0005481065 podman[95600]: 2025-10-11 08:12:39.698194658 +0000 UTC m=+0.044934216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29219ce61e5665b6cfaacdbc8717f5298d01ea278f783d8a428f658099872c2d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29219ce61e5665b6cfaacdbc8717f5298d01ea278f783d8a428f658099872c2d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:39 np0005481065 podman[95600]: 2025-10-11 08:12:39.820184498 +0000 UTC m=+0.166924106 container init 3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd (image=quay.io/ceph/ceph:v18, name=nifty_neumann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:39 np0005481065 podman[95600]: 2025-10-11 08:12:39.830127663 +0000 UTC m=+0.176867201 container start 3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd (image=quay.io/ceph/ceph:v18, name=nifty_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:12:39 np0005481065 podman[95600]: 2025-10-11 08:12:39.833716065 +0000 UTC m=+0.180455703 container attach 3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd (image=quay.io/ceph/ceph:v18, name=nifty_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 11 04:12:40 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/343089119' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 11 04:12:40 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 11 04:12:40 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2149037775' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 11 04:12:41 np0005481065 ceph-mon[74313]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2149037775' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 11 04:12:41 np0005481065 nifty_neumann[95616]: pool 'cephfs.cephfs.meta' created
Oct 11 04:12:41 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 11 04:12:41 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:41 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2149037775' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:41 np0005481065 systemd[1]: libpod-3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd.scope: Deactivated successfully.
Oct 11 04:12:41 np0005481065 podman[95600]: 2025-10-11 08:12:41.209370319 +0000 UTC m=+1.556109877 container died 3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd (image=quay.io/ceph/ceph:v18, name=nifty_neumann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:12:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-29219ce61e5665b6cfaacdbc8717f5298d01ea278f783d8a428f658099872c2d-merged.mount: Deactivated successfully.
Oct 11 04:12:41 np0005481065 podman[95600]: 2025-10-11 08:12:41.271590189 +0000 UTC m=+1.618329767 container remove 3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd (image=quay.io/ceph/ceph:v18, name=nifty_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:12:41 np0005481065 systemd[1]: libpod-conmon-3fe60133a2423d115f6b6fc17c15e5b9e8463037139786a39cffc5bd8ecb78fd.scope: Deactivated successfully.
Oct 11 04:12:41 np0005481065 python3[95680]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:41 np0005481065 podman[95681]: 2025-10-11 08:12:41.689695221 +0000 UTC m=+0.067738669 container create 8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b (image=quay.io/ceph/ceph:v18, name=great_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:41 np0005481065 systemd[1]: Started libpod-conmon-8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b.scope.
Oct 11 04:12:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:41 np0005481065 podman[95681]: 2025-10-11 08:12:41.662314707 +0000 UTC m=+0.040358215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243b6a3ddd1afe3ac6eb011c33c040abfa0475d3b9f42af71a2a1222ab35c955/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243b6a3ddd1afe3ac6eb011c33c040abfa0475d3b9f42af71a2a1222ab35c955/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:41 np0005481065 podman[95681]: 2025-10-11 08:12:41.774296041 +0000 UTC m=+0.152339469 container init 8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b (image=quay.io/ceph/ceph:v18, name=great_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:41 np0005481065 podman[95681]: 2025-10-11 08:12:41.779800389 +0000 UTC m=+0.157843787 container start 8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b (image=quay.io/ceph/ceph:v18, name=great_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:41 np0005481065 podman[95681]: 2025-10-11 08:12:41.783759992 +0000 UTC m=+0.161803400 container attach 8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b (image=quay.io/ceph/ceph:v18, name=great_johnson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2149037775' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 11 04:12:42 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/763646340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v72: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 11 04:12:43 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/763646340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 11 04:12:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/763646340' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 11 04:12:43 np0005481065 great_johnson[95696]: pool 'cephfs.cephfs.data' created
Oct 11 04:12:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 11 04:12:43 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:43 np0005481065 systemd[1]: libpod-8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b.scope: Deactivated successfully.
Oct 11 04:12:43 np0005481065 podman[95681]: 2025-10-11 08:12:43.258003136 +0000 UTC m=+1.636046634 container died 8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b (image=quay.io/ceph/ceph:v18, name=great_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:12:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-243b6a3ddd1afe3ac6eb011c33c040abfa0475d3b9f42af71a2a1222ab35c955-merged.mount: Deactivated successfully.
Oct 11 04:12:43 np0005481065 podman[95681]: 2025-10-11 08:12:43.315592703 +0000 UTC m=+1.693636151 container remove 8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b (image=quay.io/ceph/ceph:v18, name=great_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:43 np0005481065 systemd[1]: libpod-conmon-8856273d3fe9cec9479f44d6907a4188b32ea67d19a252e1e3bfc6b163c0a61b.scope: Deactivated successfully.
Oct 11 04:12:43 np0005481065 python3[95760]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:43 np0005481065 podman[95761]: 2025-10-11 08:12:43.815422112 +0000 UTC m=+0.049495846 container create adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81 (image=quay.io/ceph/ceph:v18, name=zealous_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:12:43 np0005481065 systemd[1]: Started libpod-conmon-adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81.scope.
Oct 11 04:12:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:43 np0005481065 podman[95761]: 2025-10-11 08:12:43.794284748 +0000 UTC m=+0.028358502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef04e7bff82ff193c823d1797d814233c53a7079498708f28105cdbb6aa9b1f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef04e7bff82ff193c823d1797d814233c53a7079498708f28105cdbb6aa9b1f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:43 np0005481065 podman[95761]: 2025-10-11 08:12:43.916079122 +0000 UTC m=+0.150152866 container init adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81 (image=quay.io/ceph/ceph:v18, name=zealous_lovelace, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:12:43 np0005481065 podman[95761]: 2025-10-11 08:12:43.9219474 +0000 UTC m=+0.156021134 container start adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81 (image=quay.io/ceph/ceph:v18, name=zealous_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:12:43 np0005481065 podman[95761]: 2025-10-11 08:12:43.925857252 +0000 UTC m=+0.159931236 container attach adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81 (image=quay.io/ceph/ceph:v18, name=zealous_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 11 04:12:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 11 04:12:44 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 11 04:12:44 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/763646340' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 11 04:12:44 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 11 04:12:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2968070997' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 11 04:12:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 11 04:12:45 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2968070997' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 11 04:12:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2968070997' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 11 04:12:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 11 04:12:45 np0005481065 zealous_lovelace[95776]: enabled application 'rbd' on pool 'vms'
Oct 11 04:12:45 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 11 04:12:45 np0005481065 systemd[1]: libpod-adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81.scope: Deactivated successfully.
Oct 11 04:12:45 np0005481065 conmon[95776]: conmon adaaf7f9f2c8a68f758f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81.scope/container/memory.events
Oct 11 04:12:45 np0005481065 podman[95761]: 2025-10-11 08:12:45.296995168 +0000 UTC m=+1.531068932 container died adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81 (image=quay.io/ceph/ceph:v18, name=zealous_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:12:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8ef04e7bff82ff193c823d1797d814233c53a7079498708f28105cdbb6aa9b1f-merged.mount: Deactivated successfully.
Oct 11 04:12:45 np0005481065 podman[95761]: 2025-10-11 08:12:45.374811415 +0000 UTC m=+1.608885179 container remove adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81 (image=quay.io/ceph/ceph:v18, name=zealous_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:45 np0005481065 systemd[1]: libpod-conmon-adaaf7f9f2c8a68f758f767beb80c53d38448205bd7e099263b7421b1e75ca81.scope: Deactivated successfully.
Oct 11 04:12:45 np0005481065 python3[95840]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:45 np0005481065 podman[95841]: 2025-10-11 08:12:45.782080076 +0000 UTC m=+0.075131091 container create 597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2 (image=quay.io/ceph/ceph:v18, name=adoring_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:12:45 np0005481065 systemd[1]: Started libpod-conmon-597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2.scope.
Oct 11 04:12:45 np0005481065 podman[95841]: 2025-10-11 08:12:45.748389882 +0000 UTC m=+0.041440977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416de01ca54b04758fcca2af6e2a047d019ba24557c631897573e0db3557492a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416de01ca54b04758fcca2af6e2a047d019ba24557c631897573e0db3557492a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:45 np0005481065 podman[95841]: 2025-10-11 08:12:45.865596275 +0000 UTC m=+0.158647310 container init 597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2 (image=quay.io/ceph/ceph:v18, name=adoring_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:12:45 np0005481065 podman[95841]: 2025-10-11 08:12:45.872619936 +0000 UTC m=+0.165670951 container start 597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2 (image=quay.io/ceph/ceph:v18, name=adoring_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:45 np0005481065 podman[95841]: 2025-10-11 08:12:45.876391354 +0000 UTC m=+0.169442359 container attach 597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2 (image=quay.io/ceph/ceph:v18, name=adoring_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:12:46 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2968070997' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 11 04:12:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 11 04:12:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1594430872' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 11 04:12:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1594430872' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 11 04:12:47 np0005481065 adoring_varahamihira[95857]: enabled application 'rbd' on pool 'volumes'
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1594430872' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 11 04:12:47 np0005481065 systemd[1]: libpod-597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2.scope: Deactivated successfully.
Oct 11 04:12:47 np0005481065 podman[95841]: 2025-10-11 08:12:47.332525771 +0000 UTC m=+1.625576806 container died 597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2 (image=quay.io/ceph/ceph:v18, name=adoring_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:12:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-416de01ca54b04758fcca2af6e2a047d019ba24557c631897573e0db3557492a-merged.mount: Deactivated successfully.
Oct 11 04:12:47 np0005481065 podman[95841]: 2025-10-11 08:12:47.386106273 +0000 UTC m=+1.679157288 container remove 597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2 (image=quay.io/ceph/ceph:v18, name=adoring_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:12:47 np0005481065 systemd[1]: libpod-conmon-597aea6c8917f22ac517218d91f7cac164adabe3bd5be01d33b5782ac71014b2.scope: Deactivated successfully.
Oct 11 04:12:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:47 np0005481065 python3[95919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:47 np0005481065 podman[95920]: 2025-10-11 08:12:47.791960744 +0000 UTC m=+0.061545422 container create c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:12:47 np0005481065 systemd[1]: Started libpod-conmon-c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be.scope.
Oct 11 04:12:47 np0005481065 podman[95920]: 2025-10-11 08:12:47.757192689 +0000 UTC m=+0.026777417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6123f6e7fff606b07446e3df379fe681f5eee5147ea18fd55b305ae0561ca5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6123f6e7fff606b07446e3df379fe681f5eee5147ea18fd55b305ae0561ca5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:47 np0005481065 podman[95920]: 2025-10-11 08:12:47.897889624 +0000 UTC m=+0.167474332 container init c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be (image=quay.io/ceph/ceph:v18, name=adoring_panini, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:12:47 np0005481065 podman[95920]: 2025-10-11 08:12:47.908200679 +0000 UTC m=+0.177785317 container start c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:12:47 np0005481065 podman[95920]: 2025-10-11 08:12:47.912856583 +0000 UTC m=+0.182441221 container attach c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:12:48 np0005481065 ceph-mon[74313]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:48 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1594430872' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 11 04:12:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 11 04:12:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1520165397' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 11 04:12:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 11 04:12:49 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1520165397' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 11 04:12:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1520165397' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 11 04:12:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 11 04:12:49 np0005481065 adoring_panini[95935]: enabled application 'rbd' on pool 'backups'
Oct 11 04:12:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 11 04:12:49 np0005481065 systemd[1]: libpod-c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be.scope: Deactivated successfully.
Oct 11 04:12:49 np0005481065 podman[95920]: 2025-10-11 08:12:49.356373279 +0000 UTC m=+1.625957947 container died c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be (image=quay.io/ceph/ceph:v18, name=adoring_panini, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f6123f6e7fff606b07446e3df379fe681f5eee5147ea18fd55b305ae0561ca5b-merged.mount: Deactivated successfully.
Oct 11 04:12:49 np0005481065 podman[95920]: 2025-10-11 08:12:49.427317279 +0000 UTC m=+1.696901957 container remove c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be (image=quay.io/ceph/ceph:v18, name=adoring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:12:49 np0005481065 systemd[1]: libpod-conmon-c49043667a3e00747c341a03208bfb5884504a064e0ddf6b8a560d05120ab9be.scope: Deactivated successfully.
Oct 11 04:12:49 np0005481065 python3[96000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:49 np0005481065 podman[96001]: 2025-10-11 08:12:49.908940137 +0000 UTC m=+0.067933324 container create bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e (image=quay.io/ceph/ceph:v18, name=sad_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:12:49 np0005481065 systemd[1]: Started libpod-conmon-bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e.scope.
Oct 11 04:12:49 np0005481065 podman[96001]: 2025-10-11 08:12:49.88106584 +0000 UTC m=+0.040059037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cc50736b82d4c5c01ac6b975e366cf97bee0b19cffcf58423d3c96499c1d38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cc50736b82d4c5c01ac6b975e366cf97bee0b19cffcf58423d3c96499c1d38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:50 np0005481065 podman[96001]: 2025-10-11 08:12:50.021394314 +0000 UTC m=+0.180387551 container init bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e (image=quay.io/ceph/ceph:v18, name=sad_kilby, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:12:50 np0005481065 podman[96001]: 2025-10-11 08:12:50.032703348 +0000 UTC m=+0.191696535 container start bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e (image=quay.io/ceph/ceph:v18, name=sad_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:50 np0005481065 podman[96001]: 2025-10-11 08:12:50.036389563 +0000 UTC m=+0.195382750 container attach bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e (image=quay.io/ceph/ceph:v18, name=sad_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:50 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1520165397' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 11 04:12:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 11 04:12:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1867008952' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 11 04:12:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 11 04:12:51 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1867008952' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 11 04:12:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1867008952' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 11 04:12:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 11 04:12:51 np0005481065 sad_kilby[96017]: enabled application 'rbd' on pool 'images'
Oct 11 04:12:51 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 11 04:12:51 np0005481065 systemd[1]: libpod-bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e.scope: Deactivated successfully.
Oct 11 04:12:51 np0005481065 podman[96001]: 2025-10-11 08:12:51.380605638 +0000 UTC m=+1.539598815 container died bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e (image=quay.io/ceph/ceph:v18, name=sad_kilby, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:12:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-87cc50736b82d4c5c01ac6b975e366cf97bee0b19cffcf58423d3c96499c1d38-merged.mount: Deactivated successfully.
Oct 11 04:12:51 np0005481065 podman[96001]: 2025-10-11 08:12:51.542939892 +0000 UTC m=+1.701933079 container remove bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e (image=quay.io/ceph/ceph:v18, name=sad_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:51 np0005481065 systemd[1]: libpod-conmon-bb562bd171e24a6da8f239f33f7da46c5586c01baffced9ac7d975552161a36e.scope: Deactivated successfully.
Oct 11 04:12:51 np0005481065 python3[96081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:52 np0005481065 podman[96082]: 2025-10-11 08:12:52.08923202 +0000 UTC m=+0.107139826 container create cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8 (image=quay.io/ceph/ceph:v18, name=strange_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:52 np0005481065 podman[96082]: 2025-10-11 08:12:52.025061435 +0000 UTC m=+0.042969321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:52 np0005481065 systemd[1]: Started libpod-conmon-cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8.scope.
Oct 11 04:12:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e53b23116fc630e9901b0974a6ec68d41813ef5df96518abe5ff5079d167ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e53b23116fc630e9901b0974a6ec68d41813ef5df96518abe5ff5079d167ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:52 np0005481065 podman[96082]: 2025-10-11 08:12:52.324392958 +0000 UTC m=+0.342300834 container init cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8 (image=quay.io/ceph/ceph:v18, name=strange_fermi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:12:52 np0005481065 podman[96082]: 2025-10-11 08:12:52.336904656 +0000 UTC m=+0.354812462 container start cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8 (image=quay.io/ceph/ceph:v18, name=strange_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:52 np0005481065 podman[96082]: 2025-10-11 08:12:52.409274756 +0000 UTC m=+0.427182642 container attach cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8 (image=quay.io/ceph/ceph:v18, name=strange_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:52 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/1867008952' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 11 04:12:52 np0005481065 ceph-mon[74313]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 11 04:12:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/967859063' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 11 04:12:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 11 04:12:53 np0005481065 ceph-mon[74313]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 11 04:12:53 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/967859063' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 11 04:12:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/967859063' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 11 04:12:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 11 04:12:53 np0005481065 strange_fermi[96098]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 11 04:12:53 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 11 04:12:53 np0005481065 systemd[1]: libpod-cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8.scope: Deactivated successfully.
Oct 11 04:12:53 np0005481065 podman[96082]: 2025-10-11 08:12:53.447381805 +0000 UTC m=+1.465289631 container died cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8 (image=quay.io/ceph/ceph:v18, name=strange_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:12:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-25e53b23116fc630e9901b0974a6ec68d41813ef5df96518abe5ff5079d167ae-merged.mount: Deactivated successfully.
Oct 11 04:12:53 np0005481065 podman[96082]: 2025-10-11 08:12:53.509370938 +0000 UTC m=+1.527278774 container remove cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8 (image=quay.io/ceph/ceph:v18, name=strange_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:12:53 np0005481065 systemd[1]: libpod-conmon-cec2994b88fdd4e57c6cf951eb4cfca15b30b6ccee44ef8f922b39d3cacf02a8.scope: Deactivated successfully.
Oct 11 04:12:53 np0005481065 python3[96160]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:53 np0005481065 podman[96161]: 2025-10-11 08:12:53.939174354 +0000 UTC m=+0.075303045 container create ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0 (image=quay.io/ceph/ceph:v18, name=compassionate_williams, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:12:53 np0005481065 systemd[1]: Started libpod-conmon-ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0.scope.
Oct 11 04:12:53 np0005481065 podman[96161]: 2025-10-11 08:12:53.906732326 +0000 UTC m=+0.042861067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a65c3925279dac46b2578bcf69bcf2c8bcd79a69ab2a3d9b88d71cd397b9f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a65c3925279dac46b2578bcf69bcf2c8bcd79a69ab2a3d9b88d71cd397b9f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:54 np0005481065 podman[96161]: 2025-10-11 08:12:54.04773102 +0000 UTC m=+0.183859721 container init ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0 (image=quay.io/ceph/ceph:v18, name=compassionate_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:12:54 np0005481065 podman[96161]: 2025-10-11 08:12:54.057485779 +0000 UTC m=+0.193614470 container start ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0 (image=quay.io/ceph/ceph:v18, name=compassionate_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:12:54 np0005481065 podman[96161]: 2025-10-11 08:12:54.061471973 +0000 UTC m=+0.197600664 container attach ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0 (image=quay.io/ceph/ceph:v18, name=compassionate_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:12:54 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/967859063' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 11 04:12:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 11 04:12:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4184359992' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:12:54
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'vms', 'images']
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:12:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:12:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:12:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/4184359992' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4184359992' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 11 04:12:55 np0005481065 compassionate_williams[96177]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 11 04:12:55 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev 2f9fdae7-882c-4a3a-a85e-27c4cbc86c5b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:12:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:55 np0005481065 systemd[1]: libpod-ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0.scope: Deactivated successfully.
Oct 11 04:12:55 np0005481065 podman[96161]: 2025-10-11 08:12:55.479469998 +0000 UTC m=+1.615598689 container died ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0 (image=quay.io/ceph/ceph:v18, name=compassionate_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 11 04:12:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-10a65c3925279dac46b2578bcf69bcf2c8bcd79a69ab2a3d9b88d71cd397b9f3-merged.mount: Deactivated successfully.
Oct 11 04:12:55 np0005481065 podman[96161]: 2025-10-11 08:12:55.527975515 +0000 UTC m=+1.664104176 container remove ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0 (image=quay.io/ceph/ceph:v18, name=compassionate_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:12:55 np0005481065 systemd[1]: libpod-conmon-ceddec1c565b7b6bf28f7a7d0c12e723646338d99045fe89d67e86cee72647e0.scope: Deactivated successfully.
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/4184359992' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 11 04:12:56 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev 25c6f460-965e-4540-8ea1-0c8674f57a35 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:56 np0005481065 python3[96290]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:12:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v88: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:12:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:56 np0005481065 python3[96361]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170376.2366004-33215-117522337161010/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 11 04:12:57 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev c76432ef-1fb9-461e-88d3-918d4ee38a68 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 11 04:12:57 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.628627777s) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 67.718368530s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:57 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.628627777s) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 67.718368530s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:12:57 np0005481065 python3[96463]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:12:58 np0005481065 python3[96538]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170377.389889-33229-261343728739895/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=2626df0005e1287c183601708d2daa2e557e0edb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: Cluster is now healthy
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 11 04:12:58 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev 6569a04b-316d-4ed4-919c-64a2f52d97c3 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:58 np0005481065 python3[96588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:12:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:58 np0005481065 podman[96589]: 2025-10-11 08:12:58.727874209 +0000 UTC m=+0.071996961 container create beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c (image=quay.io/ceph/ceph:v18, name=brave_colden, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:12:58 np0005481065 systemd[1]: Started libpod-conmon-beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c.scope.
Oct 11 04:12:58 np0005481065 podman[96589]: 2025-10-11 08:12:58.698396705 +0000 UTC m=+0.042519507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:12:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:12:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81efb5230adbd77bfaa1b54dd426ed22ace61962d9d605cdc1ac72ec0be55cb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81efb5230adbd77bfaa1b54dd426ed22ace61962d9d605cdc1ac72ec0be55cb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81efb5230adbd77bfaa1b54dd426ed22ace61962d9d605cdc1ac72ec0be55cb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:12:58 np0005481065 podman[96589]: 2025-10-11 08:12:58.817296257 +0000 UTC m=+0.161419029 container init beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c (image=quay.io/ceph/ceph:v18, name=brave_colden, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:12:58 np0005481065 podman[96589]: 2025-10-11 08:12:58.831770631 +0000 UTC m=+0.175893353 container start beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c (image=quay.io/ceph/ceph:v18, name=brave_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:12:58 np0005481065 podman[96589]: 2025-10-11 08:12:58.846572274 +0000 UTC m=+0.190695066 container attach beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c (image=quay.io/ceph/ceph:v18, name=brave_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=15.048598289s) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 67.836486816s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=15.048598289s) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 67.836486816s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 11 04:12:59 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3008815666' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3008815666' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:12:59 np0005481065 brave_colden[96605]: 
Oct 11 04:12:59 np0005481065 brave_colden[96605]: [global]
Oct 11 04:12:59 np0005481065 brave_colden[96605]: #011fsid = 33219f8b-dc38-5a8f-a577-8ccc4b37190a
Oct 11 04:12:59 np0005481065 brave_colden[96605]: #011mon_host = 192.168.122.100
Oct 11 04:12:59 np0005481065 systemd[1]: libpod-beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c.scope: Deactivated successfully.
Oct 11 04:12:59 np0005481065 conmon[96605]: conmon beb3e5aac3d4d83080f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c.scope/container/memory.events
Oct 11 04:12:59 np0005481065 podman[96589]: 2025-10-11 08:12:59.423954512 +0000 UTC m=+0.768077224 container died beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c (image=quay.io/ceph/ceph:v18, name=brave_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:12:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-81efb5230adbd77bfaa1b54dd426ed22ace61962d9d605cdc1ac72ec0be55cb4-merged.mount: Deactivated successfully.
Oct 11 04:12:59 np0005481065 podman[96589]: 2025-10-11 08:12:59.486387908 +0000 UTC m=+0.830510630 container remove beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c (image=quay.io/ceph/ceph:v18, name=brave_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 11 04:12:59 np0005481065 systemd[1]: libpod-conmon-beb3e5aac3d4d83080f97a5db63dea6dbc76bc8db89297faaba7310df96bb07c.scope: Deactivated successfully.
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=12.673647881s) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 65.943511963s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3008815666' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3008815666' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=12.673647881s) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 65.943511963s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev 514bf44a-423e-46d2-b94d-6df9ae4c5828 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=37/39 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:12:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Oct 11 04:12:59 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.372782707s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 75.099990845s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:12:59 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.372782707s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 75.099990845s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:12:59 np0005481065 python3[96743]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:12:59 np0005481065 podman[96767]: 2025-10-11 08:12:59.941236361 +0000 UTC m=+0.058901546 container create a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca (image=quay.io/ceph/ceph:v18, name=jolly_liskov, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:12:59 np0005481065 systemd[1]: Started libpod-conmon-a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca.scope.
Oct 11 04:13:00 np0005481065 podman[96767]: 2025-10-11 08:12:59.907919738 +0000 UTC m=+0.025584943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89b7da52a757f102bd330475d43a81a85dd614e08a148688af286e5b3be8186/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89b7da52a757f102bd330475d43a81a85dd614e08a148688af286e5b3be8186/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89b7da52a757f102bd330475d43a81a85dd614e08a148688af286e5b3be8186/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:00 np0005481065 podman[96767]: 2025-10-11 08:13:00.06773897 +0000 UTC m=+0.185404175 container init a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca (image=quay.io/ceph/ceph:v18, name=jolly_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:13:00 np0005481065 podman[96767]: 2025-10-11 08:13:00.075024608 +0000 UTC m=+0.192689773 container start a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca (image=quay.io/ceph/ceph:v18, name=jolly_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:00 np0005481065 podman[96767]: 2025-10-11 08:13:00.081400591 +0000 UTC m=+0.199065766 container attach a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca (image=quay.io/ceph/ceph:v18, name=jolly_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 11 04:13:00 np0005481065 podman[96857]: 2025-10-11 08:13:00.403143545 +0000 UTC m=+0.080075052 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 11 04:13:00 np0005481065 podman[96857]: 2025-10-11 08:13:00.520365829 +0000 UTC m=+0.197297346 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev e88cc5bf-f3b5-4da5-8ea3-a2f0910c0202 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev 2f9fdae7-882c-4a3a-a85e-27c4cbc86c5b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 2f9fdae7-882c-4a3a-a85e-27c4cbc86c5b (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev 25c6f460-965e-4540-8ea1-0c8674f57a35 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 25c6f460-965e-4540-8ea1-0c8674f57a35 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev c76432ef-1fb9-461e-88d3-918d4ee38a68 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event c76432ef-1fb9-461e-88d3-918d4ee38a68 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev 6569a04b-316d-4ed4-919c-64a2f52d97c3 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 6569a04b-316d-4ed4-919c-64a2f52d97c3 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev 514bf44a-423e-46d2-b94d-6df9ae4c5828 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 514bf44a-423e-46d2-b94d-6df9ae4c5828 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev e88cc5bf-f3b5-4da5-8ea3-a2f0910c0202 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event e88cc5bf-f3b5-4da5-8ea3-a2f0910c0202 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v94: 131 pgs: 124 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 11 04:13:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2684815010' entity='client.admin' 
Oct 11 04:13:00 np0005481065 jolly_liskov[96808]: set ssl_option
Oct 11 04:13:00 np0005481065 systemd[1]: libpod-a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca.scope: Deactivated successfully.
Oct 11 04:13:00 np0005481065 podman[96767]: 2025-10-11 08:13:00.784193986 +0000 UTC m=+0.901859141 container died a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca (image=quay.io/ceph/ceph:v18, name=jolly_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a89b7da52a757f102bd330475d43a81a85dd614e08a148688af286e5b3be8186-merged.mount: Deactivated successfully.
Oct 11 04:13:00 np0005481065 podman[96767]: 2025-10-11 08:13:00.828567266 +0000 UTC m=+0.946232421 container remove a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca (image=quay.io/ceph/ceph:v18, name=jolly_liskov, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:00 np0005481065 systemd[1]: libpod-conmon-a78bfd538240a432e400857412c3e53f8ac2c6db8ffab4b30ccdfcddab2b25ca.scope: Deactivated successfully.
Oct 11 04:13:01 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 11 04:13:01 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:01 np0005481065 python3[97020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dff5970a-5db7-4ae5-ae68-ef7cd1719855 does not exist
Oct 11 04:13:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ea771fa7-975e-4857-a507-71e907c7d2ee does not exist
Oct 11 04:13:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a6b6a385-f398-42b9-ade8-544f9888ebd9 does not exist
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:01 np0005481065 podman[97037]: 2025-10-11 08:13:01.357173268 +0000 UTC m=+0.058891376 container create 23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678 (image=quay.io/ceph/ceph:v18, name=stupefied_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:01 np0005481065 systemd[75935]: Starting Mark boot as successful...
Oct 11 04:13:01 np0005481065 systemd[1]: Started libpod-conmon-23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678.scope.
Oct 11 04:13:01 np0005481065 systemd[75935]: Finished Mark boot as successful.
Oct 11 04:13:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c3464d852ad9ee0b137df342745f1cedaa97eca224b4d6f88e0677398a8b9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c3464d852ad9ee0b137df342745f1cedaa97eca224b4d6f88e0677398a8b9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c3464d852ad9ee0b137df342745f1cedaa97eca224b4d6f88e0677398a8b9d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:01 np0005481065 podman[97037]: 2025-10-11 08:13:01.335425476 +0000 UTC m=+0.037143614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:01 np0005481065 podman[97037]: 2025-10-11 08:13:01.448426179 +0000 UTC m=+0.150144287 container init 23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678 (image=quay.io/ceph/ceph:v18, name=stupefied_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:13:01 np0005481065 podman[97037]: 2025-10-11 08:13:01.455989825 +0000 UTC m=+0.157707923 container start 23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678 (image=quay.io/ceph/ceph:v18, name=stupefied_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:13:01 np0005481065 podman[97037]: 2025-10-11 08:13:01.459861576 +0000 UTC m=+0.161579724 container attach 23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678 (image=quay.io/ceph/ceph:v18, name=stupefied_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 11 04:13:01 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=12.672432899s) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 79.172584534s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2684815010' entity='client.admin' 
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:01 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=12.672432899s) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown pruub 79.172584534s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:13:02 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 11 04:13:02 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:02 np0005481065 stupefied_elion[97085]: Scheduled rgw.rgw update...
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.049921616 +0000 UTC m=+0.060197134 container create 61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:13:02 np0005481065 systemd[1]: libpod-23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678.scope: Deactivated successfully.
Oct 11 04:13:02 np0005481065 podman[97037]: 2025-10-11 08:13:02.063567486 +0000 UTC m=+0.765285604 container died 23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678 (image=quay.io/ceph/ceph:v18, name=stupefied_elion, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:02 np0005481065 systemd[1]: Started libpod-conmon-61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8.scope.
Oct 11 04:13:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c0c3464d852ad9ee0b137df342745f1cedaa97eca224b4d6f88e0677398a8b9d-merged.mount: Deactivated successfully.
Oct 11 04:13:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 11 04:13:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:02 np0005481065 podman[97037]: 2025-10-11 08:13:02.120162315 +0000 UTC m=+0.821880423 container remove 23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678 (image=quay.io/ceph/ceph:v18, name=stupefied_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.028939415 +0000 UTC m=+0.039214913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.137772009 +0000 UTC m=+0.148047547 container init 61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:02 np0005481065 systemd[1]: libpod-conmon-23958eda8b5e468b7e34bfecd92e376b84df6893cd97ce60d7743a1659178678.scope: Deactivated successfully.
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.15074346 +0000 UTC m=+0.161018968 container start 61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.154803186 +0000 UTC m=+0.165078694 container attach 61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:02 np0005481065 happy_haibt[97240]: 167 167
Oct 11 04:13:02 np0005481065 systemd[1]: libpod-61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8.scope: Deactivated successfully.
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.158600865 +0000 UTC m=+0.168876373 container died 61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2de06634cd2b58be1537f6292290b209504c129cbcb6fdcc4d43a5134aecd2c1-merged.mount: Deactivated successfully.
Oct 11 04:13:02 np0005481065 podman[97215]: 2025-10-11 08:13:02.207504654 +0000 UTC m=+0.217780172 container remove 61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:02 np0005481065 systemd[1]: libpod-conmon-61aea3a61ef6402c57c1b42526a46c6f51f04bb5be7f4839b44352438a4c8db8.scope: Deactivated successfully.
Oct 11 04:13:02 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 11 04:13:02 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 11 04:13:02 np0005481065 podman[97267]: 2025-10-11 08:13:02.407791834 +0000 UTC m=+0.067443501 container create 1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:13:02 np0005481065 systemd[1]: Started libpod-conmon-1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f.scope.
Oct 11 04:13:02 np0005481065 podman[97267]: 2025-10-11 08:13:02.381389488 +0000 UTC m=+0.041041195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1403cb558469fca937c52b70c36f6943a5f6cf3306b11619db3770653ca52b1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1403cb558469fca937c52b70c36f6943a5f6cf3306b11619db3770653ca52b1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1403cb558469fca937c52b70c36f6943a5f6cf3306b11619db3770653ca52b1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1403cb558469fca937c52b70c36f6943a5f6cf3306b11619db3770653ca52b1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1403cb558469fca937c52b70c36f6943a5f6cf3306b11619db3770653ca52b1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:02 np0005481065 podman[97267]: 2025-10-11 08:13:02.518208212 +0000 UTC m=+0.177859919 container init 1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:13:02 np0005481065 podman[97267]: 2025-10-11 08:13:02.530587367 +0000 UTC m=+0.190239004 container start 1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:02 np0005481065 podman[97267]: 2025-10-11 08:13:02.533898681 +0000 UTC m=+0.193550348 container attach 1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:03 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 11 04:13:03 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 11 04:13:03 np0005481065 python3[97364]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:13:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 11 04:13:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 11 04:13:03 np0005481065 ceph-mon[74313]: Saving service rgw.rgw spec with placement compute-0
Oct 11 04:13:03 np0005481065 python3[97449]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170382.9529395-33270-167002508703103/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:13:03 np0005481065 silly_banzai[97284]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:13:03 np0005481065 silly_banzai[97284]: --> relative data size: 1.0
Oct 11 04:13:03 np0005481065 silly_banzai[97284]: --> All data devices are unavailable
Oct 11 04:13:03 np0005481065 systemd[1]: libpod-1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f.scope: Deactivated successfully.
Oct 11 04:13:03 np0005481065 systemd[1]: libpod-1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f.scope: Consumed 1.171s CPU time.
Oct 11 04:13:03 np0005481065 podman[97267]: 2025-10-11 08:13:03.827978973 +0000 UTC m=+1.487630620 container died 1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1403cb558469fca937c52b70c36f6943a5f6cf3306b11619db3770653ca52b1c-merged.mount: Deactivated successfully.
Oct 11 04:13:03 np0005481065 podman[97267]: 2025-10-11 08:13:03.892219621 +0000 UTC m=+1.551871258 container remove 1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:03 np0005481065 systemd[1]: libpod-conmon-1b5e4230eda38eb3f5832e45b9566ed70885798c1be6982afcdd4b02c501230f.scope: Deactivated successfully.
Oct 11 04:13:04 np0005481065 python3[97598]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 11 04:13:04 np0005481065 podman[97624]: 2025-10-11 08:13:04.353087835 +0000 UTC m=+0.058611558 container create 68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb (image=quay.io/ceph/ceph:v18, name=focused_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:13:04 np0005481065 systemd[1]: Started libpod-conmon-68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb.scope.
Oct 11 04:13:04 np0005481065 podman[97624]: 2025-10-11 08:13:04.325244969 +0000 UTC m=+0.030768752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0511a90ed58de60ea72a0d34c905d106bdf9283f04a8e3a6edec9f27f433f150/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0511a90ed58de60ea72a0d34c905d106bdf9283f04a8e3a6edec9f27f433f150/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0511a90ed58de60ea72a0d34c905d106bdf9283f04a8e3a6edec9f27f433f150/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:04 np0005481065 podman[97624]: 2025-10-11 08:13:04.448137154 +0000 UTC m=+0.153660897 container init 68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb (image=quay.io/ceph/ceph:v18, name=focused_bardeen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:13:04 np0005481065 podman[97624]: 2025-10-11 08:13:04.453877599 +0000 UTC m=+0.159401282 container start 68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:13:04 np0005481065 podman[97624]: 2025-10-11 08:13:04.457481202 +0000 UTC m=+0.163004915 container attach 68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb (image=quay.io/ceph/ceph:v18, name=focused_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.643457802 +0000 UTC m=+0.066532724 container create f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:04 np0005481065 systemd[1]: Started libpod-conmon-f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c.scope.
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.615745669 +0000 UTC m=+0.038820651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.737597235 +0000 UTC m=+0.160672217 container init f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.74756341 +0000 UTC m=+0.170638332 container start f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.751417151 +0000 UTC m=+0.174492123 container attach f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:13:04 np0005481065 interesting_shtern[97697]: 167 167
Oct 11 04:13:04 np0005481065 systemd[1]: libpod-f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c.scope: Deactivated successfully.
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.75454158 +0000 UTC m=+0.177616512 container died f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:04 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 9 completed events
Oct 11 04:13:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:13:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2714d8d20286277b506e9f996478147bf79818240706fd4e98173874035855ea-merged.mount: Deactivated successfully.
Oct 11 04:13:04 np0005481065 podman[97680]: 2025-10-11 08:13:04.80592794 +0000 UTC m=+0.229002872 container remove f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:04 np0005481065 systemd[1]: libpod-conmon-f6dd348929910a3aa66f67784004005351a09b1c3bfdb815f31cb15858e1823c.scope: Deactivated successfully.
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=11.319311142s) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active pruub 75.860092163s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=11.319311142s) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown pruub 75.860092163s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:04 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:05 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 11 04:13:05 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0[74309]: 2025-10-11T08:13:05.020+0000 7fcf167ce640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e2 new map
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-11T08:13:05.020766+0000#012modified#0112025-10-11T08:13:05.020799+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 11 04:13:05 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 11 04:13:05 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:05 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=41/43 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:05 np0005481065 systemd[1]: libpod-68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb.scope: Deactivated successfully.
Oct 11 04:13:05 np0005481065 podman[97624]: 2025-10-11 08:13:05.0610822 +0000 UTC m=+0.766605913 container died 68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:05 np0005481065 podman[97740]: 2025-10-11 08:13:05.078344793 +0000 UTC m=+0.090193861 container create 955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:13:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0511a90ed58de60ea72a0d34c905d106bdf9283f04a8e3a6edec9f27f433f150-merged.mount: Deactivated successfully.
Oct 11 04:13:05 np0005481065 podman[97624]: 2025-10-11 08:13:05.115038813 +0000 UTC m=+0.820562496 container remove 68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb (image=quay.io/ceph/ceph:v18, name=focused_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:13:05 np0005481065 podman[97740]: 2025-10-11 08:13:05.027151449 +0000 UTC m=+0.039000547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:05 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 11 04:13:05 np0005481065 systemd[1]: Started libpod-conmon-955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1.scope.
Oct 11 04:13:05 np0005481065 systemd[1]: libpod-conmon-68c796802dfa33cc584801e17f1d2cd6af6607f09a977646ef5208be83b1afbb.scope: Deactivated successfully.
Oct 11 04:13:05 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 11 04:13:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46ad6741298c5ae3e640e7c0b64de40a8c53687826a3bab0358b59855b57e9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46ad6741298c5ae3e640e7c0b64de40a8c53687826a3bab0358b59855b57e9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46ad6741298c5ae3e640e7c0b64de40a8c53687826a3bab0358b59855b57e9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46ad6741298c5ae3e640e7c0b64de40a8c53687826a3bab0358b59855b57e9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 11 04:13:05 np0005481065 podman[97740]: 2025-10-11 08:13:05.193740915 +0000 UTC m=+0.205589943 container init 955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gauss, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:05 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 11 04:13:05 np0005481065 podman[97740]: 2025-10-11 08:13:05.209179356 +0000 UTC m=+0.221028394 container start 955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:13:05 np0005481065 podman[97740]: 2025-10-11 08:13:05.212860842 +0000 UTC m=+0.224709880 container attach 955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:13:05 np0005481065 python3[97799]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:05 np0005481065 podman[97800]: 2025-10-11 08:13:05.510256349 +0000 UTC m=+0.070623852 container create 35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813 (image=quay.io/ceph/ceph:v18, name=boring_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:13:05 np0005481065 systemd[1]: Started libpod-conmon-35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813.scope.
Oct 11 04:13:05 np0005481065 podman[97800]: 2025-10-11 08:13:05.480379184 +0000 UTC m=+0.040746747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c13652bc6d91568486cff31f7d313d2ca47e5476c4ddc0fd29b184a83980ec4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c13652bc6d91568486cff31f7d313d2ca47e5476c4ddc0fd29b184a83980ec4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c13652bc6d91568486cff31f7d313d2ca47e5476c4ddc0fd29b184a83980ec4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:05 np0005481065 podman[97800]: 2025-10-11 08:13:05.630300123 +0000 UTC m=+0.190667656 container init 35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813 (image=quay.io/ceph/ceph:v18, name=boring_bohr, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:13:05 np0005481065 podman[97800]: 2025-10-11 08:13:05.63790203 +0000 UTC m=+0.198269533 container start 35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813 (image=quay.io/ceph/ceph:v18, name=boring_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:13:05 np0005481065 podman[97800]: 2025-10-11 08:13:05.641409551 +0000 UTC m=+0.201777104 container attach 35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813 (image=quay.io/ceph/ceph:v18, name=boring_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 11 04:13:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]: {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:    "0": [
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:        {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "devices": [
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "/dev/loop3"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            ],
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_name": "ceph_lv0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_size": "21470642176",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "name": "ceph_lv0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "tags": {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.crush_device_class": "",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.encrypted": "0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osd_id": "0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.type": "block",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.vdo": "0"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            },
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "type": "block",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "vg_name": "ceph_vg0"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:        }
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:    ],
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:    "1": [
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:        {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "devices": [
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "/dev/loop4"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            ],
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_name": "ceph_lv1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_size": "21470642176",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "name": "ceph_lv1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "tags": {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.crush_device_class": "",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.encrypted": "0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osd_id": "1",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.type": "block",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.vdo": "0"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            },
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "type": "block",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "vg_name": "ceph_vg1"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:        }
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:    ],
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:    "2": [
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:        {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "devices": [
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "/dev/loop5"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            ],
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_name": "ceph_lv2",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_size": "21470642176",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "name": "ceph_lv2",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "tags": {
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.crush_device_class": "",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.encrypted": "0",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osd_id": "2",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.type": "block",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:                "ceph.vdo": "0"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            },
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "type": "block",
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:            "vg_name": "ceph_vg2"
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:        }
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]:    ]
Oct 11 04:13:05 np0005481065 pensive_gauss[97769]: }
Oct 11 04:13:06 np0005481065 systemd[1]: libpod-955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1.scope: Deactivated successfully.
Oct 11 04:13:06 np0005481065 podman[97740]: 2025-10-11 08:13:06.001152832 +0000 UTC m=+1.013001900 container died 955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:13:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a46ad6741298c5ae3e640e7c0b64de40a8c53687826a3bab0358b59855b57e9c-merged.mount: Deactivated successfully.
Oct 11 04:13:06 np0005481065 podman[97740]: 2025-10-11 08:13:06.078915677 +0000 UTC m=+1.090764715 container remove 955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:06 np0005481065 systemd[1]: libpod-conmon-955f5503f2ed0be27cff2b2b65154d65414d7f99991e21fe4c79ffc6bea987a1.scope: Deactivated successfully.
Oct 11 04:13:06 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 04:13:06 np0005481065 ceph-mgr[74605]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 11 04:13:06 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 11 04:13:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:13:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:06 np0005481065 boring_bohr[97816]: Scheduled mds.cephfs update...
Oct 11 04:13:06 np0005481065 systemd[1]: libpod-35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813.scope: Deactivated successfully.
Oct 11 04:13:06 np0005481065 podman[97800]: 2025-10-11 08:13:06.278540488 +0000 UTC m=+0.838907971 container died 35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813 (image=quay.io/ceph/ceph:v18, name=boring_bohr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9c13652bc6d91568486cff31f7d313d2ca47e5476c4ddc0fd29b184a83980ec4-merged.mount: Deactivated successfully.
Oct 11 04:13:06 np0005481065 podman[97800]: 2025-10-11 08:13:06.332423849 +0000 UTC m=+0.892791333 container remove 35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813 (image=quay.io/ceph/ceph:v18, name=boring_bohr, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:06 np0005481065 systemd[1]: libpod-conmon-35ec5f0368eaff35f733045ff858c101cd0ed19eb3acd26deff0c33b0dedb813.scope: Deactivated successfully.
Oct 11 04:13:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:06 np0005481065 ceph-mon[74313]: Saving service mds.cephfs spec with placement compute-0
Oct 11 04:13:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:06 np0005481065 podman[98013]: 2025-10-11 08:13:06.845011904 +0000 UTC m=+0.069213091 container create 9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:13:06 np0005481065 systemd[1]: Started libpod-conmon-9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b.scope.
Oct 11 04:13:06 np0005481065 podman[98013]: 2025-10-11 08:13:06.815984033 +0000 UTC m=+0.040185270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:06 np0005481065 podman[98013]: 2025-10-11 08:13:06.938966032 +0000 UTC m=+0.163167209 container init 9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:13:06 np0005481065 podman[98013]: 2025-10-11 08:13:06.950967825 +0000 UTC m=+0.175169012 container start 9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:06 np0005481065 podman[98013]: 2025-10-11 08:13:06.954270499 +0000 UTC m=+0.178471666 container attach 9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:13:06 np0005481065 wizardly_keller[98070]: 167 167
Oct 11 04:13:06 np0005481065 systemd[1]: libpod-9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b.scope: Deactivated successfully.
Oct 11 04:13:06 np0005481065 podman[98013]: 2025-10-11 08:13:06.959385966 +0000 UTC m=+0.183587173 container died 9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ca08525d01f9d247becee601e4bf7988f4a1f399d83bdef69211577fb57eba50-merged.mount: Deactivated successfully.
Oct 11 04:13:07 np0005481065 podman[98013]: 2025-10-11 08:13:07.016131399 +0000 UTC m=+0.240332576 container remove 9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:13:07 np0005481065 systemd[1]: libpod-conmon-9999cee08522b0b443bebefe7d28ed14dcb83c9b97001fcee14152ebd92d319b.scope: Deactivated successfully.
Oct 11 04:13:07 np0005481065 python3[98121]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 11 04:13:07 np0005481065 podman[98127]: 2025-10-11 08:13:07.2314727 +0000 UTC m=+0.058684760 container create 339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_austin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:07 np0005481065 systemd[1]: Started libpod-conmon-339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb.scope.
Oct 11 04:13:07 np0005481065 podman[98127]: 2025-10-11 08:13:07.203478029 +0000 UTC m=+0.030690139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96bd2741cc8a5ee31e6126cbcaa1690f2b754ad5d094d35c926436e851959f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96bd2741cc8a5ee31e6126cbcaa1690f2b754ad5d094d35c926436e851959f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96bd2741cc8a5ee31e6126cbcaa1690f2b754ad5d094d35c926436e851959f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96bd2741cc8a5ee31e6126cbcaa1690f2b754ad5d094d35c926436e851959f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:07 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 11 04:13:07 np0005481065 podman[98127]: 2025-10-11 08:13:07.350637039 +0000 UTC m=+0.177849149 container init 339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_austin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:07 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 11 04:13:07 np0005481065 podman[98127]: 2025-10-11 08:13:07.364588078 +0000 UTC m=+0.191800148 container start 339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:13:07 np0005481065 podman[98127]: 2025-10-11 08:13:07.369001444 +0000 UTC m=+0.196213504 container attach 339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:13:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:07 np0005481065 python3[98220]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170386.8067513-33300-96194318095576/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=9ad13f6ebe2933a18906bbe3c21d5ee6a97cfb73 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:13:07 np0005481065 ceph-mon[74313]: Saving service mds.cephfs spec with placement compute-0
Oct 11 04:13:08 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Oct 11 04:13:08 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Oct 11 04:13:08 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 11 04:13:08 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 11 04:13:08 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Oct 11 04:13:08 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Oct 11 04:13:08 np0005481065 python3[98281]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:08 np0005481065 podman[98288]: 2025-10-11 08:13:08.397293162 +0000 UTC m=+0.049496777 container create 8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278 (image=quay.io/ceph/ceph:v18, name=modest_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:08 np0005481065 systemd[1]: Started libpod-conmon-8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278.scope.
Oct 11 04:13:08 np0005481065 magical_austin[98146]: {
Oct 11 04:13:08 np0005481065 magical_austin[98146]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "osd_id": 2,
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "type": "bluestore"
Oct 11 04:13:08 np0005481065 magical_austin[98146]:    },
Oct 11 04:13:08 np0005481065 magical_austin[98146]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "osd_id": 0,
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "type": "bluestore"
Oct 11 04:13:08 np0005481065 magical_austin[98146]:    },
Oct 11 04:13:08 np0005481065 magical_austin[98146]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "osd_id": 1,
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:08 np0005481065 magical_austin[98146]:        "type": "bluestore"
Oct 11 04:13:08 np0005481065 magical_austin[98146]:    }
Oct 11 04:13:08 np0005481065 magical_austin[98146]: }
Oct 11 04:13:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa08f3215d7619f69dcc863a740a9e94cb602e037429ab3ca114cc23b054cb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa08f3215d7619f69dcc863a740a9e94cb602e037429ab3ca114cc23b054cb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:08 np0005481065 podman[98288]: 2025-10-11 08:13:08.383115486 +0000 UTC m=+0.035319131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:08 np0005481065 podman[98288]: 2025-10-11 08:13:08.490111437 +0000 UTC m=+0.142315162 container init 8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278 (image=quay.io/ceph/ceph:v18, name=modest_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:08 np0005481065 podman[98288]: 2025-10-11 08:13:08.500962788 +0000 UTC m=+0.153166463 container start 8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278 (image=quay.io/ceph/ceph:v18, name=modest_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:13:08 np0005481065 systemd[1]: libpod-339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb.scope: Deactivated successfully.
Oct 11 04:13:08 np0005481065 systemd[1]: libpod-339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb.scope: Consumed 1.140s CPU time.
Oct 11 04:13:08 np0005481065 podman[98288]: 2025-10-11 08:13:08.505490797 +0000 UTC m=+0.157694462 container attach 8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278 (image=quay.io/ceph/ceph:v18, name=modest_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:08 np0005481065 podman[98127]: 2025-10-11 08:13:08.506773504 +0000 UTC m=+1.333985584 container died 339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_austin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:13:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c96bd2741cc8a5ee31e6126cbcaa1690f2b754ad5d094d35c926436e851959f6-merged.mount: Deactivated successfully.
Oct 11 04:13:08 np0005481065 podman[98127]: 2025-10-11 08:13:08.572065432 +0000 UTC m=+1.399277462 container remove 339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:08 np0005481065 systemd[1]: libpod-conmon-339a4542e58e3a029a9adcdf74c0ba353b87dae2f65513ffbed490cf79efeedb.scope: Deactivated successfully.
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/263339610' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/263339610' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 11 04:13:09 np0005481065 systemd[1]: libpod-8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278.scope: Deactivated successfully.
Oct 11 04:13:09 np0005481065 podman[98288]: 2025-10-11 08:13:09.122765225 +0000 UTC m=+0.774968890 container died 8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278 (image=quay.io/ceph/ceph:v18, name=modest_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:13:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4fa08f3215d7619f69dcc863a740a9e94cb602e037429ab3ca114cc23b054cb3-merged.mount: Deactivated successfully.
Oct 11 04:13:09 np0005481065 podman[98288]: 2025-10-11 08:13:09.184547563 +0000 UTC m=+0.836751228 container remove 8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278 (image=quay.io/ceph/ceph:v18, name=modest_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:09 np0005481065 systemd[1]: libpod-conmon-8aed1fc18944dfab5f562005d90a09bc56abe48d288fdeea54e0795b68673278.scope: Deactivated successfully.
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.895099640s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.504966736s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.913611412s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523559570s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.907359123s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.517318726s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.894985199s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.504966736s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.913538933s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523559570s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.907177925s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.517272949s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.907165527s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.517318726s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.894721031s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.504890442s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.907117844s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.517272949s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884453773s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494735718s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.894662857s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.504890442s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.914269447s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524589539s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884422302s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494735718s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884324074s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494743347s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884292603s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494743347s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884201050s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494743347s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.913035393s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523582458s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884201050s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494750977s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884172440s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494743347s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.914232254s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524589539s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912969589s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523582458s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884057045s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494735718s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884085655s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494750977s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912883759s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523597717s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.884027481s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494735718s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912858009s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523597717s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883681297s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494537354s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883781433s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494697571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912901878s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523826599s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912717819s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523651123s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883755684s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494697571s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912872314s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523826599s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912671089s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523651123s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883546829s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494537354s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912690163s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523788452s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912649155s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523788452s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883172035s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494361877s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883140564s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494361877s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883087158s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494361877s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912648201s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524002075s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882955551s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494346619s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912489891s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.523887634s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912678719s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524101257s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.883033752s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494361877s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882925987s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494346619s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912596703s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524002075s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912453651s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.523887634s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912649155s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524101257s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912442207s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524032593s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882642746s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494255066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912413597s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524032593s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882616043s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494255066s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882669449s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494338989s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882987022s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494705200s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882624626s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494338989s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882345200s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494140625s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882545471s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494346619s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882934570s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494705200s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912322998s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524139404s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882515907s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494346619s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912284851s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524139404s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882307053s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494140625s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912268639s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524246216s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912243843s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524246216s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912437439s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524543762s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.881948471s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494041443s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.882002831s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.494110107s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.892783165s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 89.504981995s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912367821s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524543762s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.881955147s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494110107s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.881882668s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.494041443s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912302971s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524528503s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.892747879s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.504981995s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912273407s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524528503s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912220001s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524589539s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912189484s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524589539s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912165642s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 83.524627686s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=8.912098885s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.524627686s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847182274s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280158997s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.876011848s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.308998108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847137451s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280158997s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847072601s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280158997s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.875942230s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.308998108s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847029686s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280158997s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.846888542s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280227661s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847011566s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280364990s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.846984863s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280364990s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847670555s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280990601s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.846804619s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280227661s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.875292778s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.308853149s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.875266075s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.308853149s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.846599579s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280334473s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.875298500s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309043884s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.846557617s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280334473s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.874948502s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309013367s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.865334511s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.299659729s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.847515106s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280990601s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.845302582s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280426025s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.845273972s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280426025s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.875268936s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309043884s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873473167s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.308845520s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873803139s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309188843s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873412132s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.308845520s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873738289s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309188843s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844882011s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280448914s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844858170s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280448914s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873597145s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309219360s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873571396s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309219360s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844726562s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280548096s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844695091s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280548096s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844499588s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280448914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844467163s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280448914s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873180389s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309242249s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.873155594s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309242249s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844465256s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280769348s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.865283966s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.299659729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844434738s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280769348s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872742653s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309326172s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.844049454s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280639648s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872691154s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309326172s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843892097s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280639648s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872446060s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309288025s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872416496s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309288025s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843871117s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280807495s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843842506s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280807495s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872256279s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309303284s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.357878685s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.651916504s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.357834816s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.651916504s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872227669s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309303284s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843562126s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280670166s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843535423s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280670166s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872066498s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309333801s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.872036934s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309333801s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843445778s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280769348s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843417168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280769348s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843265533s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280792236s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871805191s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309341431s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.843238831s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280792236s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.830574989s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125633240s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.830408096s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125541687s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.830535889s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125633240s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.361083984s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656311035s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.830308914s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125541687s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.361025810s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656311035s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.830098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125564575s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.830068588s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125564575s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.361576080s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657119751s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.361536026s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657119751s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829689026s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125396729s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829659462s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125396729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829560280s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125396729s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829515457s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125396729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829436302s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125396729s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360552788s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656547546s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829454422s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125473022s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360503197s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656547546s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829405785s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125473022s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829339027s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125442505s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829300880s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125442505s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360579491s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656730652s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360490799s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656730652s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360414505s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656768799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.829042435s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125411987s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360308647s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656677246s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360387802s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656768799s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828999519s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125411987s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360266685s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656677246s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360220909s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656768799s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360219002s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656806946s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360192299s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656806946s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360172272s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656768799s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360467911s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657119751s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828546524s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125328064s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828521729s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125328064s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.360428810s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657119751s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828184128s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125144958s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828160286s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125144958s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871769905s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309341431s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871689796s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309394836s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871665001s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309394836s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871345520s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309226990s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.874906540s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309013367s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842839241s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280815125s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842810631s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280815125s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842854500s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280998230s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871241570s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309417725s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842813492s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280998230s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.871212959s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309417725s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870912552s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309226990s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842643738s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.281005859s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842609406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280982971s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842617035s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.281005859s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842539787s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280982971s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870997429s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309562683s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870972633s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309562683s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842062950s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280700684s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870808601s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309448242s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870760918s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309448242s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870799065s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.309539795s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842262268s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280990601s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=14.870771408s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.309539795s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842208862s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280990601s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.841885567s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280700684s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.842989922s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.280807495s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/39 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=13.835576057s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.280807495s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359894753s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656890869s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359844208s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656890869s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828063011s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125160217s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828009605s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125160217s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.828194618s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125396729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.827692032s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125022888s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.827666283s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125022888s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359514236s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656936646s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359479904s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656936646s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.827357292s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.124923706s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.827331543s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.124923706s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359285355s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656944275s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359252930s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656944275s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.827275276s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125045776s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.827251434s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125045776s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359064102s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656974792s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826982498s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.124946594s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.359030724s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656974792s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826957703s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.124946594s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826892853s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.125022888s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826870918s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.125022888s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358801842s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657005310s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358762741s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657005310s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358670235s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657012939s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358644485s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657012939s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826392174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.124916077s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358461380s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657051086s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826357841s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.124916077s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358411789s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657051086s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358139038s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.656860352s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358118057s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.656860352s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826075554s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.124885559s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.826041222s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.124885559s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358166695s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657081604s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358142853s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657081604s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358052254s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.657089233s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.825798035s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.124885559s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=41/43 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44 pruub=11.358014107s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.657089233s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.825754166s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.124885559s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.815165520s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 82.121376038s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.815130234s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.121376038s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:09 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 89dd59be-f11a-43d0-8686-e3618f28d88e (Global Recovery Event) in 10 seconds
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/263339610' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/263339610' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:13:09 np0005481065 podman[98588]: 2025-10-11 08:13:09.829212326 +0000 UTC m=+0.052980997 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:09 np0005481065 podman[98588]: 2025-10-11 08:13:09.957204087 +0000 UTC m=+0.180972788 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:13:10 np0005481065 python3[98631]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.107369033 +0000 UTC m=+0.073095722 container create 70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445 (image=quay.io/ceph/ceph:v18, name=tender_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 11 04:13:10 np0005481065 systemd[1]: Started libpod-conmon-70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445.scope.
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.075904973 +0000 UTC m=+0.041631712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e11adc2ded0c336f752df019e2cad33f81ed30a5a3c29ef1b5c8e364c586d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e11adc2ded0c336f752df019e2cad33f81ed30a5a3c29ef1b5c8e364c586d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.206561031 +0000 UTC m=+0.172287750 container init 70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445 (image=quay.io/ceph/ceph:v18, name=tender_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.214196409 +0000 UTC m=+0.179923068 container start 70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445 (image=quay.io/ceph/ceph:v18, name=tender_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.217868354 +0000 UTC m=+0.183595023 container attach 70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445 (image=quay.io/ceph/ceph:v18, name=tender_shaw, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 186630d5-4859-476e-bd6b-7488864758fc does not exist
Oct 11 04:13:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4a08d2a3-4f85-4fd5-ae78-cb5cf23add5f does not exist
Oct 11 04:13:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3cbb835d-6dd7-49d4-8bf0-815749cb84e9 does not exist
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 11 04:13:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:13:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/892901286' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:13:10 np0005481065 tender_shaw[98693]: 
Oct 11 04:13:10 np0005481065 tender_shaw[98693]: {"fsid":"33219f8b-dc38-5a8f-a577-8ccc4b37190a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":183,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1760170331,"num_in_osds":3,"osd_in_since":1760170303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84213760,"bytes_avail":64327712768,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-10-11T08:13:08.682229+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"89dd59be-f11a-43d0-8686-e3618f28d88e":{"message":"Global Recovery Event (5s)\n      [======================......] (remaining: 1s)","progress":0.81761008501052856,"add_to_ceph_s":true}}}
Oct 11 04:13:10 np0005481065 systemd[1]: libpod-70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445.scope: Deactivated successfully.
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.881084928 +0000 UTC m=+0.846811617 container died 70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445 (image=quay.io/ceph/ceph:v18, name=tender_shaw, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:13:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-33e11adc2ded0c336f752df019e2cad33f81ed30a5a3c29ef1b5c8e364c586d2-merged.mount: Deactivated successfully.
Oct 11 04:13:10 np0005481065 podman[98652]: 2025-10-11 08:13:10.932049026 +0000 UTC m=+0.897775685 container remove 70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445 (image=quay.io/ceph/ceph:v18, name=tender_shaw, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:13:10 np0005481065 systemd[1]: libpod-conmon-70a48c439676d650b280b56ff1c069a3d36822ae700d4c6a825270dd90038445.scope: Deactivated successfully.
Oct 11 04:13:11 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 11 04:13:11 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 11 04:13:11 np0005481065 python3[98937]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.377437168 +0000 UTC m=+0.054629494 container create 8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:13:11 np0005481065 systemd[1]: Started libpod-conmon-8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac.scope.
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.348794518 +0000 UTC m=+0.025986904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:11 np0005481065 podman[98968]: 2025-10-11 08:13:11.442952572 +0000 UTC m=+0.082682917 container create 69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902 (image=quay.io/ceph/ceph:v18, name=practical_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:11 np0005481065 systemd[1]: Started libpod-conmon-69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902.scope.
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.478445917 +0000 UTC m=+0.155638283 container init 8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.485937532 +0000 UTC m=+0.163129838 container start 8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:13:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923f4d85300c264d1f673147a11abe0a2754cc8a6785faa5ae6212be7b999110/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923f4d85300c264d1f673147a11abe0a2754cc8a6785faa5ae6212be7b999110/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 brave_bell[98987]: 167 167
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.493392875 +0000 UTC m=+0.170585211 container attach 8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:13:11 np0005481065 systemd[1]: libpod-8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac.scope: Deactivated successfully.
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.495597728 +0000 UTC m=+0.172790064 container died 8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:11 np0005481065 podman[98968]: 2025-10-11 08:13:11.408926998 +0000 UTC m=+0.048657413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:11 np0005481065 podman[98968]: 2025-10-11 08:13:11.510026481 +0000 UTC m=+0.149756816 container init 69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902 (image=quay.io/ceph/ceph:v18, name=practical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:13:11 np0005481065 podman[98968]: 2025-10-11 08:13:11.521459618 +0000 UTC m=+0.161189943 container start 69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902 (image=quay.io/ceph/ceph:v18, name=practical_satoshi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:13:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5e87fd8446093102582d9804220b74c4e3a698a537b6f103fee856cb453f97c9-merged.mount: Deactivated successfully.
Oct 11 04:13:11 np0005481065 podman[98968]: 2025-10-11 08:13:11.528569941 +0000 UTC m=+0.168300266 container attach 69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902 (image=quay.io/ceph/ceph:v18, name=practical_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:13:11 np0005481065 podman[98958]: 2025-10-11 08:13:11.542025796 +0000 UTC m=+0.219218102 container remove 8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:11 np0005481065 systemd[1]: libpod-conmon-8d489c2bed8552577eb6b1a6dfa9d77a9215d1cc7c5c499a82389990472d12ac.scope: Deactivated successfully.
Oct 11 04:13:11 np0005481065 podman[99018]: 2025-10-11 08:13:11.761616978 +0000 UTC m=+0.065569827 container create 94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:11 np0005481065 systemd[1]: Started libpod-conmon-94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a.scope.
Oct 11 04:13:11 np0005481065 podman[99018]: 2025-10-11 08:13:11.733097532 +0000 UTC m=+0.037050441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcfcb6b3b33607eb7328053c2e94fb2dfbb2a3d2bc0484096b5e7aca29e3c00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcfcb6b3b33607eb7328053c2e94fb2dfbb2a3d2bc0484096b5e7aca29e3c00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcfcb6b3b33607eb7328053c2e94fb2dfbb2a3d2bc0484096b5e7aca29e3c00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcfcb6b3b33607eb7328053c2e94fb2dfbb2a3d2bc0484096b5e7aca29e3c00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcfcb6b3b33607eb7328053c2e94fb2dfbb2a3d2bc0484096b5e7aca29e3c00/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:11 np0005481065 podman[99018]: 2025-10-11 08:13:11.866364095 +0000 UTC m=+0.170316984 container init 94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:13:11 np0005481065 podman[99018]: 2025-10-11 08:13:11.880665994 +0000 UTC m=+0.184618843 container start 94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:11 np0005481065 podman[99018]: 2025-10-11 08:13:11.88506157 +0000 UTC m=+0.189014419 container attach 94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:12 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 11 04:13:12 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 11 04:13:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:13:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3834205974' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:13:12 np0005481065 practical_satoshi[98992]: 
Oct 11 04:13:12 np0005481065 practical_satoshi[98992]: {"epoch":1,"fsid":"33219f8b-dc38-5a8f-a577-8ccc4b37190a","modified":"2025-10-11T08:10:02.093625Z","created":"2025-10-11T08:10:02.093625Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 11 04:13:12 np0005481065 practical_satoshi[98992]: dumped monmap epoch 1
Oct 11 04:13:12 np0005481065 systemd[1]: libpod-69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902.scope: Deactivated successfully.
Oct 11 04:13:12 np0005481065 podman[98968]: 2025-10-11 08:13:12.170747823 +0000 UTC m=+0.810478178 container died 69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902 (image=quay.io/ceph/ceph:v18, name=practical_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:13:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-923f4d85300c264d1f673147a11abe0a2754cc8a6785faa5ae6212be7b999110-merged.mount: Deactivated successfully.
Oct 11 04:13:12 np0005481065 podman[98968]: 2025-10-11 08:13:12.230555474 +0000 UTC m=+0.870285799 container remove 69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902 (image=quay.io/ceph/ceph:v18, name=practical_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:13:12 np0005481065 systemd[1]: libpod-conmon-69810319585d80fabd91bd2e31f5af74a8318aafff5790ef969c8dbf9c7c0902.scope: Deactivated successfully.
Oct 11 04:13:12 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 11 04:13:12 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 11 04:13:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:12 np0005481065 python3[99110]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:12 np0005481065 podman[99118]: 2025-10-11 08:13:12.95297443 +0000 UTC m=+0.081700108 container create fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7 (image=quay.io/ceph/ceph:v18, name=sweet_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:13:13 np0005481065 systemd[1]: Started libpod-conmon-fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7.scope.
Oct 11 04:13:13 np0005481065 dreamy_lewin[99037]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:13:13 np0005481065 dreamy_lewin[99037]: --> relative data size: 1.0
Oct 11 04:13:13 np0005481065 dreamy_lewin[99037]: --> All data devices are unavailable
Oct 11 04:13:13 np0005481065 podman[99118]: 2025-10-11 08:13:12.919144062 +0000 UTC m=+0.047869800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:13 np0005481065 systemd[1]: libpod-94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a.scope: Deactivated successfully.
Oct 11 04:13:13 np0005481065 systemd[1]: libpod-94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a.scope: Consumed 1.105s CPU time.
Oct 11 04:13:13 np0005481065 podman[99018]: 2025-10-11 08:13:13.046941918 +0000 UTC m=+1.350894807 container died 94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:13:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7366157aacdd0a09450ca613f650b6b39717e9a96bd42c78d2b9333c11d335/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7366157aacdd0a09450ca613f650b6b39717e9a96bd42c78d2b9333c11d335/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:13 np0005481065 podman[99118]: 2025-10-11 08:13:13.078673246 +0000 UTC m=+0.207398974 container init fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7 (image=quay.io/ceph/ceph:v18, name=sweet_lehmann, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dbcfcb6b3b33607eb7328053c2e94fb2dfbb2a3d2bc0484096b5e7aca29e3c00-merged.mount: Deactivated successfully.
Oct 11 04:13:13 np0005481065 podman[99118]: 2025-10-11 08:13:13.092844251 +0000 UTC m=+0.221569929 container start fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7 (image=quay.io/ceph/ceph:v18, name=sweet_lehmann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:13:13 np0005481065 podman[99118]: 2025-10-11 08:13:13.096711472 +0000 UTC m=+0.225437150 container attach fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7 (image=quay.io/ceph/ceph:v18, name=sweet_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 11 04:13:13 np0005481065 podman[99018]: 2025-10-11 08:13:13.144406096 +0000 UTC m=+1.448358955 container remove 94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:13 np0005481065 systemd[1]: libpod-conmon-94c5e99012b7878b755452e54ef93b881388168275959711f33540c7168e286a.scope: Deactivated successfully.
Oct 11 04:13:13 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 11 04:13:13 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 11 04:13:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 11 04:13:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3655669632' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 11 04:13:13 np0005481065 sweet_lehmann[99140]: [client.openstack]
Oct 11 04:13:13 np0005481065 sweet_lehmann[99140]: #011key = AQC/EOpoAAAAABAAbjRLTNBPTj6MQLghfQxMew==
Oct 11 04:13:13 np0005481065 sweet_lehmann[99140]: #011caps mgr = "allow *"
Oct 11 04:13:13 np0005481065 sweet_lehmann[99140]: #011caps mon = "profile rbd"
Oct 11 04:13:13 np0005481065 sweet_lehmann[99140]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 11 04:13:13 np0005481065 systemd[1]: libpod-fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7.scope: Deactivated successfully.
Oct 11 04:13:13 np0005481065 podman[99118]: 2025-10-11 08:13:13.780279518 +0000 UTC m=+0.909005196 container died fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7 (image=quay.io/ceph/ceph:v18, name=sweet_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ff7366157aacdd0a09450ca613f650b6b39717e9a96bd42c78d2b9333c11d335-merged.mount: Deactivated successfully.
Oct 11 04:13:13 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3655669632' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 11 04:13:13 np0005481065 podman[99118]: 2025-10-11 08:13:13.836572368 +0000 UTC m=+0.965298036 container remove fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7 (image=quay.io/ceph/ceph:v18, name=sweet_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:13 np0005481065 systemd[1]: libpod-conmon-fa255a28a98fdeeffb248b066d449d6e684a5c2e4004ce65a046db55244acdd7.scope: Deactivated successfully.
Oct 11 04:13:13 np0005481065 podman[99326]: 2025-10-11 08:13:13.975925645 +0000 UTC m=+0.067226795 container create a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:13:14 np0005481065 systemd[1]: Started libpod-conmon-a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe.scope.
Oct 11 04:13:14 np0005481065 podman[99326]: 2025-10-11 08:13:13.946700909 +0000 UTC m=+0.038002109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:14 np0005481065 podman[99326]: 2025-10-11 08:13:14.07119796 +0000 UTC m=+0.162499110 container init a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:13:14 np0005481065 podman[99326]: 2025-10-11 08:13:14.08166772 +0000 UTC m=+0.172968840 container start a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:13:14 np0005481065 podman[99326]: 2025-10-11 08:13:14.085040996 +0000 UTC m=+0.176342136 container attach a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:14 np0005481065 blissful_rhodes[99343]: 167 167
Oct 11 04:13:14 np0005481065 systemd[1]: libpod-a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe.scope: Deactivated successfully.
Oct 11 04:13:14 np0005481065 podman[99326]: 2025-10-11 08:13:14.089500564 +0000 UTC m=+0.180801684 container died a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-46922cda51462ca7a10ab4122c9ac02e2b7f920c4f141c15f777e6fa0cdca218-merged.mount: Deactivated successfully.
Oct 11 04:13:14 np0005481065 podman[99326]: 2025-10-11 08:13:14.129404895 +0000 UTC m=+0.220706015 container remove a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:14 np0005481065 systemd[1]: libpod-conmon-a152e1dbe847ce590a6b6d41cdeefb89be9a09bc43840f034aeb9b8326ce67fe.scope: Deactivated successfully.
Oct 11 04:13:14 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 11 04:13:14 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 11 04:13:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 11 04:13:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 11 04:13:14 np0005481065 podman[99366]: 2025-10-11 08:13:14.354361061 +0000 UTC m=+0.070340043 container create 538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:13:14 np0005481065 systemd[1]: Started libpod-conmon-538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f.scope.
Oct 11 04:13:14 np0005481065 podman[99366]: 2025-10-11 08:13:14.323612591 +0000 UTC m=+0.039591623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475134159d9bb367954062c40fff5457a4354b045130509dbe45cc1d6b333d14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475134159d9bb367954062c40fff5457a4354b045130509dbe45cc1d6b333d14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475134159d9bb367954062c40fff5457a4354b045130509dbe45cc1d6b333d14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475134159d9bb367954062c40fff5457a4354b045130509dbe45cc1d6b333d14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:14 np0005481065 podman[99366]: 2025-10-11 08:13:14.465364147 +0000 UTC m=+0.181343179 container init 538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:14 np0005481065 podman[99366]: 2025-10-11 08:13:14.481442507 +0000 UTC m=+0.197421489 container start 538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:14 np0005481065 podman[99366]: 2025-10-11 08:13:14.485612646 +0000 UTC m=+0.201591668 container attach 538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:13:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:14 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 10 completed events
Oct 11 04:13:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:13:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:15 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 11 04:13:15 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 11 04:13:15 np0005481065 great_thompson[99383]: {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:    "0": [
Oct 11 04:13:15 np0005481065 great_thompson[99383]:        {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "devices": [
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "/dev/loop3"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            ],
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_name": "ceph_lv0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_size": "21470642176",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "name": "ceph_lv0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "tags": {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.crush_device_class": "",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.encrypted": "0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osd_id": "0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.type": "block",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.vdo": "0"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            },
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "type": "block",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "vg_name": "ceph_vg0"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:        }
Oct 11 04:13:15 np0005481065 great_thompson[99383]:    ],
Oct 11 04:13:15 np0005481065 great_thompson[99383]:    "1": [
Oct 11 04:13:15 np0005481065 great_thompson[99383]:        {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "devices": [
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "/dev/loop4"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            ],
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_name": "ceph_lv1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_size": "21470642176",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "name": "ceph_lv1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "tags": {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.crush_device_class": "",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.encrypted": "0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osd_id": "1",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.type": "block",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.vdo": "0"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            },
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "type": "block",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "vg_name": "ceph_vg1"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:        }
Oct 11 04:13:15 np0005481065 great_thompson[99383]:    ],
Oct 11 04:13:15 np0005481065 great_thompson[99383]:    "2": [
Oct 11 04:13:15 np0005481065 great_thompson[99383]:        {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "devices": [
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "/dev/loop5"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            ],
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_name": "ceph_lv2",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_size": "21470642176",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "name": "ceph_lv2",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "tags": {
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.crush_device_class": "",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.encrypted": "0",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osd_id": "2",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.type": "block",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:                "ceph.vdo": "0"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            },
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "type": "block",
Oct 11 04:13:15 np0005481065 great_thompson[99383]:            "vg_name": "ceph_vg2"
Oct 11 04:13:15 np0005481065 great_thompson[99383]:        }
Oct 11 04:13:15 np0005481065 great_thompson[99383]:    ]
Oct 11 04:13:15 np0005481065 great_thompson[99383]: }
Oct 11 04:13:15 np0005481065 systemd[1]: libpod-538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f.scope: Deactivated successfully.
Oct 11 04:13:15 np0005481065 podman[99366]: 2025-10-11 08:13:15.228294983 +0000 UTC m=+0.944273965 container died 538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-475134159d9bb367954062c40fff5457a4354b045130509dbe45cc1d6b333d14-merged.mount: Deactivated successfully.
Oct 11 04:13:15 np0005481065 podman[99366]: 2025-10-11 08:13:15.303869825 +0000 UTC m=+1.019848777 container remove 538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_thompson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:15 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 11 04:13:15 np0005481065 systemd[1]: libpod-conmon-538145db3d7cc713e577b7aa3861e5b3744e383df36d25bbb0e97e75b166274f.scope: Deactivated successfully.
Oct 11 04:13:15 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 11 04:13:15 np0005481065 ansible-async_wrapper.py[99600]: Invoked with j508475184611 30 /home/zuul/.ansible/tmp/ansible-tmp-1760170394.9914746-33372-177619891193444/AnsiballZ_command.py _
Oct 11 04:13:15 np0005481065 ansible-async_wrapper.py[99657]: Starting module and watcher
Oct 11 04:13:15 np0005481065 ansible-async_wrapper.py[99657]: Start watching 99658 (30)
Oct 11 04:13:15 np0005481065 ansible-async_wrapper.py[99658]: Start module (99658)
Oct 11 04:13:15 np0005481065 ansible-async_wrapper.py[99600]: Return async_wrapper task started.
Oct 11 04:13:15 np0005481065 python3[99660]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:15 np0005481065 podman[99673]: 2025-10-11 08:13:15.832192789 +0000 UTC m=+0.056978931 container create 0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6 (image=quay.io/ceph/ceph:v18, name=brave_torvalds, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:13:15 np0005481065 systemd[1]: Started libpod-conmon-0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6.scope.
Oct 11 04:13:15 np0005481065 podman[99673]: 2025-10-11 08:13:15.801255504 +0000 UTC m=+0.026041706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a795c157d8304d375fef2b823cccf40f0c83f80e54a7c3054cfc2803156d04a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a795c157d8304d375fef2b823cccf40f0c83f80e54a7c3054cfc2803156d04a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:15 np0005481065 podman[99673]: 2025-10-11 08:13:15.948709013 +0000 UTC m=+0.173495135 container init 0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6 (image=quay.io/ceph/ceph:v18, name=brave_torvalds, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:15 np0005481065 podman[99673]: 2025-10-11 08:13:15.961751066 +0000 UTC m=+0.186537198 container start 0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6 (image=quay.io/ceph/ceph:v18, name=brave_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:13:15 np0005481065 podman[99673]: 2025-10-11 08:13:15.965572125 +0000 UTC m=+0.190358257 container attach 0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6 (image=quay.io/ceph/ceph:v18, name=brave_torvalds, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.113473045 +0000 UTC m=+0.050013382 container create 7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:16 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Oct 11 04:13:16 np0005481065 systemd[1]: Started libpod-conmon-7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec.scope.
Oct 11 04:13:16 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Oct 11 04:13:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.08600996 +0000 UTC m=+0.022550327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.201123913 +0000 UTC m=+0.137664280 container init 7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hypatia, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.211123659 +0000 UTC m=+0.147663986 container start 7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hypatia, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:13:16 np0005481065 gracious_hypatia[99736]: 167 167
Oct 11 04:13:16 np0005481065 systemd[1]: libpod-7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec.scope: Deactivated successfully.
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.226073897 +0000 UTC m=+0.162614274 container attach 7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.22652977 +0000 UTC m=+0.163070087 container died 7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bb1ab62ad5d1156d66bebc1e25ad29fb203d1abd9d474465b793094c5a74d4c9-merged.mount: Deactivated successfully.
Oct 11 04:13:16 np0005481065 podman[99719]: 2025-10-11 08:13:16.280764951 +0000 UTC m=+0.217305248 container remove 7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hypatia, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:16 np0005481065 systemd[1]: libpod-conmon-7599d2148ff5c882ee264c1bfe7ef8d5af88d95a1ee36ca4b6de6cff7ec7e4ec.scope: Deactivated successfully.
Oct 11 04:13:16 np0005481065 podman[99780]: 2025-10-11 08:13:16.474586606 +0000 UTC m=+0.058419822 container create f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:16 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:13:16 np0005481065 brave_torvalds[99701]: 
Oct 11 04:13:16 np0005481065 brave_torvalds[99701]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:13:16 np0005481065 systemd[1]: Started libpod-conmon-f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae.scope.
Oct 11 04:13:16 np0005481065 systemd[1]: libpod-0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6.scope: Deactivated successfully.
Oct 11 04:13:16 np0005481065 podman[99673]: 2025-10-11 08:13:16.53730469 +0000 UTC m=+0.762090872 container died 0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6 (image=quay.io/ceph/ceph:v18, name=brave_torvalds, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 04:13:16 np0005481065 podman[99780]: 2025-10-11 08:13:16.444780263 +0000 UTC m=+0.028613559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099422af7676bc432f1e5077d0af54e05bbd978816fa8ba850316cda8a32bdca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099422af7676bc432f1e5077d0af54e05bbd978816fa8ba850316cda8a32bdca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099422af7676bc432f1e5077d0af54e05bbd978816fa8ba850316cda8a32bdca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099422af7676bc432f1e5077d0af54e05bbd978816fa8ba850316cda8a32bdca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a795c157d8304d375fef2b823cccf40f0c83f80e54a7c3054cfc2803156d04a6-merged.mount: Deactivated successfully.
Oct 11 04:13:16 np0005481065 podman[99780]: 2025-10-11 08:13:16.590560434 +0000 UTC m=+0.174393720 container init f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:13:16 np0005481065 podman[99780]: 2025-10-11 08:13:16.603922306 +0000 UTC m=+0.187755522 container start f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:13:16 np0005481065 podman[99780]: 2025-10-11 08:13:16.616416194 +0000 UTC m=+0.200249460 container attach f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:13:16 np0005481065 podman[99673]: 2025-10-11 08:13:16.62780956 +0000 UTC m=+0.852595692 container remove 0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6 (image=quay.io/ceph/ceph:v18, name=brave_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:16 np0005481065 systemd[1]: libpod-conmon-0c330f8ab1395fb304058c1b8db3d054b08d819f4bdf22d7b32c45b179e5cde6.scope: Deactivated successfully.
Oct 11 04:13:16 np0005481065 ansible-async_wrapper.py[99658]: Module complete (99658)
Oct 11 04:13:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:17 np0005481065 python3[99866]: ansible-ansible.legacy.async_status Invoked with jid=j508475184611.99600 mode=status _async_dir=/root/.ansible_async
Oct 11 04:13:17 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 11 04:13:17 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 11 04:13:17 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 11 04:13:17 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 11 04:13:17 np0005481065 python3[99917]: ansible-ansible.legacy.async_status Invoked with jid=j508475184611.99600 mode=cleanup _async_dir=/root/.ansible_async
Oct 11 04:13:17 np0005481065 distracted_pare[99799]: {
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "osd_id": 2,
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "type": "bluestore"
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:    },
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "osd_id": 0,
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "type": "bluestore"
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:    },
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "osd_id": 1,
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:        "type": "bluestore"
Oct 11 04:13:17 np0005481065 distracted_pare[99799]:    }
Oct 11 04:13:17 np0005481065 distracted_pare[99799]: }
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:17 np0005481065 systemd[1]: libpod-f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae.scope: Deactivated successfully.
Oct 11 04:13:17 np0005481065 podman[99780]: 2025-10-11 08:13:17.667984817 +0000 UTC m=+1.251818063 container died f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:13:17 np0005481065 systemd[1]: libpod-f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae.scope: Consumed 1.057s CPU time.
Oct 11 04:13:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-099422af7676bc432f1e5077d0af54e05bbd978816fa8ba850316cda8a32bdca-merged.mount: Deactivated successfully.
Oct 11 04:13:17 np0005481065 podman[99780]: 2025-10-11 08:13:17.750459227 +0000 UTC m=+1.334292473 container remove f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:17 np0005481065 systemd[1]: libpod-conmon-f555687ad6ba730696a4bab5ff0db94c72db18ff1fef3f445d12830637d449ae.scope: Deactivated successfully.
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:17 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev eba562f0-5955-46ad-ae79-80f32d21dae7 (Updating rgw.rgw deployment (+1 -> 1))
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vutlzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vutlzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vutlzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:17 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.vutlzt on compute-0
Oct 11 04:13:17 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.vutlzt on compute-0
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vutlzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vutlzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 11 04:13:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:18 np0005481065 python3[100034]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:18 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Oct 11 04:13:18 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Oct 11 04:13:18 np0005481065 podman[100082]: 2025-10-11 08:13:18.236572293 +0000 UTC m=+0.044955877 container create fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996 (image=quay.io/ceph/ceph:v18, name=loving_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:18 np0005481065 systemd[1]: Started libpod-conmon-fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996.scope.
Oct 11 04:13:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/405f0b97642bff1c4d485328aac8e6fa7cec78e807a8b081f1ca50ea969b1b32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/405f0b97642bff1c4d485328aac8e6fa7cec78e807a8b081f1ca50ea969b1b32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:18 np0005481065 podman[100082]: 2025-10-11 08:13:18.219088103 +0000 UTC m=+0.027471737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:18 np0005481065 podman[100082]: 2025-10-11 08:13:18.324979373 +0000 UTC m=+0.133362937 container init fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996 (image=quay.io/ceph/ceph:v18, name=loving_bassi, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:18 np0005481065 podman[100082]: 2025-10-11 08:13:18.336544163 +0000 UTC m=+0.144927707 container start fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996 (image=quay.io/ceph/ceph:v18, name=loving_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:13:18 np0005481065 podman[100082]: 2025-10-11 08:13:18.339644962 +0000 UTC m=+0.148028526 container attach fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996 (image=quay.io/ceph/ceph:v18, name=loving_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.639009326 +0000 UTC m=+0.040012175 container create f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:18 np0005481065 systemd[1]: Started libpod-conmon-f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e.scope.
Oct 11 04:13:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.622463433 +0000 UTC m=+0.023466282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.724021218 +0000 UTC m=+0.125024057 container init f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.73420625 +0000 UTC m=+0.135209129 container start f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:18 np0005481065 brave_dirac[100161]: 167 167
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.738273316 +0000 UTC m=+0.139276175 container attach f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:13:18 np0005481065 systemd[1]: libpod-f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e.scope: Deactivated successfully.
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.740480839 +0000 UTC m=+0.141483708 container died f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:13:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-57d78ee0b907d8b3566da2630bacd9e8c08a9fd19a86ddb3aa5b7e429c0c1ce3-merged.mount: Deactivated successfully.
Oct 11 04:13:18 np0005481065 podman[100144]: 2025-10-11 08:13:18.787835634 +0000 UTC m=+0.188838473 container remove f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:13:18 np0005481065 systemd[1]: libpod-conmon-f0f05d9e0d4c9168f19d2f71cc69e300bcecaab19eb0f69eeb5935b7bfdb1b3e.scope: Deactivated successfully.
Oct 11 04:13:18 np0005481065 systemd[1]: Reloading.
Oct 11 04:13:18 np0005481065 ceph-mon[74313]: Deploying daemon rgw.rgw.compute-0.vutlzt on compute-0
Oct 11 04:13:18 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:13:18 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:13:18 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:13:18 np0005481065 loving_bassi[100099]: 
Oct 11 04:13:18 np0005481065 loving_bassi[100099]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 11 04:13:18 np0005481065 podman[100082]: 2025-10-11 08:13:18.973391312 +0000 UTC m=+0.781774856 container died fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996 (image=quay.io/ceph/ceph:v18, name=loving_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:13:19 np0005481065 systemd[1]: libpod-fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996.scope: Deactivated successfully.
Oct 11 04:13:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-405f0b97642bff1c4d485328aac8e6fa7cec78e807a8b081f1ca50ea969b1b32-merged.mount: Deactivated successfully.
Oct 11 04:13:19 np0005481065 podman[100082]: 2025-10-11 08:13:19.150380586 +0000 UTC m=+0.958764170 container remove fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996 (image=quay.io/ceph/ceph:v18, name=loving_bassi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:13:19 np0005481065 systemd[1]: libpod-conmon-fc0d7e7f9d7369d04c125b18e59288eca5b176d88196905debb8b9ffbe8eb996.scope: Deactivated successfully.
Oct 11 04:13:19 np0005481065 systemd[1]: Reloading.
Oct 11 04:13:19 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 11 04:13:19 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 11 04:13:19 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:13:19 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:13:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 11 04:13:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 11 04:13:19 np0005481065 systemd[1]: Starting Ceph rgw.rgw.compute-0.vutlzt for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:13:19 np0005481065 podman[100338]: 2025-10-11 08:13:19.80386469 +0000 UTC m=+0.062724286 container create 2ea6801f3dab964b4fb5c45939adaf8be37ec554281d26132758560d030517c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-rgw-rgw-compute-0-vutlzt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bd725e55dff883ec88933c8d8a10f2548d521dda94e41c3d74276d1726400d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bd725e55dff883ec88933c8d8a10f2548d521dda94e41c3d74276d1726400d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bd725e55dff883ec88933c8d8a10f2548d521dda94e41c3d74276d1726400d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bd725e55dff883ec88933c8d8a10f2548d521dda94e41c3d74276d1726400d/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.vutlzt supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:19 np0005481065 podman[100338]: 2025-10-11 08:13:19.779420701 +0000 UTC m=+0.038280337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:19 np0005481065 podman[100338]: 2025-10-11 08:13:19.886092692 +0000 UTC m=+0.144952348 container init 2ea6801f3dab964b4fb5c45939adaf8be37ec554281d26132758560d030517c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-rgw-rgw-compute-0-vutlzt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:19 np0005481065 podman[100338]: 2025-10-11 08:13:19.906405563 +0000 UTC m=+0.165265169 container start 2ea6801f3dab964b4fb5c45939adaf8be37ec554281d26132758560d030517c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-rgw-rgw-compute-0-vutlzt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:13:19 np0005481065 bash[100338]: 2ea6801f3dab964b4fb5c45939adaf8be37ec554281d26132758560d030517c2
Oct 11 04:13:19 np0005481065 systemd[1]: Started Ceph rgw.rgw.compute-0.vutlzt for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:13:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:19 np0005481065 radosgw[100360]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:13:19 np0005481065 radosgw[100360]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 11 04:13:19 np0005481065 radosgw[100360]: framework: beast
Oct 11 04:13:19 np0005481065 radosgw[100360]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 11 04:13:19 np0005481065 radosgw[100360]: init_numa not setting numa affinity
Oct 11 04:13:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev eba562f0-5955-46ad-ae79-80f32d21dae7 (Updating rgw.rgw deployment (+1 -> 1))
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event eba562f0-5955-46ad-ae79-80f32d21dae7 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev 9719429d-e28c-49d0-bd8d-e7a161e38827 (Updating mds.cephfs deployment (+1 -> 1))
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aywjqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aywjqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aywjqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.aywjqo on compute-0
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.aywjqo on compute-0
Oct 11 04:13:20 np0005481065 python3[100393]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.220647793 +0000 UTC m=+0.057710472 container create 72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689 (image=quay.io/ceph/ceph:v18, name=hungry_lichterman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:20 np0005481065 systemd[1]: Started libpod-conmon-72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689.scope.
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.192888729 +0000 UTC m=+0.029951418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5887cdd007605abc31c9892e7b2eb4fd9dba29e21ceb99cf425c311daf84af4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5887cdd007605abc31c9892e7b2eb4fd9dba29e21ceb99cf425c311daf84af4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.333534743 +0000 UTC m=+0.170597432 container init 72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689 (image=quay.io/ceph/ceph:v18, name=hungry_lichterman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.341548282 +0000 UTC m=+0.178610931 container start 72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689 (image=quay.io/ceph/ceph:v18, name=hungry_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.345857955 +0000 UTC m=+0.182920664 container attach 72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689 (image=quay.io/ceph/ceph:v18, name=hungry_lichterman, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:20 np0005481065 ansible-async_wrapper.py[99657]: Done in kid B.
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:20 np0005481065 podman[100625]: 2025-10-11 08:13:20.819678021 +0000 UTC m=+0.063466617 container create 73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:13:20 np0005481065 systemd[1]: Started libpod-conmon-73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0.scope.
Oct 11 04:13:20 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:13:20 np0005481065 hungry_lichterman[100529]: 
Oct 11 04:13:20 np0005481065 hungry_lichterman[100529]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 11 04:13:20 np0005481065 podman[100625]: 2025-10-11 08:13:20.79308736 +0000 UTC m=+0.036875996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.898909507 +0000 UTC m=+0.735972236 container died 72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689 (image=quay.io/ceph/ceph:v18, name=hungry_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:20 np0005481065 systemd[1]: libpod-72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689.scope: Deactivated successfully.
Oct 11 04:13:20 np0005481065 podman[100625]: 2025-10-11 08:13:20.922873993 +0000 UTC m=+0.166662639 container init 73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:13:20 np0005481065 podman[100625]: 2025-10-11 08:13:20.935742251 +0000 UTC m=+0.179530847 container start 73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:20 np0005481065 epic_morse[100642]: 167 167
Oct 11 04:13:20 np0005481065 systemd[1]: libpod-73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0.scope: Deactivated successfully.
Oct 11 04:13:20 np0005481065 podman[100625]: 2025-10-11 08:13:20.945112159 +0000 UTC m=+0.188900805 container attach 73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:13:20 np0005481065 podman[100625]: 2025-10-11 08:13:20.945922912 +0000 UTC m=+0.189711568 container died 73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_morse, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:13:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a5887cdd007605abc31c9892e7b2eb4fd9dba29e21ceb99cf425c311daf84af4-merged.mount: Deactivated successfully.
Oct 11 04:13:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c2ed332f6f2b3557fa5837799be93f1572aaf72bb19bee4e2548c073bb1319b4-merged.mount: Deactivated successfully.
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: Saving service rgw.rgw spec with placement compute-0
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aywjqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aywjqo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 11 04:13:20 np0005481065 ceph-mon[74313]: Deploying daemon mds.cephfs.compute-0.aywjqo on compute-0
Oct 11 04:13:20 np0005481065 podman[100470]: 2025-10-11 08:13:20.996638883 +0000 UTC m=+0.833701572 container remove 72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689 (image=quay.io/ceph/ceph:v18, name=hungry_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:13:21 np0005481065 systemd[1]: libpod-conmon-72c0b6d7c97d562fd553d7b5442fcd5614b9c6ad1e5b30eef41671e353918689.scope: Deactivated successfully.
Oct 11 04:13:21 np0005481065 podman[100625]: 2025-10-11 08:13:21.014132694 +0000 UTC m=+0.257921290 container remove 73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_morse, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:13:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 11 04:13:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 11 04:13:21 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 11 04:13:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 11 04:13:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 11 04:13:21 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:21 np0005481065 systemd[1]: libpod-conmon-73145f6e95bc38b88547d378fbb03b1c42d4bd38903d73b99c2ed19ab9d347d0.scope: Deactivated successfully.
Oct 11 04:13:21 np0005481065 systemd[1]: Reloading.
Oct 11 04:13:21 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:13:21 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:13:21 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 11 04:13:21 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 11 04:13:21 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct 11 04:13:21 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct 11 04:13:21 np0005481065 systemd[1]: Reloading.
Oct 11 04:13:21 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:13:21 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:13:21 np0005481065 systemd[1]: Starting Ceph mds.cephfs.compute-0.aywjqo for 33219f8b-dc38-5a8f-a577-8ccc4b37190a...
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 11 04:13:22 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:22 np0005481065 podman[100800]: 2025-10-11 08:13:22.088224351 +0000 UTC m=+0.086715941 container create dcbe23a6d0d5eba9037bdcc62f3d20771592a6f4b47fe9cfbb01c3b839226dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mds-cephfs-compute-0-aywjqo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:22 np0005481065 podman[100800]: 2025-10-11 08:13:22.051781009 +0000 UTC m=+0.050272669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825d3a44627367a4b6798e40279a905c1c7a5da67032fd629acff4a971e8ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825d3a44627367a4b6798e40279a905c1c7a5da67032fd629acff4a971e8ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825d3a44627367a4b6798e40279a905c1c7a5da67032fd629acff4a971e8ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825d3a44627367a4b6798e40279a905c1c7a5da67032fd629acff4a971e8ab1/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.aywjqo supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:22 np0005481065 podman[100800]: 2025-10-11 08:13:22.166311045 +0000 UTC m=+0.164802675 container init dcbe23a6d0d5eba9037bdcc62f3d20771592a6f4b47fe9cfbb01c3b839226dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mds-cephfs-compute-0-aywjqo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:22 np0005481065 podman[100800]: 2025-10-11 08:13:22.172614896 +0000 UTC m=+0.171106486 container start dcbe23a6d0d5eba9037bdcc62f3d20771592a6f4b47fe9cfbb01c3b839226dab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mds-cephfs-compute-0-aywjqo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:13:22 np0005481065 bash[100800]: dcbe23a6d0d5eba9037bdcc62f3d20771592a6f4b47fe9cfbb01c3b839226dab
Oct 11 04:13:22 np0005481065 systemd[1]: Started Ceph mds.cephfs.compute-0.aywjqo for 33219f8b-dc38-5a8f-a577-8ccc4b37190a.
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: main not setting numa affinity
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: pidfile_write: ignore empty --pid-file
Oct 11 04:13:22 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mds-cephfs-compute-0-aywjqo[100842]: starting mds.cephfs.compute-0.aywjqo at 
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo Updating MDS map to version 2 from mon.0
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:22 np0005481065 python3[100835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:13:22 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev 9719429d-e28c-49d0-bd8d-e7a161e38827 (Updating mds.cephfs deployment (+1 -> 1))
Oct 11 04:13:22 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event 9719429d-e28c-49d0-bd8d-e7a161e38827 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:22 np0005481065 podman[100865]: 2025-10-11 08:13:22.351763331 +0000 UTC m=+0.054529621 container create e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f (image=quay.io/ceph/ceph:v18, name=strange_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:22 np0005481065 systemd[1]: Started libpod-conmon-e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f.scope.
Oct 11 04:13:22 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 11 04:13:22 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 11 04:13:22 np0005481065 podman[100865]: 2025-10-11 08:13:22.327520057 +0000 UTC m=+0.030286427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589f570f6455ba604800abf990a25aa1955f4cd20b01891fa87bca1e62751907/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589f570f6455ba604800abf990a25aa1955f4cd20b01891fa87bca1e62751907/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:22 np0005481065 podman[100865]: 2025-10-11 08:13:22.461922622 +0000 UTC m=+0.164688962 container init e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f (image=quay.io/ceph/ceph:v18, name=strange_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:13:22 np0005481065 podman[100865]: 2025-10-11 08:13:22.468926363 +0000 UTC m=+0.171692673 container start e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f (image=quay.io/ceph/ceph:v18, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:22 np0005481065 podman[100865]: 2025-10-11 08:13:22.472117814 +0000 UTC m=+0.174884124 container attach e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f (image=quay.io/ceph/ceph:v18, name=strange_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/1584461413,v1:192.168.122.100:6815/1584461413] as mds.0
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.aywjqo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e3 new map
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-11T08:13:05.020766+0000#012modified#0112025-10-11T08:13:22.632207+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.aywjqo{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/1584461413,v1:192.168.122.100:6815/1584461413] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo Updating MDS map to version 3 from mon.0
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.3 handle_mds_map i am now mds.0.3
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x1
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x100
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1584461413,v1:192.168.122.100:6815/1584461413] up:boot
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x600
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x601
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x602
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x603
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x604
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x605
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x606
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x607
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x608
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.cache creating system inode with ino:0x609
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.aywjqo=up:creating}
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.aywjqo"} v 0) v1
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.aywjqo"}]: dispatch
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e3 all = 0
Oct 11 04:13:22 np0005481065 ceph-mds[100846]: mds.0.3 creating_done
Oct 11 04:13:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.aywjqo is now active in filesystem cephfs as rank 0
Oct 11 04:13:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v112: 194 pgs: 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 11 04:13:23 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 11 04:13:23 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 11 04:13:23 np0005481065 strange_newton[100917]: 
Oct 11 04:13:23 np0005481065 strange_newton[100917]: [{"container_id": "d685bb9538c2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.40%", "created": "2025-10-11T08:11:23.387573Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-11T08:11:23.433390Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T08:13:10.557266Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-11T08:11:23.268505Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.aywjqo", "daemon_name": "mds.cephfs.compute-0.aywjqo", "daemon_type": "mds", "events": ["2025-10-11T08:13:22.267753Z daemon:mds.cephfs.compute-0.aywjqo [INFO] \"Deployed mds.cephfs.compute-0.aywjqo on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "073bb15022dc", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.44%", "created": "2025-10-11T08:10:10.076362Z", "daemon_id": "compute-0.hcsgrm", "daemon_name": "mgr.compute-0.hcsgrm", "daemon_type": "mgr", "events": ["2025-10-11T08:11:28.954723Z daemon:mgr.compute-0.hcsgrm [INFO] \"Reconfigured mgr.compute-0.hcsgrm on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T08:13:10.557105Z", "memory_usage": 549244108, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-11T08:10:09.948144Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mgr.compute-0.hcsgrm", "version": "18.2.7"}, {"container_id": "ef4d743dbf6b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.21%", "created": "2025-10-11T08:10:04.413201Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-11T08:11:28.141334Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T08:13:10.556900Z", "memory_request": 2147483648, "memory_usage": 38409338, "ports": [], "service_name": "mon", "started": "2025-10-11T08:10:07.366168Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@mon.compute-0", "version": "18.2.7"}, {"container_id": "854c41823058", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.60%", "created": "2025-10-11T08:11:53.561458Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-11T08:11:53.605124Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T08:13:10.557409Z", "memory_request": 4294967296, "memory_usage": 60030976, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T08:11:53.434146Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@osd.0", "version": "18.2.7"}, {"container_id": "86b0c1684752", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.78%", "created": "2025-10-11T08:11:58.893610Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-11T08:11:58.989302Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T08:13:10.557539Z", "memory_request": 4294967296, "memory_usage": 60785950, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T08:11:58.749303Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@osd.1", "version": "18.2.7"}, {"container_id": "24098f39a156", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.79%", "created": "2025-10-11T08:12:04.651775Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-11T08:12:04.772926Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T08:13:10.557669Z", "memory_request": 4294967296, "memory_usage": 62568529, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T08:12:04.336971Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.vutlzt", "daemon_name": "rgw.rgw.compute-0.vutlzt", "daemon_type": "rgw", "events": ["2025-10-11T08:13:19.998274Z daemon:rgw.rgw.compute-0.vutlzt [INFO] \"Deployed rgw.rgw.compute-0.vutlzt on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: daemon mds.cephfs.compute-0.aywjqo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: Cluster is now healthy
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: daemon mds.cephfs.compute-0.aywjqo is now active in filesystem cephfs as rank 0
Oct 11 04:13:23 np0005481065 systemd[1]: libpod-e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f.scope: Deactivated successfully.
Oct 11 04:13:23 np0005481065 conmon[100917]: conmon e4fc960481ae3052a120 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f.scope/container/memory.events
Oct 11 04:13:23 np0005481065 podman[100865]: 2025-10-11 08:13:23.08021158 +0000 UTC m=+0.782977910 container died e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f (image=quay.io/ceph/ceph:v18, name=strange_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:13:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-589f570f6455ba604800abf990a25aa1955f4cd20b01891fa87bca1e62751907-merged.mount: Deactivated successfully.
Oct 11 04:13:23 np0005481065 podman[100865]: 2025-10-11 08:13:23.137543541 +0000 UTC m=+0.840309831 container remove e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f (image=quay.io/ceph/ceph:v18, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:23 np0005481065 systemd[1]: libpod-conmon-e4fc960481ae3052a1200eedcc5cc9863cef9a24814fe54c034ead22d093887f.scope: Deactivated successfully.
Oct 11 04:13:23 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 11 04:13:23 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 11 04:13:23 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 11 04:13:23 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 11 04:13:23 np0005481065 podman[101143]: 2025-10-11 08:13:23.489718875 +0000 UTC m=+0.096615605 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:23 np0005481065 podman[101143]: 2025-10-11 08:13:23.608153783 +0000 UTC m=+0.215050443 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e4 new map
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-11T08:13:05.020766+0000#012modified#0112025-10-11T08:13:23.637014+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.aywjqo{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1584461413,v1:192.168.122.100:6815/1584461413] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct 11 04:13:23 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo Updating MDS map to version 4 from mon.0
Oct 11 04:13:23 np0005481065 ceph-mds[100846]: mds.0.3 handle_mds_map i am now mds.0.3
Oct 11 04:13:23 np0005481065 ceph-mds[100846]: mds.0.3 handle_mds_map state change up:creating --> up:active
Oct 11 04:13:23 np0005481065 ceph-mds[100846]: mds.0.3 recovery_done -- successful recovery!
Oct 11 04:13:23 np0005481065 ceph-mds[100846]: mds.0.3 active_start
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1584461413,v1:192.168.122.100:6815/1584461413] up:active
Oct 11 04:13:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.aywjqo=up:active}
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 11 04:13:24 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:24 np0005481065 python3[101273]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:24 np0005481065 podman[101299]: 2025-10-11 08:13:24.328148281 +0000 UTC m=+0.072538247 container create c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee (image=quay.io/ceph/ceph:v18, name=focused_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:24 np0005481065 systemd[1]: Started libpod-conmon-c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee.scope.
Oct 11 04:13:24 np0005481065 podman[101299]: 2025-10-11 08:13:24.295447945 +0000 UTC m=+0.039837971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce7e5d8ab262fe5691434aacf6d958b58bb026dc2e32fb4e94c317bae7dd120/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce7e5d8ab262fe5691434aacf6d958b58bb026dc2e32fb4e94c317bae7dd120/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:24 np0005481065 podman[101299]: 2025-10-11 08:13:24.425625859 +0000 UTC m=+0.170015865 container init c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee (image=quay.io/ceph/ceph:v18, name=focused_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:24 np0005481065 podman[101299]: 2025-10-11 08:13:24.437133789 +0000 UTC m=+0.181523755 container start c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee (image=quay.io/ceph/ceph:v18, name=focused_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:24 np0005481065 podman[101299]: 2025-10-11 08:13:24.440800044 +0000 UTC m=+0.185190000 container attach c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee (image=quay.io/ceph/ceph:v18, name=focused_banach, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v115: 195 pgs: 1 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:24 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 12 completed events
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:13:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240946640' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 11 04:13:25 np0005481065 focused_banach[101337]: 
Oct 11 04:13:25 np0005481065 focused_banach[101337]: {"fsid":"33219f8b-dc38-5a8f-a577-8ccc4b37190a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":197,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":49,"num_osds":3,"num_up_osds":3,"osd_up_since":1760170331,"num_in_osds":3,"osd_in_since":1760170303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":6,"data_bytes":460666,"bytes_used":84299776,"bytes_avail":64327626752,"bytes_total":64411926528,"read_bytes_sec":511,"write_bytes_sec":511,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":4,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.aywjqo","status":"up:active","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-10-11T08:13:12.684026+0000","services":{}},"progress_events":{}}
Oct 11 04:13:25 np0005481065 systemd[1]: libpod-c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee.scope: Deactivated successfully.
Oct 11 04:13:25 np0005481065 podman[101299]: 2025-10-11 08:13:25.043401723 +0000 UTC m=+0.787791679 container died c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee (image=quay.io/ceph/ceph:v18, name=focused_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 11 04:13:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5ce7e5d8ab262fe5691434aacf6d958b58bb026dc2e32fb4e94c317bae7dd120-merged.mount: Deactivated successfully.
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:25 np0005481065 podman[101299]: 2025-10-11 08:13:25.102404931 +0000 UTC m=+0.846794847 container remove c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee (image=quay.io/ceph/ceph:v18, name=focused_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:13:25 np0005481065 systemd[1]: libpod-conmon-c1d71b93085c89b109a096b93ff70176627138c79064446511df41f84e4bf3ee.scope: Deactivated successfully.
Oct 11 04:13:25 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 11 04:13:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=0/0 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [2] r=0 lpr=50 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:25 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 11 04:13:25 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 11 04:13:25 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 08187155-cd50-4ce3-aee6-22b0bfc7f025 does not exist
Oct 11 04:13:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4f14a060-0c1f-4979-b4de-9f5fecdb7271 does not exist
Oct 11 04:13:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9bdad70a-d7c5-486f-9d26-c43c47f0493d does not exist
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Oct 11 04:13:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Oct 11 04:13:25 np0005481065 podman[101651]: 2025-10-11 08:13:25.920721501 +0000 UTC m=+0.049243679 container create 901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:25 np0005481065 systemd[1]: Started libpod-conmon-901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f.scope.
Oct 11 04:13:25 np0005481065 podman[101651]: 2025-10-11 08:13:25.895769818 +0000 UTC m=+0.024292086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:26 np0005481065 podman[101651]: 2025-10-11 08:13:26.012802186 +0000 UTC m=+0.141324434 container init 901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:13:26 np0005481065 podman[101651]: 2025-10-11 08:13:26.022060181 +0000 UTC m=+0.150582399 container start 901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:13:26 np0005481065 hungry_mestorf[101667]: 167 167
Oct 11 04:13:26 np0005481065 podman[101651]: 2025-10-11 08:13:26.029145473 +0000 UTC m=+0.157667741 container attach 901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:13:26 np0005481065 systemd[1]: libpod-901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f.scope: Deactivated successfully.
Oct 11 04:13:26 np0005481065 podman[101651]: 2025-10-11 08:13:26.031235763 +0000 UTC m=+0.159757951 container died 901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:13:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a666a1743d577995f1530e751973b2f08ee8571eda5da549bdc139fb1f8ab8cb-merged.mount: Deactivated successfully.
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 11 04:13:26 np0005481065 podman[101651]: 2025-10-11 08:13:26.075442598 +0000 UTC m=+0.203964776 container remove 901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/2398550775' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 11 04:13:26 np0005481065 systemd[1]: libpod-conmon-901fbf06f32d28fd4b9f1cbf4488ae264e97e2258fe7f974314623e515a5e78f.scope: Deactivated successfully.
Oct 11 04:13:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=50/51 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [2] r=0 lpr=50 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:26 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 11 04:13:26 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 11 04:13:26 np0005481065 python3[101708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:26 np0005481065 podman[101716]: 2025-10-11 08:13:26.266965037 +0000 UTC m=+0.048594731 container create 530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:13:26 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.282937804 +0000 UTC m=+0.042792305 container create e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406 (image=quay.io/ceph/ceph:v18, name=quizzical_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:13:26 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 11 04:13:26 np0005481065 systemd[1]: Started libpod-conmon-530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088.scope.
Oct 11 04:13:26 np0005481065 systemd[1]: Started libpod-conmon-e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406.scope.
Oct 11 04:13:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a43b2fb5f289ba107a04a14191e858d8b405370170d7603200ff15ffe320f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a43b2fb5f289ba107a04a14191e858d8b405370170d7603200ff15ffe320f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a43b2fb5f289ba107a04a14191e858d8b405370170d7603200ff15ffe320f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a43b2fb5f289ba107a04a14191e858d8b405370170d7603200ff15ffe320f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a43b2fb5f289ba107a04a14191e858d8b405370170d7603200ff15ffe320f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cd64395fc54f86f91a0f6794624953d1497f545a0097db3bdfd15eb5970cbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cd64395fc54f86f91a0f6794624953d1497f545a0097db3bdfd15eb5970cbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:26 np0005481065 podman[101716]: 2025-10-11 08:13:26.2485509 +0000 UTC m=+0.030180634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:26 np0005481065 podman[101716]: 2025-10-11 08:13:26.344180656 +0000 UTC m=+0.125810370 container init 530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.349396715 +0000 UTC m=+0.109251196 container init e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406 (image=quay.io/ceph/ceph:v18, name=quizzical_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:13:26 np0005481065 podman[101716]: 2025-10-11 08:13:26.352798702 +0000 UTC m=+0.134428426 container start 530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.355139449 +0000 UTC m=+0.114993910 container start e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406 (image=quay.io/ceph/ceph:v18, name=quizzical_franklin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:26 np0005481065 podman[101716]: 2025-10-11 08:13:26.355688915 +0000 UTC m=+0.137318639 container attach 530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.262932522 +0000 UTC m=+0.022787033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.360081971 +0000 UTC m=+0.119936432 container attach e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406 (image=quay.io/ceph/ceph:v18, name=quizzical_franklin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v118: 196 pgs: 2 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 11 04:13:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235544457' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 11 04:13:26 np0005481065 quizzical_franklin[101763]: 
Oct 11 04:13:26 np0005481065 quizzical_franklin[101763]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.vutlzt","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 11 04:13:26 np0005481065 systemd[1]: libpod-e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406.scope: Deactivated successfully.
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.916101897 +0000 UTC m=+0.675956398 container died e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406 (image=quay.io/ceph/ceph:v18, name=quizzical_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:13:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-58cd64395fc54f86f91a0f6794624953d1497f545a0097db3bdfd15eb5970cbd-merged.mount: Deactivated successfully.
Oct 11 04:13:26 np0005481065 podman[101736]: 2025-10-11 08:13:26.969845354 +0000 UTC m=+0.729699835 container remove e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406 (image=quay.io/ceph/ceph:v18, name=quizzical_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:26 np0005481065 systemd[1]: libpod-conmon-e913265825e36b1bf1412ac095c7af51f4ca39e25fc61f6736abbeca309e1406.scope: Deactivated successfully.
Oct 11 04:13:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 11 04:13:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 11 04:13:27 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 11 04:13:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 11 04:13:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 11 04:13:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:13:27 np0005481065 intelligent_khayyam[101761]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:13:27 np0005481065 intelligent_khayyam[101761]: --> relative data size: 1.0
Oct 11 04:13:27 np0005481065 intelligent_khayyam[101761]: --> All data devices are unavailable
Oct 11 04:13:27 np0005481065 systemd[1]: libpod-530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088.scope: Deactivated successfully.
Oct 11 04:13:27 np0005481065 podman[101716]: 2025-10-11 08:13:27.413547598 +0000 UTC m=+1.195177312 container died 530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:13:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d8a43b2fb5f289ba107a04a14191e858d8b405370170d7603200ff15ffe320f4-merged.mount: Deactivated successfully.
Oct 11 04:13:27 np0005481065 podman[101716]: 2025-10-11 08:13:27.499993751 +0000 UTC m=+1.281623465 container remove 530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:13:27 np0005481065 systemd[1]: libpod-conmon-530c70e839339644627501fc9311e8fd1a9a6f70f88bf4cec50e2bf9563b2088.scope: Deactivated successfully.
Oct 11 04:13:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:27 np0005481065 ceph-mds[100846]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 11 04:13:27 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mds-cephfs-compute-0-aywjqo[100842]: 2025-10-11T08:13:27.651+0000 7fa82b3cd640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 11 04:13:28 np0005481065 python3[101963]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:28 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:13:28 np0005481065 podman[102000]: 2025-10-11 08:13:28.169289438 +0000 UTC m=+0.040112458 container create bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82 (image=quay.io/ceph/ceph:v18, name=zealous_merkle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.196421334 +0000 UTC m=+0.043477345 container create 4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:28 np0005481065 systemd[1]: Started libpod-conmon-bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82.scope.
Oct 11 04:13:28 np0005481065 systemd[1]: Started libpod-conmon-4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8.scope.
Oct 11 04:13:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c127a87c1a0c9177e34e78075538b17dfa41ee6c9ba00c7ec61783b28cff5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72c127a87c1a0c9177e34e78075538b17dfa41ee6c9ba00c7ec61783b28cff5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:28 np0005481065 podman[102000]: 2025-10-11 08:13:28.15326545 +0000 UTC m=+0.024088510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:28 np0005481065 podman[102000]: 2025-10-11 08:13:28.261434544 +0000 UTC m=+0.132257564 container init bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82 (image=quay.io/ceph/ceph:v18, name=zealous_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.264566424 +0000 UTC m=+0.111622475 container init 4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moser, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:28 np0005481065 podman[102000]: 2025-10-11 08:13:28.268600929 +0000 UTC m=+0.139423939 container start bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82 (image=quay.io/ceph/ceph:v18, name=zealous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:28 np0005481065 podman[102000]: 2025-10-11 08:13:28.271730279 +0000 UTC m=+0.142553319 container attach bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82 (image=quay.io/ceph/ceph:v18, name=zealous_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.271672327 +0000 UTC m=+0.118728348 container start 4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.176425402 +0000 UTC m=+0.023481453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:28 np0005481065 hardcore_moser[102037]: 167 167
Oct 11 04:13:28 np0005481065 systemd[1]: libpod-4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8.scope: Deactivated successfully.
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.276135455 +0000 UTC m=+0.123191496 container attach 4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.276504795 +0000 UTC m=+0.123560816 container died 4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moser, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-442c18a6feb32347a5fd1a072681f55541db0ac87115a67b0367c213b1d251f6-merged.mount: Deactivated successfully.
Oct 11 04:13:28 np0005481065 podman[102010]: 2025-10-11 08:13:28.307830231 +0000 UTC m=+0.154886242 container remove 4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:13:28 np0005481065 systemd[1]: libpod-conmon-4cb7af5947d6d11a7c7f030c8348b0b776d2a07c0501f24d406a235b736b39a8.scope: Deactivated successfully.
Oct 11 04:13:28 np0005481065 podman[102061]: 2025-10-11 08:13:28.470696081 +0000 UTC m=+0.055131398 container create 8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:28 np0005481065 systemd[1]: Started libpod-conmon-8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140.scope.
Oct 11 04:13:28 np0005481065 podman[102061]: 2025-10-11 08:13:28.442098733 +0000 UTC m=+0.026534070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80bf37e4e40315af6f9c713f8887556e9adbdb88630b8de033473afd3e3c28d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80bf37e4e40315af6f9c713f8887556e9adbdb88630b8de033473afd3e3c28d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80bf37e4e40315af6f9c713f8887556e9adbdb88630b8de033473afd3e3c28d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80bf37e4e40315af6f9c713f8887556e9adbdb88630b8de033473afd3e3c28d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:28 np0005481065 podman[102061]: 2025-10-11 08:13:28.563908327 +0000 UTC m=+0.148343694 container init 8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:28 np0005481065 podman[102061]: 2025-10-11 08:13:28.578613548 +0000 UTC m=+0.163048875 container start 8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:13:28 np0005481065 podman[102061]: 2025-10-11 08:13:28.582804598 +0000 UTC m=+0.167239935 container attach 8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:13:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 11 04:13:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260397818' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 11 04:13:28 np0005481065 zealous_merkle[102034]: mimic
Oct 11 04:13:28 np0005481065 systemd[1]: libpod-bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82.scope: Deactivated successfully.
Oct 11 04:13:28 np0005481065 podman[102104]: 2025-10-11 08:13:28.920256052 +0000 UTC m=+0.044881785 container died bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82 (image=quay.io/ceph/ceph:v18, name=zealous_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b72c127a87c1a0c9177e34e78075538b17dfa41ee6c9ba00c7ec61783b28cff5-merged.mount: Deactivated successfully.
Oct 11 04:13:28 np0005481065 podman[102104]: 2025-10-11 08:13:28.997422529 +0000 UTC m=+0.122048212 container remove bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82 (image=quay.io/ceph/ceph:v18, name=zealous_merkle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:13:29 np0005481065 systemd[1]: libpod-conmon-bf46a39721894c114817c51e9935df24c30f07b75aa399f90862ab1ae8377d82.scope: Deactivated successfully.
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 11 04:13:29 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 11 04:13:29 np0005481065 ceph-mon[74313]: from='client.? 192.168.122.100:0/3573938060' entity='client.rgw.rgw.compute-0.vutlzt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 11 04:13:29 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 11 04:13:29 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 11 04:13:29 np0005481065 radosgw[100360]: LDAP not started since no server URIs were provided in the configuration.
Oct 11 04:13:29 np0005481065 radosgw[100360]: framework: beast
Oct 11 04:13:29 np0005481065 radosgw[100360]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 11 04:13:29 np0005481065 radosgw[100360]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 11 04:13:29 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-rgw-rgw-compute-0-vutlzt[100353]: 2025-10-11T08:13:29.245+0000 7f4f0465b940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 11 04:13:29 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 11 04:13:29 np0005481065 radosgw[100360]: starting handler: beast
Oct 11 04:13:29 np0005481065 radosgw[100360]: set uid:gid to 167:167 (ceph:ceph)
Oct 11 04:13:29 np0005481065 radosgw[100360]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.vutlzt,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864360,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=1cdd971b-5b73-4c29-a702-29f1d5ecd8a3,zone_name=default,zonegroup_id=da4a8440-20e4-4bfa-82b9-a52631e28f87,zonegroup_name=default}
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]: {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:    "0": [
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:        {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "devices": [
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "/dev/loop3"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            ],
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_name": "ceph_lv0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_size": "21470642176",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "name": "ceph_lv0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "tags": {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.crush_device_class": "",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.encrypted": "0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osd_id": "0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.type": "block",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.vdo": "0"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            },
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "type": "block",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "vg_name": "ceph_vg0"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:        }
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:    ],
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:    "1": [
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:        {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "devices": [
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "/dev/loop4"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            ],
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_name": "ceph_lv1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_size": "21470642176",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "name": "ceph_lv1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "tags": {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.crush_device_class": "",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.encrypted": "0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osd_id": "1",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.type": "block",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.vdo": "0"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            },
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "type": "block",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "vg_name": "ceph_vg1"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:        }
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:    ],
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:    "2": [
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:        {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "devices": [
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "/dev/loop5"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            ],
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_name": "ceph_lv2",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_size": "21470642176",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "name": "ceph_lv2",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "tags": {
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.crush_device_class": "",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.encrypted": "0",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osd_id": "2",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.type": "block",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:                "ceph.vdo": "0"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            },
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "type": "block",
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:            "vg_name": "ceph_vg2"
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:        }
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]:    ]
Oct 11 04:13:29 np0005481065 priceless_lichterman[102078]: }
Oct 11 04:13:29 np0005481065 systemd[1]: libpod-8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140.scope: Deactivated successfully.
Oct 11 04:13:29 np0005481065 podman[102061]: 2025-10-11 08:13:29.382060503 +0000 UTC m=+0.966495820 container died 8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f80bf37e4e40315af6f9c713f8887556e9adbdb88630b8de033473afd3e3c28d-merged.mount: Deactivated successfully.
Oct 11 04:13:29 np0005481065 podman[102061]: 2025-10-11 08:13:29.466767557 +0000 UTC m=+1.051202884 container remove 8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:13:29 np0005481065 systemd[1]: libpod-conmon-8ceb01e72f0044944f4922c06a8392288b0d008bd6b2ca4d354b08f75e481140.scope: Deactivated successfully.
Oct 11 04:13:30 np0005481065 python3[102804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.179712133 +0000 UTC m=+0.040221742 container create 9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:13:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 11 04:13:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 11 04:13:30 np0005481065 podman[102851]: 2025-10-11 08:13:30.204951285 +0000 UTC m=+0.044649259 container create 38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb (image=quay.io/ceph/ceph:v18, name=nervous_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:13:30 np0005481065 systemd[1]: Started libpod-conmon-9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9.scope.
Oct 11 04:13:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:30 np0005481065 systemd[1]: Started libpod-conmon-38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb.scope.
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.250386305 +0000 UTC m=+0.110895914 container init 9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.257121407 +0000 UTC m=+0.117631036 container start 9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.163755586 +0000 UTC m=+0.024265225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.260387141 +0000 UTC m=+0.120896740 container attach 9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:13:30 np0005481065 inspiring_wiles[102873]: 167 167
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.263189351 +0000 UTC m=+0.123698980 container died 9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:30 np0005481065 systemd[1]: libpod-9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9.scope: Deactivated successfully.
Oct 11 04:13:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b4c19ff7bc95957ccfc20c1a29ede1ff0bbaec4f6045c60c1e24afa56dca5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b4c19ff7bc95957ccfc20c1a29ede1ff0bbaec4f6045c60c1e24afa56dca5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:30 np0005481065 podman[102851]: 2025-10-11 08:13:30.182384459 +0000 UTC m=+0.022082443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:13:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6e3e04f86280a0514948af0dbceea73c4d5351021e341d05dafe96b1191d735b-merged.mount: Deactivated successfully.
Oct 11 04:13:30 np0005481065 podman[102851]: 2025-10-11 08:13:30.295487915 +0000 UTC m=+0.135185969 container init 38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb (image=quay.io/ceph/ceph:v18, name=nervous_clarke, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:30 np0005481065 podman[102851]: 2025-10-11 08:13:30.300801997 +0000 UTC m=+0.140499971 container start 38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb (image=quay.io/ceph/ceph:v18, name=nervous_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:30 np0005481065 podman[102851]: 2025-10-11 08:13:30.312862952 +0000 UTC m=+0.152560966 container attach 38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb (image=quay.io/ceph/ceph:v18, name=nervous_clarke, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:30 np0005481065 podman[102843]: 2025-10-11 08:13:30.315960291 +0000 UTC m=+0.176469900 container remove 9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:13:30 np0005481065 systemd[1]: libpod-conmon-9ad183f8cf2a09f499be12eb4ee89db822760838acff3b1c8b68fa99104bf1b9.scope: Deactivated successfully.
Oct 11 04:13:30 np0005481065 podman[102904]: 2025-10-11 08:13:30.487628731 +0000 UTC m=+0.038099211 container create 44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:13:30 np0005481065 systemd[1]: Started libpod-conmon-44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991.scope.
Oct 11 04:13:30 np0005481065 podman[102904]: 2025-10-11 08:13:30.470525311 +0000 UTC m=+0.020995811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d38d918e668433a0ce9360d9eea2a5f650b9552c88e4f7c4fc952973f94eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d38d918e668433a0ce9360d9eea2a5f650b9552c88e4f7c4fc952973f94eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d38d918e668433a0ce9360d9eea2a5f650b9552c88e4f7c4fc952973f94eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d38d918e668433a0ce9360d9eea2a5f650b9552c88e4f7c4fc952973f94eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:30 np0005481065 podman[102904]: 2025-10-11 08:13:30.589214807 +0000 UTC m=+0.139685307 container init 44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:30 np0005481065 podman[102904]: 2025-10-11 08:13:30.602092495 +0000 UTC m=+0.152563015 container start 44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:30 np0005481065 podman[102904]: 2025-10-11 08:13:30.606906443 +0000 UTC m=+0.157377023 container attach 44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:13:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 222 B/s rd, 3.5 KiB/s wr, 10 op/s
Oct 11 04:13:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 11 04:13:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049820678' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 11 04:13:30 np0005481065 nervous_clarke[102878]: 
Oct 11 04:13:30 np0005481065 systemd[1]: libpod-38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb.scope: Deactivated successfully.
Oct 11 04:13:30 np0005481065 nervous_clarke[102878]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Oct 11 04:13:30 np0005481065 podman[102948]: 2025-10-11 08:13:30.951637355 +0000 UTC m=+0.022423582 container died 38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb (image=quay.io/ceph/ceph:v18, name=nervous_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c98b4c19ff7bc95957ccfc20c1a29ede1ff0bbaec4f6045c60c1e24afa56dca5-merged.mount: Deactivated successfully.
Oct 11 04:13:30 np0005481065 podman[102948]: 2025-10-11 08:13:30.990903158 +0000 UTC m=+0.061689385 container remove 38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb (image=quay.io/ceph/ceph:v18, name=nervous_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:30 np0005481065 systemd[1]: libpod-conmon-38d1aa8fbf93940029f27c7e5de76d582238c01bb3d126e766302db0f39a7bdb.scope: Deactivated successfully.
Oct 11 04:13:31 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 11 04:13:31 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]: {
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "osd_id": 2,
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "type": "bluestore"
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:    },
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "osd_id": 0,
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "type": "bluestore"
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:    },
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "osd_id": 1,
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:        "type": "bluestore"
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]:    }
Oct 11 04:13:31 np0005481065 reverent_nobel[102922]: }
Oct 11 04:13:31 np0005481065 systemd[1]: libpod-44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991.scope: Deactivated successfully.
Oct 11 04:13:31 np0005481065 systemd[1]: libpod-44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991.scope: Consumed 1.134s CPU time.
Oct 11 04:13:31 np0005481065 podman[102904]: 2025-10-11 08:13:31.746762302 +0000 UTC m=+1.297232792 container died 44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nobel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:13:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d6d38d918e668433a0ce9360d9eea2a5f650b9552c88e4f7c4fc952973f94eb9-merged.mount: Deactivated successfully.
Oct 11 04:13:31 np0005481065 podman[102904]: 2025-10-11 08:13:31.808667393 +0000 UTC m=+1.359137893 container remove 44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:31 np0005481065 systemd[1]: libpod-conmon-44fa264b9dbf5281ca5fe218b66b4bb234f18da9e6a113fe9dc824fae3187991.scope: Deactivated successfully.
Oct 11 04:13:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5e71d648-ce6f-4107-a143-80e784133de9 does not exist
Oct 11 04:13:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5762873d-8147-45f0-8db7-43b50b097232 does not exist
Oct 11 04:13:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 8.0 KiB/s wr, 191 op/s
Oct 11 04:13:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:32 np0005481065 podman[103228]: 2025-10-11 08:13:32.860410182 +0000 UTC m=+0.081285707 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:13:32 np0005481065 podman[103228]: 2025-10-11 08:13:32.983263126 +0000 UTC m=+0.204138631 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:13:33 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 11 04:13:33 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 11 04:13:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:34 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:13:34 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:34 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4e5befa3-2665-4d60-ba0a-1874e25b3338 does not exist
Oct 11 04:13:34 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3a9d81f8-3f42-42a2-ba96-42f7545d1d76 does not exist
Oct 11 04:13:34 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1cdd6f06-4998-480e-8dd6-07a281602865 does not exist
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:13:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:13:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 6.3 KiB/s wr, 149 op/s
Oct 11 04:13:35 np0005481065 podman[103526]: 2025-10-11 08:13:34.950203726 +0000 UTC m=+0.024769480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:35 np0005481065 podman[103526]: 2025-10-11 08:13:35.079444043 +0000 UTC m=+0.154009787 container create e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 04:13:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:13:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:13:35 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 11 04:13:35 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 11 04:13:35 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 11 04:13:35 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 11 04:13:35 np0005481065 systemd[1]: Started libpod-conmon-e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589.scope.
Oct 11 04:13:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:35 np0005481065 podman[103526]: 2025-10-11 08:13:35.457263462 +0000 UTC m=+0.531829266 container init e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:13:35 np0005481065 podman[103526]: 2025-10-11 08:13:35.471299323 +0000 UTC m=+0.545865077 container start e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:13:35 np0005481065 practical_stonebraker[103542]: 167 167
Oct 11 04:13:35 np0005481065 systemd[1]: libpod-e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589.scope: Deactivated successfully.
Oct 11 04:13:35 np0005481065 podman[103526]: 2025-10-11 08:13:35.612769041 +0000 UTC m=+0.687334795 container attach e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:13:35 np0005481065 podman[103526]: 2025-10-11 08:13:35.613525322 +0000 UTC m=+0.688091066 container died e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:13:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2bef3e8398a829bdb6de235509224bade57ddd65950561ed76caf1d28301d087-merged.mount: Deactivated successfully.
Oct 11 04:13:36 np0005481065 podman[103547]: 2025-10-11 08:13:36.023400628 +0000 UTC m=+0.521765288 container remove e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:13:36 np0005481065 systemd[1]: libpod-conmon-e421664510fd739f66105ad4f08a711609b73729f8ff8cc0bf3692dd58d0f589.scope: Deactivated successfully.
Oct 11 04:13:36 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 11 04:13:36 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 11 04:13:36 np0005481065 podman[103569]: 2025-10-11 08:13:36.32688403 +0000 UTC m=+0.123318919 container create 7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:36 np0005481065 podman[103569]: 2025-10-11 08:13:36.248439746 +0000 UTC m=+0.044874735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:36 np0005481065 systemd[1]: Started libpod-conmon-7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36.scope.
Oct 11 04:13:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1844651ab0ead4c05b1a1bc4e9afb593ea959f3baa40dcba4cce5a339155372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1844651ab0ead4c05b1a1bc4e9afb593ea959f3baa40dcba4cce5a339155372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1844651ab0ead4c05b1a1bc4e9afb593ea959f3baa40dcba4cce5a339155372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1844651ab0ead4c05b1a1bc4e9afb593ea959f3baa40dcba4cce5a339155372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1844651ab0ead4c05b1a1bc4e9afb593ea959f3baa40dcba4cce5a339155372/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:36 np0005481065 podman[103569]: 2025-10-11 08:13:36.473606998 +0000 UTC m=+0.270041947 container init 7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:36 np0005481065 podman[103569]: 2025-10-11 08:13:36.490917153 +0000 UTC m=+0.287352072 container start 7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:13:36 np0005481065 podman[103569]: 2025-10-11 08:13:36.545959608 +0000 UTC m=+0.342394517 container attach 7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:13:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 128 op/s
Oct 11 04:13:37 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Oct 11 04:13:37 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Oct 11 04:13:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:37 np0005481065 recursing_borg[103585]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:13:37 np0005481065 recursing_borg[103585]: --> relative data size: 1.0
Oct 11 04:13:37 np0005481065 recursing_borg[103585]: --> All data devices are unavailable
Oct 11 04:13:37 np0005481065 systemd[1]: libpod-7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36.scope: Deactivated successfully.
Oct 11 04:13:37 np0005481065 systemd[1]: libpod-7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36.scope: Consumed 1.143s CPU time.
Oct 11 04:13:37 np0005481065 podman[103569]: 2025-10-11 08:13:37.727680474 +0000 UTC m=+1.524115393 container died 7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f1844651ab0ead4c05b1a1bc4e9afb593ea959f3baa40dcba4cce5a339155372-merged.mount: Deactivated successfully.
Oct 11 04:13:38 np0005481065 podman[103569]: 2025-10-11 08:13:38.204932797 +0000 UTC m=+2.001367666 container remove 7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:13:38 np0005481065 systemd[1]: libpod-conmon-7c41cc829f923976debcf9d99f161085d67b72a896e2a4f1eb9808c7e29a1b36.scope: Deactivated successfully.
Oct 11 04:13:38 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 11 04:13:38 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 11 04:13:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.2 KiB/s wr, 109 op/s
Oct 11 04:13:39 np0005481065 podman[103768]: 2025-10-11 08:13:39.026302375 +0000 UTC m=+0.078849887 container create a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:39 np0005481065 podman[103768]: 2025-10-11 08:13:38.98593001 +0000 UTC m=+0.038477582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:39 np0005481065 systemd[1]: Started libpod-conmon-a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35.scope.
Oct 11 04:13:39 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 11 04:13:39 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 11 04:13:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:39 np0005481065 podman[103768]: 2025-10-11 08:13:39.321747437 +0000 UTC m=+0.374294929 container init a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:13:39 np0005481065 podman[103768]: 2025-10-11 08:13:39.333137663 +0000 UTC m=+0.385685175 container start a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:13:39 np0005481065 agitated_bhabha[103784]: 167 167
Oct 11 04:13:39 np0005481065 systemd[1]: libpod-a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35.scope: Deactivated successfully.
Oct 11 04:13:39 np0005481065 podman[103768]: 2025-10-11 08:13:39.549886273 +0000 UTC m=+0.602433775 container attach a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:13:39 np0005481065 podman[103768]: 2025-10-11 08:13:39.550569833 +0000 UTC m=+0.603117325 container died a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:13:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5750d9fafb05cd7951c05866ff4b7d4100157e81aa05726f41295b1e6f96ee30-merged.mount: Deactivated successfully.
Oct 11 04:13:40 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct 11 04:13:40 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct 11 04:13:40 np0005481065 podman[103768]: 2025-10-11 08:13:40.492550952 +0000 UTC m=+1.545098474 container remove a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:13:40 np0005481065 systemd[1]: libpod-conmon-a8180464b4585fa21b665db17e47dac2f6d9b12b4bfb9aba2793db2f78679b35.scope: Deactivated successfully.
Oct 11 04:13:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 95 op/s
Oct 11 04:13:40 np0005481065 podman[103808]: 2025-10-11 08:13:40.746188748 +0000 UTC m=+0.080900885 container create d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:13:40 np0005481065 podman[103808]: 2025-10-11 08:13:40.706585845 +0000 UTC m=+0.041297982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:40 np0005481065 systemd[1]: Started libpod-conmon-d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea.scope.
Oct 11 04:13:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b972826c19bf9a501dfad9514d28a4e9767c8c61d1001a09f98b24b74093a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b972826c19bf9a501dfad9514d28a4e9767c8c61d1001a09f98b24b74093a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b972826c19bf9a501dfad9514d28a4e9767c8c61d1001a09f98b24b74093a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b972826c19bf9a501dfad9514d28a4e9767c8c61d1001a09f98b24b74093a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:40 np0005481065 podman[103808]: 2025-10-11 08:13:40.887920853 +0000 UTC m=+0.222632990 container init d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:13:40 np0005481065 podman[103808]: 2025-10-11 08:13:40.898836715 +0000 UTC m=+0.233548802 container start d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:13:40 np0005481065 podman[103808]: 2025-10-11 08:13:40.9175272 +0000 UTC m=+0.252239277 container attach d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]: {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:    "0": [
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:        {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "devices": [
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "/dev/loop3"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            ],
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_name": "ceph_lv0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_size": "21470642176",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "name": "ceph_lv0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "tags": {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.crush_device_class": "",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.encrypted": "0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osd_id": "0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.type": "block",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.vdo": "0"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            },
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "type": "block",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "vg_name": "ceph_vg0"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:        }
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:    ],
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:    "1": [
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:        {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "devices": [
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "/dev/loop4"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            ],
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_name": "ceph_lv1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_size": "21470642176",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "name": "ceph_lv1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "tags": {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.crush_device_class": "",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.encrypted": "0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osd_id": "1",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.type": "block",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.vdo": "0"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            },
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "type": "block",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "vg_name": "ceph_vg1"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:        }
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:    ],
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:    "2": [
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:        {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "devices": [
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "/dev/loop5"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            ],
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_name": "ceph_lv2",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_size": "21470642176",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "name": "ceph_lv2",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "tags": {
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.cluster_name": "ceph",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.crush_device_class": "",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.encrypted": "0",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osd_id": "2",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.type": "block",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:                "ceph.vdo": "0"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            },
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "type": "block",
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:            "vg_name": "ceph_vg2"
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:        }
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]:    ]
Oct 11 04:13:41 np0005481065 hungry_williamson[103824]: }
Oct 11 04:13:41 np0005481065 systemd[1]: libpod-d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea.scope: Deactivated successfully.
Oct 11 04:13:41 np0005481065 podman[103808]: 2025-10-11 08:13:41.638420892 +0000 UTC m=+0.973132949 container died d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_williamson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:13:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4b0b972826c19bf9a501dfad9514d28a4e9767c8c61d1001a09f98b24b74093a-merged.mount: Deactivated successfully.
Oct 11 04:13:41 np0005481065 podman[103808]: 2025-10-11 08:13:41.779680144 +0000 UTC m=+1.114392221 container remove d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:13:41 np0005481065 systemd[1]: libpod-conmon-d60cfbba8c19f556eca237136d962f9af9f1b674e0403566acf1bd34f8b684ea.scope: Deactivated successfully.
Oct 11 04:13:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 11 04:13:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 11 04:13:42 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 11 04:13:42 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.553974255 +0000 UTC m=+0.064568448 container create 45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:13:42 np0005481065 systemd[1]: Started libpod-conmon-45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519.scope.
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.526422937 +0000 UTC m=+0.037017220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.64714748 +0000 UTC m=+0.157741753 container init 45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.657679772 +0000 UTC m=+0.168273965 container start 45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.660978626 +0000 UTC m=+0.171572899 container attach 45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:13:42 np0005481065 adoring_euclid[104004]: 167 167
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.665401192 +0000 UTC m=+0.175995455 container died 45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:13:42 np0005481065 systemd[1]: libpod-45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519.scope: Deactivated successfully.
Oct 11 04:13:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Oct 11 04:13:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0709ed392b7a050f351fedc38ad42c3624d4707309ea1a2eeb4d0c5ca76f2627-merged.mount: Deactivated successfully.
Oct 11 04:13:42 np0005481065 podman[103988]: 2025-10-11 08:13:42.712361596 +0000 UTC m=+0.222955799 container remove 45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:13:42 np0005481065 systemd[1]: libpod-conmon-45b1af1553b68857024fdb0f2cebbb972138439dc9a32e69d272440906a89519.scope: Deactivated successfully.
Oct 11 04:13:42 np0005481065 podman[104029]: 2025-10-11 08:13:42.929538149 +0000 UTC m=+0.050992670 container create 5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:13:42 np0005481065 systemd[1]: Started libpod-conmon-5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e.scope.
Oct 11 04:13:43 np0005481065 podman[104029]: 2025-10-11 08:13:42.907860849 +0000 UTC m=+0.029315400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:13:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:13:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd656fe0cf29ff2345ee6dad7c2c43c6a8e8fead30a4fdfbdf020b160b575f56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd656fe0cf29ff2345ee6dad7c2c43c6a8e8fead30a4fdfbdf020b160b575f56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd656fe0cf29ff2345ee6dad7c2c43c6a8e8fead30a4fdfbdf020b160b575f56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd656fe0cf29ff2345ee6dad7c2c43c6a8e8fead30a4fdfbdf020b160b575f56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:13:43 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 11 04:13:43 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 11 04:13:43 np0005481065 podman[104029]: 2025-10-11 08:13:43.169948421 +0000 UTC m=+0.291403012 container init 5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:13:43 np0005481065 podman[104029]: 2025-10-11 08:13:43.181797189 +0000 UTC m=+0.303251730 container start 5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:13:43 np0005481065 podman[104029]: 2025-10-11 08:13:43.23133098 +0000 UTC m=+0.352785591 container attach 5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kepler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 11 04:13:43 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 11 04:13:43 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 11 04:13:44 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 11 04:13:44 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 11 04:13:44 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 11 04:13:44 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]: {
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "osd_id": 2,
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "type": "bluestore"
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:    },
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "osd_id": 0,
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "type": "bluestore"
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:    },
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "osd_id": 1,
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:        "type": "bluestore"
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]:    }
Oct 11 04:13:44 np0005481065 adoring_kepler[104046]: }
Oct 11 04:13:44 np0005481065 systemd[1]: libpod-5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e.scope: Deactivated successfully.
Oct 11 04:13:44 np0005481065 podman[104029]: 2025-10-11 08:13:44.302562149 +0000 UTC m=+1.424016720 container died 5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:13:44 np0005481065 systemd[1]: libpod-5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e.scope: Consumed 1.128s CPU time.
Oct 11 04:13:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fd656fe0cf29ff2345ee6dad7c2c43c6a8e8fead30a4fdfbdf020b160b575f56-merged.mount: Deactivated successfully.
Oct 11 04:13:44 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.10 deep-scrub starts
Oct 11 04:13:44 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.10 deep-scrub ok
Oct 11 04:13:44 np0005481065 podman[104029]: 2025-10-11 08:13:44.382789239 +0000 UTC m=+1.504243790 container remove 5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:13:44 np0005481065 systemd[1]: libpod-conmon-5eecb190bb307f34b0a8edf008229cf8f9bd8e4d4186b8781d363bac9e333c5e.scope: Deactivated successfully.
Oct 11 04:13:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:13:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:13:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:44 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bd2b801f-3715-4bf5-b101-a290fc595fc4 does not exist
Oct 11 04:13:44 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev be5ce13a-170f-421a-b954-682ab88d85be does not exist
Oct 11 04:13:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:13:46 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct 11 04:13:46 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct 11 04:13:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:47 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 11 04:13:47 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 11 04:13:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:49 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 11 04:13:49 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 11 04:13:49 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 11 04:13:49 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 11 04:13:49 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 11 04:13:49 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 11 04:13:50 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct 11 04:13:50 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct 11 04:13:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:51 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 11 04:13:51 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 11 04:13:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 11 04:13:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 11 04:13:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:53 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct 11 04:13:53 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct 11 04:13:54 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 11 04:13:54 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 11 04:13:54 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct 11 04:13:54 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:13:54
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:13:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:13:55 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Oct 11 04:13:55 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Oct 11 04:13:56 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 11 04:13:56 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 11 04:13:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:57 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 11 04:13:57 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 11 04:13:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:13:58 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 11 04:13:58 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 11 04:13:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:13:59 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 11 04:13:59 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 11 04:14:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 11 04:14:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 11 04:14:00 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 11 04:14:00 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 11 04:14:00 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev c453f3ee-961f-4b9f-bb81-a5327c179530 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:14:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 11 04:14:01 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev d708f677-e975-4483-a949-1e8d90253419 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v141: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 11 04:14:02 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev d7e9f13c-2e94-4637-97e9-72a2421a8228 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:02 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 57 pg[8.0( v 47'4 (0'0,47'4] local-lis/les=46/47 n=4 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=15.063151360s) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 47'3 mlcod 47'3 active pruub 137.660705566s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:02 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 57 pg[9.0( v 54'385 (0'0,54'385] local-lis/les=48/49 n=177 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=57 pruub=9.082973480s) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 54'384 mlcod 54'384 active pruub 131.680633545s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:02 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 57 pg[8.0( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=15.063151360s) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 47'3 mlcod 0'0 unknown pruub 137.660705566s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 57 pg[9.0( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=57 pruub=9.082973480s) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 54'384 mlcod 0'0 unknown pruub 131.680633545s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 11 04:14:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 11 04:14:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 11 04:14:03 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] update: starting ev c3f3bc31-9d02-4863-b34b-5f202ae556ff (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev c453f3ee-961f-4b9f-bb81-a5327c179530 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event c453f3ee-961f-4b9f-bb81-a5327c179530 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev d708f677-e975-4483-a949-1e8d90253419 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event d708f677-e975-4483-a949-1e8d90253419 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev d7e9f13c-2e94-4637-97e9-72a2421a8228 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event d7e9f13c-2e94-4637-97e9-72a2421a8228 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] complete: finished ev c3f3bc31-9d02-4863-b34b-5f202ae556ff (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 11 04:14:03 np0005481065 ceph-mgr[74605]: [progress INFO root] Completed event c3f3bc31-9d02-4863-b34b-5f202ae556ff (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 11 04:14:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 11 04:14:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.15( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.14( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.14( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.16( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.17( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.17( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.16( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.15( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.10( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.11( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1( v 47'4 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.3( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.2( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.3( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.2( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.d( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.c( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.d( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.c( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.e( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.f( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.8( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.9( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.a( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.b( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.e( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.b( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.a( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.9( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.8( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.7( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.6( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.6( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.7( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.4( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.4( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.5( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1a( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1b( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.18( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.19( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.19( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1e( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.18( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1f( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1c( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1e( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1d( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1d( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1c( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.13( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.12( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.12( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.13( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.11( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.10( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.5( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1a( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1b( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.14( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.17( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.0( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 54'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.3( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.2( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.8( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.a( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.a( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.0( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 47'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.7( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.4( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1a( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.16( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.19( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1e( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.12( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.13( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.10( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[8.5( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:03 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 58 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v144: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:04 np0005481065 ceph-mgr[74605]: [progress INFO root] Writing back 16 completed events
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 11 04:14:04 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 11 04:14:04 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 59 pg[10.0( v 51'16 (0'0,51'16] local-lis/les=50/51 n=8 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=9.173049927s) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 51'15 mlcod 51'15 active pruub 127.852287292s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:04 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 59 pg[10.0( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=9.173049927s) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 51'15 mlcod 0'0 unknown pruub 127.852287292s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.094512939s) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 135.729873657s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.094512939s) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown pruub 135.729873657s@ mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1e deep-scrub starts
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1e deep-scrub ok
Oct 11 04:14:05 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 11 04:14:05 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 11 04:14:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 11 04:14:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 11 04:14:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.d( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1b( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1e( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.b( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.a( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.13( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.12( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.11( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.10( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1f( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1d( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1c( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1a( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.19( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.18( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.7( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.6( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.5( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.4( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.f( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.8( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.9( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.c( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.e( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.2( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.3( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.14( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.15( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.16( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.17( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.d( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=59/60 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1c( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1d( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.18( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.5( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.c( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.0( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 51'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.9( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.14( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.3( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:05 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 60 pg[10.15( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:07 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 11 04:14:07 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 11 04:14:07 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 11 04:14:07 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 11 04:14:07 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 11 04:14:07 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 11 04:14:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937674522s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.520706177s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.971129417s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554214478s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937712669s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.520858765s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937551498s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.520706177s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933996201s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.517211914s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937633514s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.520858765s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.971024513s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554214478s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933882713s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.517211914s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963424683s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.546783447s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937935829s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521408081s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937912941s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521408081s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937222481s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521026611s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937032700s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.520828247s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937196732s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521026611s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970379829s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554244995s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936991692s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.520828247s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970250130s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554275513s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970230103s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554275513s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963367462s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.546783447s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936904907s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521041870s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936878204s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521041870s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936826706s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521072388s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936791420s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521072388s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969920158s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554321289s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969893456s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554321289s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936663628s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521163940s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936614990s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521163940s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936637878s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521163940s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936593056s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521163940s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969786644s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554473877s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969770432s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554473877s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936418533s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521148682s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936390877s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521148682s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969748497s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554565430s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969730377s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554565430s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936285019s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521240234s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936258316s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521240234s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969328880s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554473877s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969298363s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554473877s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935868263s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521270752s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970341682s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554244995s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935836792s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521270752s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968752861s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554275513s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968927383s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554534912s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968898773s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554534912s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968708038s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554275513s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935370445s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521331787s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935338020s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521331787s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968598366s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554656982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968554497s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554656982s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935041428s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521316528s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935079575s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521362305s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935027122s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521377563s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935008049s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521362305s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.934948921s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521316528s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.934990883s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521377563s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968347549s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554916382s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968317986s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554916382s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968256950s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554977417s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.934635162s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521453857s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.934606552s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521453857s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.967903137s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.554977417s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.967836380s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554977417s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.934165955s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521423340s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.968220711s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.554977417s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933875084s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521423340s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933789253s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521453857s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933741570s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521484375s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933726311s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521453857s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=57/58 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933664322s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521484375s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933118820s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521209717s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.933084488s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521209717s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966610909s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.555068970s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966509819s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.555068970s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.932958603s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.521530151s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966484070s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.555068970s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.932922363s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.521560669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.932917595s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521530151s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.932868004s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.521560669s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970271111s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.559036255s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970214844s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.559036255s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966229439s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.555068970s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938383102s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.527343750s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938231468s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527343750s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.965855598s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.555084229s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938160896s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.527404785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.15( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938263893s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.527664185s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937932014s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527404785s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938199997s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527664185s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938060760s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.527618408s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.965537071s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.555160522s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.938022614s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527618408s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.965516090s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.555160522s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964785576s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.555206299s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964755058s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.555206299s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936782837s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.527633667s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936755180s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527633667s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964122772s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.555084229s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.2( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.967780113s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.558959961s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.967634201s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.558990479s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.967594147s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.558990479s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936186790s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.527587891s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936131477s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.527618408s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937101364s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.528656006s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.967529297s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.558959961s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936079979s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527618408s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936030388s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.527679443s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.935976982s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527679443s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.937076569s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.528656006s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936725616s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.528686523s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.d( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966925621s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.558944702s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936691284s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.528686523s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966892242s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.558944702s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966837883s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 141.558959961s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936131477s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.527587891s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.966781616s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.558959961s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936504364s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 139.528945923s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936457634s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.528945923s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936244011s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 139.528823853s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.936208725s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.528823853s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964798927s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704269409s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964715958s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704208374s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.d( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.958611488s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 135.698120117s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964754105s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704269409s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964674950s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704208374s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.d( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.958560944s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 135.698120117s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964245796s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704254150s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964202881s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704254150s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964199066s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704315186s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964489937s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704635620s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964453697s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704635620s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964138985s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704315186s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964196205s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704666138s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.964166641s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704666138s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963737488s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704727173s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963687897s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704742432s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963657379s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704727173s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963637352s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704742432s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963592529s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704757690s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963557243s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704757690s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963610649s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.704986572s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963551521s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.704986572s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963470459s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705062866s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963397026s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705017090s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.9( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963446617s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 135.705078125s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963430405s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705062866s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963362694s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705017090s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.9( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963402748s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 135.705078125s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963118553s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705017090s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963074684s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705017090s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.e( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963128090s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 135.705093384s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963024139s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705123901s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.e( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.963074684s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 135.705093384s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962989807s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705123901s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.14( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962896347s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 135.705123901s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.14( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962835312s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 135.705123901s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.15( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962732315s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 135.705184937s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.15( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962691307s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 135.705184937s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962637901s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705184937s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.6( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962602615s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705184937s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962357521s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705184937s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962318420s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 135.705139160s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962156296s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705184937s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.962075233s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.705139160s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.1c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.9( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[8.11( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.f( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.b( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.e( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.11( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.10( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.1f( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.1d( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[8.1a( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.f( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.7( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.9( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.4( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.1( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 61 pg[10.17( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 11 04:14:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 11 04:14:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 11 04:14:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:14:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:14:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 11 04:14:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:14:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 11 04:14:10 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=61/62 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=61/62 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.15( v 60'17 lc 51'5 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.d( v 60'17 lc 51'9 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.14( v 60'17 lc 51'13 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 62 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.e( v 60'17 lc 51'7 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[10.9( v 60'17 lc 51'15 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 62 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 11 04:14:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 11 04:14:10 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 11 04:14:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 11 04:14:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 04:14:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 11 04:14:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 04:14:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 11 04:14:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 11 04:14:11 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 63 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.428942680s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078628540s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.428731918s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078552246s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.428644180s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078552246s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.420660019s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.070343018s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.428691864s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078628540s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.420117378s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.070343018s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427746773s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078033447s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427659988s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078033447s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427974701s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078628540s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427915573s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078628540s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427549362s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078430176s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427468300s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078399658s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427466393s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078430176s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427333832s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078338623s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427379608s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078399658s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427248955s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078521729s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427270889s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078338623s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.418828011s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.070343018s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.426605225s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078125000s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.418766975s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.070343018s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.426520348s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078125000s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 64 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.427173615s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078521729s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 11 04:14:12 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.802893639s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078872681s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.802791595s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078872681s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.802194595s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078735352s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.802112579s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078735352s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.802025795s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078781128s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.801971436s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078781128s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.801536560s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078506470s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.801682472s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 147.078720093s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.801472664s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078506470s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:12 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.801640511s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.078720093s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 65 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 3 active+remapped, 11 peering, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8/213 objects misplaced (3.756%); 595 B/s, 23 objects/s recovering
Oct 11 04:14:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 11 04:14:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 11 04:14:13 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 11 04:14:13 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 66 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:13 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 66 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:13 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 66 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:13 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 66 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:13 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 66 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:14 np0005481065 python3[104168]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.361714421 +0000 UTC m=+0.070945865 container create 683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac (image=quay.io/ceph/ceph:v18, name=determined_shannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:14:14 np0005481065 systemd[1]: Started libpod-conmon-683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac.scope.
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.333121489 +0000 UTC m=+0.042352963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:14:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442be5a7dcddd400651f734f9aca5386ba9b4dc21bb69790816c90ba0057b7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442be5a7dcddd400651f734f9aca5386ba9b4dc21bb69790816c90ba0057b7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.461636126 +0000 UTC m=+0.170867570 container init 683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac (image=quay.io/ceph/ceph:v18, name=determined_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:14:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.477445264 +0000 UTC m=+0.186676668 container start 683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac (image=quay.io/ceph/ceph:v18, name=determined_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.480599411 +0000 UTC m=+0.189830905 container attach 683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac (image=quay.io/ceph/ceph:v18, name=determined_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:14:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 11 04:14:14 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 11 04:14:14 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 11 04:14:14 np0005481065 determined_shannon[104185]: could not fetch user info: no user info saved
Oct 11 04:14:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 3 active+remapped, 11 peering, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8/213 objects misplaced (3.756%); 556 B/s, 21 objects/s recovering
Oct 11 04:14:14 np0005481065 systemd[1]: libpod-683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac.scope: Deactivated successfully.
Oct 11 04:14:14 np0005481065 conmon[104185]: conmon 683d1c4841d393f86f59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac.scope/container/memory.events
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.717164859 +0000 UTC m=+0.426396293 container died 683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac (image=quay.io/ceph/ceph:v18, name=determined_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5442be5a7dcddd400651f734f9aca5386ba9b4dc21bb69790816c90ba0057b7e-merged.mount: Deactivated successfully.
Oct 11 04:14:14 np0005481065 podman[104169]: 2025-10-11 08:14:14.767038239 +0000 UTC m=+0.476269683 container remove 683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac (image=quay.io/ceph/ceph:v18, name=determined_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:14 np0005481065 systemd[1]: libpod-conmon-683d1c4841d393f86f597743037b1bca197099687729298083784e9e456f7aac.scope: Deactivated successfully.
Oct 11 04:14:15 np0005481065 python3[104307]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.241713347 +0000 UTC m=+0.053385589 container create 8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee (image=quay.io/ceph/ceph:v18, name=naughty_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:14:15 np0005481065 systemd[1]: Started libpod-conmon-8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee.scope.
Oct 11 04:14:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1154b1657ed0a283f0428e12fa4e83f3c1736a36e9192f3cb92114289d1cf83a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1154b1657ed0a283f0428e12fa4e83f3c1736a36e9192f3cb92114289d1cf83a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.223138942 +0000 UTC m=+0.034811164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.325267949 +0000 UTC m=+0.136940171 container init 8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee (image=quay.io/ceph/ceph:v18, name=naughty_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.333323802 +0000 UTC m=+0.144996024 container start 8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee (image=quay.io/ceph/ceph:v18, name=naughty_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.336995794 +0000 UTC m=+0.148668026 container attach 8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee (image=quay.io/ceph/ceph:v18, name=naughty_rubin, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]: {
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "user_id": "openstack",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "display_name": "openstack",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "email": "",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "suspended": 0,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "max_buckets": 1000,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "subusers": [],
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "keys": [
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        {
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:            "user": "openstack",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:            "access_key": "4MYHD7JPUDZ728QHF5MR",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:            "secret_key": "6LC8DYSrrs8CbR0R0TB9LewOELcCg2C0pVI4WgW7"
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        }
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    ],
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "swift_keys": [],
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "caps": [],
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "op_mask": "read, write, delete",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "default_placement": "",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "default_storage_class": "",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "placement_tags": [],
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "bucket_quota": {
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "enabled": false,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "check_on_raw": false,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "max_size": -1,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "max_size_kb": 0,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "max_objects": -1
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    },
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "user_quota": {
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "enabled": false,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "check_on_raw": false,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "max_size": -1,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "max_size_kb": 0,
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:        "max_objects": -1
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    },
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "temp_url_keys": [],
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "type": "rgw",
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]:    "mfa_ids": []
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]: }
Oct 11 04:14:15 np0005481065 naughty_rubin[104323]: 
Oct 11 04:14:15 np0005481065 systemd[1]: libpod-8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee.scope: Deactivated successfully.
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.573737596 +0000 UTC m=+0.385409838 container died 8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee (image=quay.io/ceph/ceph:v18, name=naughty_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:14:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1154b1657ed0a283f0428e12fa4e83f3c1736a36e9192f3cb92114289d1cf83a-merged.mount: Deactivated successfully.
Oct 11 04:14:15 np0005481065 podman[104308]: 2025-10-11 08:14:15.616416707 +0000 UTC m=+0.428088949 container remove 8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee (image=quay.io/ceph/ceph:v18, name=naughty_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:14:15 np0005481065 systemd[1]: libpod-conmon-8597df6732e601a790195d42d2292cd0c0ccb6dc4c660f98c0cf23466bb5b2ee.scope: Deactivated successfully.
Oct 11 04:14:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 3 active+remapped, 11 peering, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8/213 objects misplaced (3.756%); 390 B/s, 15 objects/s recovering
Oct 11 04:14:17 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 11 04:14:17 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 11 04:14:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 306 B/s wr, 2 op/s; 349 B/s, 14 objects/s recovering
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 11 04:14:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 11 04:14:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 11 04:14:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 11 04:14:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 11 04:14:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Oct 11 04:14:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Oct 11 04:14:20 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 11 04:14:20 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 11 04:14:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s; 13 B/s, 0 objects/s recovering
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 11 04:14:20 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 11 04:14:21 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 11 04:14:21 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 11 04:14:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 11 04:14:22 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 11 04:14:22 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 11 04:14:22 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 11 04:14:22 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s; 13 B/s, 0 objects/s recovering
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 11 04:14:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 11 04:14:23 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Oct 11 04:14:23 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok
Oct 11 04:14:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 11 04:14:24 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 11 04:14:24 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 11 04:14:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.069261551s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 155.521301270s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.069143295s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 155.521697998s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.069068909s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.521697998s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.069064140s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 155.521896362s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.068995476s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.521896362s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.074383736s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 155.527694702s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.074152946s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.527694702s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 70 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=10.067797661s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.521301270s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 11 04:14:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 11 04:14:25 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:25 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 71 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.776575089s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 active pruub 161.625411987s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.776525497s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 161.625411987s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.776096344s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 active pruub 161.625274658s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.776064873s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 161.625274658s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.775970459s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 active pruub 161.625457764s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.775940895s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 161.625457764s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.775887489s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 active pruub 161.625564575s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 72 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=9.775851250s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 161.625564575s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:26 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=71/72 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:26 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=71/72 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:26 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=71/72 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:26 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=71/72 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Oct 11 04:14:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 11 04:14:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 11 04:14:27 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=71/72 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.245409012s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 162.523941040s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=71/72 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.245392799s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 162.523956299s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=71/72 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.245319366s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.523941040s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=71/72 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.245308876s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.523956299s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=71/72 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.245164871s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 162.523910522s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=71/72 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.245057106s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.523910522s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=71/72 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.243618965s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 162.523941040s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=71/72 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.243057251s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.523941040s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 73 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 11 04:14:28 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Oct 11 04:14:28 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Oct 11 04:14:28 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Oct 11 04:14:28 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Oct 11 04:14:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 11 04:14:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 11 04:14:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 11 04:14:28 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 74 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 74 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 74 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 74 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 74 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 74 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 74 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 74 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 4 unknown, 4 active+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 11 04:14:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 11 04:14:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.001062393s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 169.629989624s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.001280785s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 169.630233765s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.000844955s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 169.630279541s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.000786781s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 169.630233765s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.000615120s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 169.630279541s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.000229836s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 169.629989624s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.999664307s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 169.629959106s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.999320984s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 169.629959106s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:29 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:30 np0005481065 systemd-logind[819]: New session 35 of user zuul.
Oct 11 04:14:30 np0005481065 systemd[1]: Started Session 35 of User zuul.
Oct 11 04:14:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 11 04:14:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 11 04:14:30 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 11 04:14:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 11 04:14:30 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 76 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:30 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 76 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:30 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 76 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:30 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 76 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=75/76 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 11 04:14:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 4 unknown, 4 active+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:31 np0005481065 python3.9[104571]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:14:31 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct 11 04:14:31 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct 11 04:14:32 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 11 04:14:32 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 11 04:14:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 810 B/s wr, 26 op/s; 217 B/s, 7 objects/s recovering
Oct 11 04:14:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 11 04:14:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 04:14:33 np0005481065 python3.9[104789]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:14:33 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 11 04:14:33 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 11 04:14:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 11 04:14:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 11 04:14:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 04:14:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 11 04:14:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 11 04:14:33 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 77 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=10.183312416s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 163.522186279s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:33 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 77 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=10.183232307s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.522186279s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:33 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 77 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=10.183130264s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 163.522430420s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:33 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 77 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=10.183022499s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.522430420s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:33 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 77 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:33 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 77 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:34 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 11 04:14:34 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 11 04:14:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 682 B/s wr, 22 op/s; 183 B/s, 6 objects/s recovering
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 11 04:14:34 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 11 04:14:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[57,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[57,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[57,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[57,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:34 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 78 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=0 lpr=78 pi=[57,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:34 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 78 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=0 lpr=78 pi=[57,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:34 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 78 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=0 lpr=78 pi=[57,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:34 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 78 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] r=0 lpr=78 pi=[57,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:35 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct 11 04:14:35 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct 11 04:14:35 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 11 04:14:35 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 11 04:14:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 11 04:14:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 11 04:14:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 11 04:14:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 11 04:14:36 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 79 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=78/79 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[57,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 79 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=78/79 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[57,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:36 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 11 04:14:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 682 B/s wr, 22 op/s; 183 B/s, 6 objects/s recovering
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 80 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=78/79 n=6 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80 pruub=15.637723923s) [2] async=[2] r=-1 lpr=80 pi=[57,80)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 172.085571289s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 80 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=78/79 n=6 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80 pruub=15.637634277s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.085571289s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 80 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=78/79 n=5 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80 pruub=15.637300491s) [2] async=[2] r=-1 lpr=80 pi=[57,80)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 172.085540771s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 80 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=78/79 n=5 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80 pruub=15.637230873s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.085540771s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 11 04:14:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 80 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 80 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 80 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 80 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 11 04:14:37 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 11 04:14:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 11 04:14:37 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 11 04:14:37 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 81 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=80/81 n=6 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:37 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 81 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=80/81 n=5 ec=57/48 lis/c=78/57 les/c/f=79/58/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Oct 11 04:14:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 11 04:14:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 11 04:14:40 np0005481065 systemd[1]: session-35.scope: Deactivated successfully.
Oct 11 04:14:40 np0005481065 systemd[1]: session-35.scope: Consumed 8.646s CPU time.
Oct 11 04:14:40 np0005481065 systemd-logind[819]: Session 35 logged out. Waiting for processes to exit.
Oct 11 04:14:40 np0005481065 systemd-logind[819]: Removed session 35.
Oct 11 04:14:40 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 11 04:14:40 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 11 04:14:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Oct 11 04:14:41 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 11 04:14:41 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 11 04:14:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 11 04:14:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 11 04:14:42 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 11 04:14:42 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 11 04:14:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 31 B/s, 1 objects/s recovering
Oct 11 04:14:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 11 04:14:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 04:14:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 11 04:14:43 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 11 04:14:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 04:14:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 11 04:14:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 11 04:14:43 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 11 04:14:43 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 11 04:14:44 np0005481065 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 04:14:44 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct 11 04:14:44 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct 11 04:14:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 11 04:14:44 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 11 04:14:44 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 11 04:14:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 04:14:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 11 04:14:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 11 04:14:45 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 11 04:14:45 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 35dd6a81-c926-4004-a94d-7134c94c8d02 does not exist
Oct 11 04:14:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 22c2b102-d255-475d-9ea4-e6abf786b507 does not exist
Oct 11 04:14:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3ee32986-bcf1-4969-9f59-2187a37b1dd3 does not exist
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:14:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.385511747 +0000 UTC m=+0.066016728 container create ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pascal, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:46 np0005481065 systemd[1]: Started libpod-conmon-ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8.scope.
Oct 11 04:14:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 11 04:14:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:14:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.356876945 +0000 UTC m=+0.037381976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.479780266 +0000 UTC m=+0.160285287 container init ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.491793299 +0000 UTC m=+0.172298240 container start ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pascal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.495309396 +0000 UTC m=+0.175814387 container attach ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:14:46 np0005481065 heuristic_pascal[105134]: 167 167
Oct 11 04:14:46 np0005481065 systemd[1]: libpod-ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8.scope: Deactivated successfully.
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.497965919 +0000 UTC m=+0.178470910 container died ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:14:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5ddd8ed77d048144e484130e279c7cfd53f2cf27c9cf7a2785aa44312a4be001-merged.mount: Deactivated successfully.
Oct 11 04:14:46 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Oct 11 04:14:46 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Oct 11 04:14:46 np0005481065 podman[105118]: 2025-10-11 08:14:46.539995303 +0000 UTC m=+0.220500254 container remove ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_pascal, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:14:46 np0005481065 systemd[1]: libpod-conmon-ec052e38dfdf902c22e259a94ade65232bd03b72e14e74a957a2a690d8f4a7a8.scope: Deactivated successfully.
Oct 11 04:14:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 11 04:14:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 04:14:46 np0005481065 podman[105159]: 2025-10-11 08:14:46.739110724 +0000 UTC m=+0.086719762 container create 6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:14:46 np0005481065 podman[105159]: 2025-10-11 08:14:46.694144689 +0000 UTC m=+0.041753777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:46 np0005481065 systemd[1]: Started libpod-conmon-6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2.scope.
Oct 11 04:14:46 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 83 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=83 pruub=13.015478134s) [2] r=-1 lpr=83 pi=[57,83)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 179.522277832s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:46 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 83 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=83 pruub=13.015401840s) [2] r=-1 lpr=83 pi=[57,83)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.522277832s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:46 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 83 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=83 pruub=13.020909309s) [2] r=-1 lpr=83 pi=[57,83)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 179.528488159s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:46 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 83 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=83 pruub=13.020862579s) [2] r=-1 lpr=83 pi=[57,83)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.528488159s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:46 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=83) [2] r=0 lpr=83 pi=[57,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:46 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=83) [2] r=0 lpr=83 pi=[57,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f34dec3ffe42d261b10eebcd89665d0c121c8df622c7d0598b970c98562541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f34dec3ffe42d261b10eebcd89665d0c121c8df622c7d0598b970c98562541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f34dec3ffe42d261b10eebcd89665d0c121c8df622c7d0598b970c98562541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f34dec3ffe42d261b10eebcd89665d0c121c8df622c7d0598b970c98562541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f34dec3ffe42d261b10eebcd89665d0c121c8df622c7d0598b970c98562541/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:46 np0005481065 podman[105159]: 2025-10-11 08:14:46.925330488 +0000 UTC m=+0.272939566 container init 6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:14:46 np0005481065 podman[105159]: 2025-10-11 08:14:46.940424225 +0000 UTC m=+0.288033243 container start 6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:14:46 np0005481065 podman[105159]: 2025-10-11 08:14:46.944363154 +0000 UTC m=+0.291972252 container attach 6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 11 04:14:47 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 11 04:14:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 04:14:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 11 04:14:47 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 11 04:14:47 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[57,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:47 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[57,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:47 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[57,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:47 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[57,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:47 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 84 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=0 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:47 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 84 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=0 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:47 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 84 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=0 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:47 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 84 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/58 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] r=0 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:48 np0005481065 nifty_ramanujan[105175]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:14:48 np0005481065 nifty_ramanujan[105175]: --> relative data size: 1.0
Oct 11 04:14:48 np0005481065 nifty_ramanujan[105175]: --> All data devices are unavailable
Oct 11 04:14:48 np0005481065 systemd[1]: libpod-6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2.scope: Deactivated successfully.
Oct 11 04:14:48 np0005481065 systemd[1]: libpod-6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2.scope: Consumed 1.070s CPU time.
Oct 11 04:14:48 np0005481065 podman[105204]: 2025-10-11 08:14:48.123450421 +0000 UTC m=+0.031449036 container died 6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:14:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-05f34dec3ffe42d261b10eebcd89665d0c121c8df622c7d0598b970c98562541-merged.mount: Deactivated successfully.
Oct 11 04:14:48 np0005481065 podman[105204]: 2025-10-11 08:14:48.185302211 +0000 UTC m=+0.093300786 container remove 6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:14:48 np0005481065 systemd[1]: libpod-conmon-6357975c55272e4637969f6b153bdcc8b7df255cf77a1569522b653f48bd30a2.scope: Deactivated successfully.
Oct 11 04:14:48 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Oct 11 04:14:48 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Oct 11 04:14:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 11 04:14:48 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 11 04:14:48 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 11 04:14:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 11 04:14:48 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 11 04:14:48 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 11 04:14:48 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 85 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:48 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 85 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=6 ec=57/48 lis/c=57/57 les/c/f=58/58/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:48 np0005481065 podman[105358]: 2025-10-11 08:14:48.99001578 +0000 UTC m=+0.066367481 container create e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:14:49 np0005481065 systemd[1]: Started libpod-conmon-e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae.scope.
Oct 11 04:14:49 np0005481065 podman[105358]: 2025-10-11 08:14:48.961041106 +0000 UTC m=+0.037392837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:49 np0005481065 podman[105358]: 2025-10-11 08:14:49.093126527 +0000 UTC m=+0.169478228 container init e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:14:49 np0005481065 podman[105358]: 2025-10-11 08:14:49.103543786 +0000 UTC m=+0.179895487 container start e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:14:49 np0005481065 podman[105358]: 2025-10-11 08:14:49.107497 +0000 UTC m=+0.183848781 container attach e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:14:49 np0005481065 cranky_clarke[105374]: 167 167
Oct 11 04:14:49 np0005481065 systemd[1]: libpod-e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae.scope: Deactivated successfully.
Oct 11 04:14:49 np0005481065 conmon[105374]: conmon e1fd59cec964f68b21ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae.scope/container/memory.events
Oct 11 04:14:49 np0005481065 podman[105358]: 2025-10-11 08:14:49.11304072 +0000 UTC m=+0.189392421 container died e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:14:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-af50a1df4fbf5ef41ab84d4043ccd89361f6c3e2fd082dc738f5e8ddb0f5b88a-merged.mount: Deactivated successfully.
Oct 11 04:14:49 np0005481065 podman[105358]: 2025-10-11 08:14:49.168684841 +0000 UTC m=+0.245036542 container remove e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:14:49 np0005481065 systemd[1]: libpod-conmon-e1fd59cec964f68b21ef8d787c9b9b5f4d7e96f93530146b81d1af9526a4d5ae.scope: Deactivated successfully.
Oct 11 04:14:49 np0005481065 podman[105399]: 2025-10-11 08:14:49.404183148 +0000 UTC m=+0.072626011 container create e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:14:49 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Oct 11 04:14:49 np0005481065 systemd[1]: Started libpod-conmon-e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5.scope.
Oct 11 04:14:49 np0005481065 podman[105399]: 2025-10-11 08:14:49.375460352 +0000 UTC m=+0.043903265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:49 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Oct 11 04:14:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 11 04:14:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 11 04:14:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 11 04:14:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:49 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 86 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:49 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 86 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:49 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 86 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:49 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 86 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:14:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f925a75b42d7021decd9c77f968357c95245ef0863be6d31e19a72700d09f3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:49 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 86 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86 pruub=14.986260414s) [2] async=[2] r=-1 lpr=86 pi=[57,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 184.100280762s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:49 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 86 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86 pruub=14.985932350s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.100280762s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:49 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 86 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=6 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86 pruub=14.986953735s) [2] async=[2] r=-1 lpr=86 pi=[57,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 184.101791382s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:14:49 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 86 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=6 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86 pruub=14.986842155s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.101791382s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:14:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f925a75b42d7021decd9c77f968357c95245ef0863be6d31e19a72700d09f3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f925a75b42d7021decd9c77f968357c95245ef0863be6d31e19a72700d09f3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f925a75b42d7021decd9c77f968357c95245ef0863be6d31e19a72700d09f3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:49 np0005481065 podman[105399]: 2025-10-11 08:14:49.528487016 +0000 UTC m=+0.196929889 container init e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tesla, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:14:49 np0005481065 podman[105399]: 2025-10-11 08:14:49.544198568 +0000 UTC m=+0.212641431 container start e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:14:49 np0005481065 podman[105399]: 2025-10-11 08:14:49.548938014 +0000 UTC m=+0.217380877 container attach e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tesla, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]: {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:    "0": [
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:        {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "devices": [
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "/dev/loop3"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            ],
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_name": "ceph_lv0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_size": "21470642176",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "name": "ceph_lv0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "tags": {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cluster_name": "ceph",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.crush_device_class": "",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.encrypted": "0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osd_id": "0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.type": "block",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.vdo": "0"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            },
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "type": "block",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "vg_name": "ceph_vg0"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:        }
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:    ],
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:    "1": [
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:        {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "devices": [
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "/dev/loop4"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            ],
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_name": "ceph_lv1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_size": "21470642176",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "name": "ceph_lv1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "tags": {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cluster_name": "ceph",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.crush_device_class": "",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.encrypted": "0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osd_id": "1",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.type": "block",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.vdo": "0"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            },
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "type": "block",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "vg_name": "ceph_vg1"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:        }
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:    ],
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:    "2": [
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:        {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "devices": [
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "/dev/loop5"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            ],
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_name": "ceph_lv2",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_size": "21470642176",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "name": "ceph_lv2",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "tags": {
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.cluster_name": "ceph",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.crush_device_class": "",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.encrypted": "0",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osd_id": "2",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.type": "block",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:                "ceph.vdo": "0"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            },
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "type": "block",
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:            "vg_name": "ceph_vg2"
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:        }
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]:    ]
Oct 11 04:14:50 np0005481065 adoring_tesla[105416]: }
Oct 11 04:14:50 np0005481065 systemd[1]: libpod-e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5.scope: Deactivated successfully.
Oct 11 04:14:50 np0005481065 podman[105399]: 2025-10-11 08:14:50.390489494 +0000 UTC m=+1.058932367 container died e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:14:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0f925a75b42d7021decd9c77f968357c95245ef0863be6d31e19a72700d09f3e-merged.mount: Deactivated successfully.
Oct 11 04:14:50 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 11 04:14:50 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 11 04:14:50 np0005481065 podman[105399]: 2025-10-11 08:14:50.47201302 +0000 UTC m=+1.140455893 container remove e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:50 np0005481065 systemd[1]: libpod-conmon-e54e25e51194670394c944e34bda22190f74cf0fb333e84525e824515a6821f5.scope: Deactivated successfully.
Oct 11 04:14:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 11 04:14:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 11 04:14:50 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 11 04:14:50 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 87 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:50 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 87 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=6 ec=57/48 lis/c=84/57 les/c/f=85/58/0 sis=86) [2] r=0 lpr=86 pi=[57,86)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:14:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.282251398 +0000 UTC m=+0.070125159 container create a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:14:51 np0005481065 systemd[1]: Started libpod-conmon-a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f.scope.
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.24828658 +0000 UTC m=+0.036160321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.369776787 +0000 UTC m=+0.157650548 container init a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.380202717 +0000 UTC m=+0.168076478 container start a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.384707896 +0000 UTC m=+0.172581718 container attach a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:14:51 np0005481065 zen_wescoff[105597]: 167 167
Oct 11 04:14:51 np0005481065 systemd[1]: libpod-a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f.scope: Deactivated successfully.
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.390778641 +0000 UTC m=+0.178652402 container died a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:14:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-24a9aa769c5ef0489cefb0cf4591aefaa81269d6a304b997a1de86340d0fb66e-merged.mount: Deactivated successfully.
Oct 11 04:14:51 np0005481065 podman[105580]: 2025-10-11 08:14:51.444395904 +0000 UTC m=+0.232269665 container remove a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:14:51 np0005481065 systemd[1]: libpod-conmon-a52a344dd7a75baef971c9a40302307ec1129db55aa26fdc4d77f10866e9349f.scope: Deactivated successfully.
Oct 11 04:14:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct 11 04:14:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct 11 04:14:51 np0005481065 podman[105621]: 2025-10-11 08:14:51.631853499 +0000 UTC m=+0.069353167 container create fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kowalevski, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:14:51 np0005481065 systemd[1]: Started libpod-conmon-fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9.scope.
Oct 11 04:14:51 np0005481065 podman[105621]: 2025-10-11 08:14:51.601856216 +0000 UTC m=+0.039355974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:14:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:14:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add1818700fdc75f9e5d457bf751b3a25fe586b3c536af88ac5527372491055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add1818700fdc75f9e5d457bf751b3a25fe586b3c536af88ac5527372491055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add1818700fdc75f9e5d457bf751b3a25fe586b3c536af88ac5527372491055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add1818700fdc75f9e5d457bf751b3a25fe586b3c536af88ac5527372491055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:14:51 np0005481065 podman[105621]: 2025-10-11 08:14:51.74204067 +0000 UTC m=+0.179540398 container init fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:14:51 np0005481065 podman[105621]: 2025-10-11 08:14:51.760411099 +0000 UTC m=+0.197910767 container start fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:14:51 np0005481065 podman[105621]: 2025-10-11 08:14:51.764567119 +0000 UTC m=+0.202066857 container attach fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:14:52 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1c deep-scrub starts
Oct 11 04:14:52 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.1c deep-scrub ok
Oct 11 04:14:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 389 B/s wr, 12 op/s; 20 B/s, 2 objects/s recovering
Oct 11 04:14:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 11 04:14:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]: {
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "osd_id": 2,
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "type": "bluestore"
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:    },
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "osd_id": 0,
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "type": "bluestore"
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:    },
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "osd_id": 1,
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:        "type": "bluestore"
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]:    }
Oct 11 04:14:52 np0005481065 clever_kowalevski[105639]: }
Oct 11 04:14:52 np0005481065 systemd[1]: libpod-fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9.scope: Deactivated successfully.
Oct 11 04:14:52 np0005481065 podman[105621]: 2025-10-11 08:14:52.885691053 +0000 UTC m=+1.323190731 container died fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:14:52 np0005481065 systemd[1]: libpod-fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9.scope: Consumed 1.130s CPU time.
Oct 11 04:14:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8add1818700fdc75f9e5d457bf751b3a25fe586b3c536af88ac5527372491055-merged.mount: Deactivated successfully.
Oct 11 04:14:52 np0005481065 podman[105621]: 2025-10-11 08:14:52.955228894 +0000 UTC m=+1.392728542 container remove fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:14:52 np0005481065 systemd[1]: libpod-conmon-fa1d8f9823660be7383c0b88f09b93b79720d549b0365fda4fc8994583f146b9.scope: Deactivated successfully.
Oct 11 04:14:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:53 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 20426ef1-07d1-437f-9b67-bff4763474be does not exist
Oct 11 04:14:53 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 801b38a6-e8ea-41c4-9639-2d1b66f4f19e does not exist
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:14:54 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Oct 11 04:14:54 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Oct 11 04:14:54 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 11 04:14:54 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 11 04:14:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:14:54
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', '.mgr']
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 10 op/s; 18 B/s, 1 objects/s recovering
Oct 11 04:14:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 11 04:14:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:14:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:14:55 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 11 04:14:55 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 11 04:14:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 11 04:14:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 11 04:14:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 04:14:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 11 04:14:55 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 11 04:14:55 np0005481065 systemd-logind[819]: New session 36 of user zuul.
Oct 11 04:14:55 np0005481065 systemd[1]: Started Session 36 of User zuul.
Oct 11 04:14:56 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 11 04:14:56 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 11 04:14:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 11 04:14:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 329 B/s wr, 10 op/s; 17 B/s, 1 objects/s recovering
Oct 11 04:14:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 11 04:14:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 11 04:14:56 np0005481065 python3.9[105889]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 11 04:14:57 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 11 04:14:57 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 11 04:14:57 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 11 04:14:57 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 11 04:14:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 11 04:14:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 11 04:14:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 11 04:14:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 11 04:14:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 11 04:14:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:14:58 np0005481065 python3.9[106063]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:14:58 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 11 04:14:58 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 11 04:14:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 11 04:14:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:14:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 11 04:14:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 11 04:14:59 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 11 04:14:59 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 11 04:14:59 np0005481065 python3.9[106219]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 11 04:14:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 11 04:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 11 04:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 11 04:14:59 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 11 04:15:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 11 04:15:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 11 04:15:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 11 04:15:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 11 04:15:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 11 04:15:00 np0005481065 python3.9[106372]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:15:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 11 04:15:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 11 04:15:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 11 04:15:01 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 11 04:15:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 11 04:15:01 np0005481065 python3.9[106526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:15:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 11 04:15:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 11 04:15:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 11 04:15:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 11 04:15:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 11 04:15:02 np0005481065 python3.9[106676]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:15:02 np0005481065 network[106693]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:15:02 np0005481065 network[106694]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:15:02 np0005481065 network[106695]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:15:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 11 04:15:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 11 04:15:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 11 04:15:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 11 04:15:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 11 04:15:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 11 04:15:03 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:15:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:15:04 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 93 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=13.387301445s) [2] r=-1 lpr=93 pi=[65,93)/1 crt=54'385 mlcod 0'0 active pruub 202.636520386s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:04 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 93 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=13.387200356s) [2] r=-1 lpr=93 pi=[65,93)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 202.636520386s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:04 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=93) [2] r=0 lpr=93 pi=[65,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:04 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 11 04:15:04 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 11 04:15:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 11 04:15:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 11 04:15:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 11 04:15:04 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 11 04:15:04 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:04 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:04 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 94 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=94) [2]/[0] r=0 lpr=94 pi=[65,94)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:04 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 94 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=94) [2]/[0] r=0 lpr=94 pi=[65,94)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 11 04:15:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 11 04:15:05 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Oct 11 04:15:05 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Oct 11 04:15:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 11 04:15:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 11 04:15:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 11 04:15:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 11 04:15:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 11 04:15:06 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 95 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=94/95 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[65,94)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:06 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Oct 11 04:15:06 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Oct 11 04:15:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 11 04:15:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 11 04:15:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 11 04:15:06 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 11 04:15:06 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 96 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=94/65 les/c/f=95/66/0 sis=96) [2] r=0 lpr=96 pi=[65,96)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:06 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 96 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=94/65 les/c/f=95/66/0 sis=96) [2] r=0 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:06 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 96 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=94/95 n=5 ec=57/48 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=15.282032013s) [2] async=[2] r=-1 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 54'385 active pruub 206.955017090s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:06 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 96 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=94/95 n=5 ec=57/48 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=15.281821251s) [2] r=-1 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 206.955017090s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 11 04:15:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 11 04:15:07 np0005481065 python3.9[106958]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:15:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 11 04:15:07 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 11 04:15:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 11 04:15:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 11 04:15:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 11 04:15:07 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 97 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=57/48 lis/c=94/65 les/c/f=95/66/0 sis=96) [2] r=0 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:08 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 11 04:15:08 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 11 04:15:08 np0005481065 python3.9[107108]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:15:08 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Oct 11 04:15:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 97 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=97 pruub=9.219448090s) [1] r=-1 lpr=97 pi=[65,97)/1 crt=54'385 mlcod 0'0 active pruub 202.633651733s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 97 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=97 pruub=9.219400406s) [1] r=-1 lpr=97 pi=[65,97)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 202.633651733s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:08 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Oct 11 04:15:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 04:15:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 11 04:15:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 11 04:15:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 11 04:15:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 11 04:15:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=98) [1]/[0] r=-1 lpr=98 pi=[65,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:08 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=98) [1]/[0] r=-1 lpr=98 pi=[65,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 98 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=98) [1]/[0] r=0 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:08 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 98 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=98) [1]/[0] r=0 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:09 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 11 04:15:09 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 11 04:15:09 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct 11 04:15:09 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct 11 04:15:09 np0005481065 python3.9[107262]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:15:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 11 04:15:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 11 04:15:09 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 11 04:15:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 99 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=98/99 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:10 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 11 04:15:10 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 11 04:15:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 11 04:15:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 11 04:15:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 04:15:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 11 04:15:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 11 04:15:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 100 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=98/99 n=5 ec=57/48 lis/c=98/65 les/c/f=99/66/0 sis=100 pruub=15.219787598s) [1] async=[1] r=-1 lpr=100 pi=[65,100)/1 crt=54'385 mlcod 54'385 active pruub 210.960510254s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:10 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 100 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=98/99 n=5 ec=57/48 lis/c=98/65 les/c/f=99/66/0 sis=100 pruub=15.219685555s) [1] r=-1 lpr=100 pi=[65,100)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 210.960510254s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:10 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 11 04:15:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 100 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=98/65 les/c/f=99/66/0 sis=100) [1] r=0 lpr=100 pi=[65,100)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:10 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 100 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=98/65 les/c/f=99/66/0 sis=100) [1] r=0 lpr=100 pi=[65,100)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:10 np0005481065 python3.9[107420]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:15:11 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 11 04:15:11 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 11 04:15:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 11 04:15:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 11 04:15:11 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 11 04:15:11 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 101 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=100/101 n=5 ec=57/48 lis/c=98/65 les/c/f=99/66/0 sis=100) [1] r=0 lpr=100 pi=[65,100)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:11 np0005481065 python3.9[107504]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 11 04:15:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 11 04:15:13 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 11 04:15:13 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 11 04:15:13 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 11 04:15:13 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 11 04:15:13 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 11 04:15:13 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 11 04:15:13 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 11 04:15:14 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 102 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=102 pruub=10.579116821s) [0] r=-1 lpr=102 pi=[73,102)/1 crt=54'385 mlcod 0'0 active pruub 198.432937622s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:14 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 102 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=102 pruub=10.579025269s) [0] r=-1 lpr=102 pi=[73,102)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 198.432937622s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:14 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=102) [0] r=0 lpr=102 pi=[73,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Oct 11 04:15:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Oct 11 04:15:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 11 04:15:14 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 11 04:15:14 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[73,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:14 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[73,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:14 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 103 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=103) [0]/[2] r=0 lpr=103 pi=[73,103)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:14 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 103 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=103) [0]/[2] r=0 lpr=103 pi=[73,103)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 11 04:15:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 11 04:15:15 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 11 04:15:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 11 04:15:16 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 11 04:15:16 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 11 04:15:16 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Oct 11 04:15:16 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Oct 11 04:15:16 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=103/104 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[73,103)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 11 04:15:16 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=103/104 n=5 ec=57/48 lis/c=103/73 les/c/f=104/74/0 sis=105 pruub=15.529418945s) [0] async=[0] r=-1 lpr=105 pi=[73,105)/1 crt=54'385 mlcod 54'385 active pruub 206.166885376s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:16 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=103/104 n=5 ec=57/48 lis/c=103/73 les/c/f=104/74/0 sis=105 pruub=15.529314995s) [0] r=-1 lpr=105 pi=[73,105)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 206.166885376s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 11 04:15:16 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=103/73 les/c/f=104/74/0 sis=105) [0] r=0 lpr=105 pi=[73,105)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:16 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=103/73 les/c/f=104/74/0 sis=105) [0] r=0 lpr=105 pi=[73,105)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 11 04:15:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 11 04:15:17 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 11 04:15:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 11 04:15:17 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 106 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=105/106 n=5 ec=57/48 lis/c=103/73 les/c/f=104/74/0 sis=105) [0] r=0 lpr=105 pi=[73,105)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 11 04:15:19 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 11 04:15:19 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 11 04:15:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 11 04:15:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 11 04:15:20 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 11 04:15:20 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 11 04:15:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 11 04:15:22 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 11 04:15:22 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 11 04:15:22 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 107 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=107 pruub=10.733969688s) [2] r=-1 lpr=107 pi=[65,107)/1 crt=54'385 mlcod 0'0 active pruub 218.637008667s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:22 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 107 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=107 pruub=10.733732224s) [2] r=-1 lpr=107 pi=[65,107)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 218.637008667s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 11 04:15:22 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=107) [2] r=0 lpr=107 pi=[65,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 11 04:15:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 11 04:15:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 11 04:15:23 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[65,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:23 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[65,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 11 04:15:23 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 108 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=108) [2]/[0] r=0 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:23 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 108 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=108) [2]/[0] r=0 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 11 04:15:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 11 04:15:25 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 109 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=108/109 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 11 04:15:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 11 04:15:26 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 11 04:15:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 11 04:15:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 110 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=108/109 n=5 ec=57/48 lis/c=108/65 les/c/f=109/66/0 sis=110 pruub=15.096616745s) [2] async=[2] r=-1 lpr=110 pi=[65,110)/1 crt=54'385 mlcod 54'385 active pruub 226.054824829s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:26 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 110 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=108/109 n=5 ec=57/48 lis/c=108/65 les/c/f=109/66/0 sis=110 pruub=15.096544266s) [2] r=-1 lpr=110 pi=[65,110)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 226.054824829s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 110 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=108/65 les/c/f=109/66/0 sis=110) [2] r=0 lpr=110 pi=[65,110)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:26 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 110 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=108/65 les/c/f=109/66/0 sis=110) [2] r=0 lpr=110 pi=[65,110)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 11 04:15:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 11 04:15:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 11 04:15:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 11 04:15:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 11 04:15:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 11 04:15:27 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 11 04:15:27 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 111 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=110/111 n=5 ec=57/48 lis/c=108/65 les/c/f=109/66/0 sis=110) [2] r=0 lpr=110 pi=[65,110)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 11 04:15:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 11 04:15:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 11 04:15:28 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 11 04:15:28 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 11 04:15:28 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 11 04:15:28 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 11 04:15:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Oct 11 04:15:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 11 04:15:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 11 04:15:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 11 04:15:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 11 04:15:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 11 04:15:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 11 04:15:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 11 04:15:29 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 11 04:15:29 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 11 04:15:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 11 04:15:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 11 04:15:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 11 04:15:30 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 11 04:15:30 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 11 04:15:30 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 112 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=112 pruub=15.918922424s) [0] r=-1 lpr=112 pi=[86,112)/1 crt=54'385 mlcod 0'0 active pruub 220.268218994s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:30 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 112 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=112 pruub=15.918838501s) [0] r=-1 lpr=112 pi=[86,112)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 220.268218994s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:30 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=112) [0] r=0 lpr=112 pi=[86,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct 11 04:15:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 11 04:15:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 11 04:15:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 11 04:15:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 11 04:15:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 11 04:15:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 11 04:15:31 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 11 04:15:31 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 113 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:31 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 113 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:31 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:31 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:31 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 11 04:15:31 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 11 04:15:32 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 114 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=113/114 n=5 ec=57/48 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 11 04:15:32 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=113/114 n=5 ec=57/48 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=15.415013313s) [0] async=[0] r=-1 lpr=115 pi=[86,115)/1 crt=54'385 mlcod 54'385 active pruub 221.851089478s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:32 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=113/114 n=5 ec=57/48 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=15.414893150s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 221.851089478s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:32 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:32 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 11 04:15:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 11 04:15:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 11 04:15:33 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Oct 11 04:15:33 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Oct 11 04:15:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 11 04:15:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 11 04:15:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 11 04:15:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 11 04:15:33 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 116 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=116 pruub=14.982356071s) [0] r=-1 lpr=116 pi=[73,116)/1 crt=54'385 mlcod 0'0 active pruub 222.430953979s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:33 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 116 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=116 pruub=14.982285500s) [0] r=-1 lpr=116 pi=[73,116)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 222.430953979s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:33 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:33 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 116 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=115/116 n=5 ec=57/48 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:34 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 11 04:15:34 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 11 04:15:34 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 11 04:15:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 11 04:15:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 11 04:15:34 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 11 04:15:34 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[73,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:34 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[73,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 117 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=117) [0]/[2] r=0 lpr=117 pi=[73,117)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:34 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 117 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=117) [0]/[2] r=0 lpr=117 pi=[73,117)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Oct 11 04:15:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 11 04:15:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:15:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 11 04:15:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 11 04:15:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:15:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 11 04:15:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 11 04:15:35 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 118 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=118 pruub=14.985039711s) [1] r=-1 lpr=118 pi=[75,118)/1 crt=54'385 mlcod 0'0 active pruub 224.452835083s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:35 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 118 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=118 pruub=14.984968185s) [1] r=-1 lpr=118 pi=[75,118)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 224.452835083s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:35 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 118 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=57/48 lis/c=73/73 les/c/f=74/74/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[73,117)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:35 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=118) [1] r=0 lpr=118 pi=[75,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:36 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 11 04:15:36 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 11 04:15:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 11 04:15:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 11 04:15:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 11 04:15:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 11 04:15:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[75,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:36 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[75,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 119 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=119) [1]/[2] r=0 lpr=119 pi=[75,119)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 119 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=119) [1]/[2] r=0 lpr=119 pi=[75,119)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=57/48 lis/c=117/73 les/c/f=118/74/0 sis=119 pruub=14.989190102s) [0] async=[0] r=-1 lpr=119 pi=[73,119)/1 crt=54'385 mlcod 54'385 active pruub 225.471755981s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:36 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=57/48 lis/c=117/73 les/c/f=118/74/0 sis=119 pruub=14.989033699s) [0] r=-1 lpr=119 pi=[73,119)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 225.471755981s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:36 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=117/73 les/c/f=118/74/0 sis=119) [0] r=0 lpr=119 pi=[73,119)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:36 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=117/73 les/c/f=118/74/0 sis=119) [0] r=0 lpr=119 pi=[73,119)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:37 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 11 04:15:37 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 11 04:15:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 11 04:15:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 11 04:15:37 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 11 04:15:37 np0005481065 ceph-osd[88249]: osd.0 pg_epoch: 120 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=119/120 n=5 ec=57/48 lis/c=117/73 les/c/f=118/74/0 sis=119) [0] r=0 lpr=119 pi=[73,119)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:38 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 120 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=119/120 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[75,119)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 11 04:15:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 11 04:15:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 11 04:15:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 11 04:15:38 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 121 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=119/75 les/c/f=120/76/0 sis=121) [1] r=0 lpr=121 pi=[75,121)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:38 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 121 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=119/120 n=5 ec=57/48 lis/c=119/75 les/c/f=120/76/0 sis=121 pruub=15.646996498s) [1] async=[1] r=-1 lpr=121 pi=[75,121)/1 crt=54'385 mlcod 54'385 active pruub 228.155197144s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 11 04:15:38 np0005481065 ceph-osd[90364]: osd.2 pg_epoch: 121 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=119/120 n=5 ec=57/48 lis/c=119/75 les/c/f=120/76/0 sis=121 pruub=15.646604538s) [1] r=-1 lpr=121 pi=[75,121)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 228.155197144s@ mbc={}] state<Start>: transitioning to Stray
Oct 11 04:15:38 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 121 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=119/75 les/c/f=120/76/0 sis=121) [1] r=0 lpr=121 pi=[75,121)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 11 04:15:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 11 04:15:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 11 04:15:39 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 11 04:15:39 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 11 04:15:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 11 04:15:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 11 04:15:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 11 04:15:39 np0005481065 ceph-osd[89278]: osd.1 pg_epoch: 122 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=121/122 n=5 ec=57/48 lis/c=119/75 les/c/f=120/76/0 sis=121) [1] r=0 lpr=121 pi=[75,121)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 11 04:15:40 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 11 04:15:40 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 11 04:15:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 11 04:15:41 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 11 04:15:41 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 11 04:15:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 11 04:15:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 11 04:15:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 11 04:15:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 11 04:15:45 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 11 04:15:45 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 11 04:15:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 11 04:15:47 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 11 04:15:47 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 11 04:15:47 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 11 04:15:47 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 11 04:15:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:48 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 11 04:15:48 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 11 04:15:48 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 11 04:15:48 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 11 04:15:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 11 04:15:49 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 11 04:15:49 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 11 04:15:50 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 11 04:15:50 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 11 04:15:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 11 04:15:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 11 04:15:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 11 04:15:52 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 11 04:15:52 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 11 04:15:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 11 04:15:52 np0005481065 python3.9[107799]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:15:53 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 11 04:15:53 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7a1b5402-f942-4ce4-8b7a-559c9aaf9c2f does not exist
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c077f2f7-2569-4aed-86c2-7b76dd7ffb4c does not exist
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 89f7c3d7-d4ac-4ec4-9719-2e7e827512cb does not exist
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:15:54
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.795120025 +0000 UTC m=+0.047689332 container create 926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:15:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:15:54 np0005481065 systemd[1]: Started libpod-conmon-926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5.scope.
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.776545813 +0000 UTC m=+0.029115100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:15:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:15:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.905044598 +0000 UTC m=+0.157613905 container init 926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.9150866 +0000 UTC m=+0.167655897 container start 926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.919021311 +0000 UTC m=+0.171590668 container attach 926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:15:54 np0005481065 distracted_maxwell[108372]: 167 167
Oct 11 04:15:54 np0005481065 systemd[1]: libpod-926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5.scope: Deactivated successfully.
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.922756636 +0000 UTC m=+0.175325933 container died 926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:54 np0005481065 python3.9[108329]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 11 04:15:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ef094adeba17586e4f285905faded8f0a6b225e2a0d005a8c1e6bf44461aecf5-merged.mount: Deactivated successfully.
Oct 11 04:15:54 np0005481065 podman[108356]: 2025-10-11 08:15:54.978110633 +0000 UTC m=+0.230679940 container remove 926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:15:55 np0005481065 systemd[1]: libpod-conmon-926ce8fe026e32df7d7fce71c004ab4fd1f124cb458214e9bae3e7d4d417b3d5.scope: Deactivated successfully.
Oct 11 04:15:55 np0005481065 podman[108420]: 2025-10-11 08:15:55.161588715 +0000 UTC m=+0.046235112 container create bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:15:55 np0005481065 systemd[1]: Started libpod-conmon-bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032.scope.
Oct 11 04:15:55 np0005481065 podman[108420]: 2025-10-11 08:15:55.143675111 +0000 UTC m=+0.028321488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:15:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb494bee922eafe3831119aa20fde23eb0b00cc88459a6e2f7fa91bec454e3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb494bee922eafe3831119aa20fde23eb0b00cc88459a6e2f7fa91bec454e3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb494bee922eafe3831119aa20fde23eb0b00cc88459a6e2f7fa91bec454e3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb494bee922eafe3831119aa20fde23eb0b00cc88459a6e2f7fa91bec454e3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb494bee922eafe3831119aa20fde23eb0b00cc88459a6e2f7fa91bec454e3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:55 np0005481065 podman[108420]: 2025-10-11 08:15:55.27230646 +0000 UTC m=+0.156952857 container init bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noether, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:15:55 np0005481065 podman[108420]: 2025-10-11 08:15:55.292381584 +0000 UTC m=+0.177027941 container start bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noether, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:15:55 np0005481065 podman[108420]: 2025-10-11 08:15:55.29650155 +0000 UTC m=+0.181147907 container attach bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:15:55 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 11 04:15:55 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 11 04:15:55 np0005481065 python3.9[108567]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 11 04:15:56 np0005481065 romantic_noether[108435]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:15:56 np0005481065 romantic_noether[108435]: --> relative data size: 1.0
Oct 11 04:15:56 np0005481065 romantic_noether[108435]: --> All data devices are unavailable
Oct 11 04:15:56 np0005481065 systemd[1]: libpod-bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032.scope: Deactivated successfully.
Oct 11 04:15:56 np0005481065 systemd[1]: libpod-bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032.scope: Consumed 1.050s CPU time.
Oct 11 04:15:56 np0005481065 podman[108420]: 2025-10-11 08:15:56.44074505 +0000 UTC m=+1.325391447 container died bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:15:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bbb494bee922eafe3831119aa20fde23eb0b00cc88459a6e2f7fa91bec454e3e-merged.mount: Deactivated successfully.
Oct 11 04:15:56 np0005481065 podman[108420]: 2025-10-11 08:15:56.519367182 +0000 UTC m=+1.404013579 container remove bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noether, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:56 np0005481065 systemd[1]: libpod-conmon-bc6057f197ce65609e6bd2253925e8cb31840e7fc36a8920bb6ae7bb8aa70032.scope: Deactivated successfully.
Oct 11 04:15:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:56 np0005481065 python3.9[108757]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.275089671 +0000 UTC m=+0.065928885 container create 4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_poitras, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:15:57 np0005481065 systemd[1]: Started libpod-conmon-4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52.scope.
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.244592534 +0000 UTC m=+0.035431808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:57 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Oct 11 04:15:57 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Oct 11 04:15:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.383150191 +0000 UTC m=+0.173989425 container init 4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.397182416 +0000 UTC m=+0.188021590 container start 4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_poitras, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.401726614 +0000 UTC m=+0.192565878 container attach 4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_poitras, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:15:57 np0005481065 stoic_poitras[109018]: 167 167
Oct 11 04:15:57 np0005481065 systemd[1]: libpod-4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52.scope: Deactivated successfully.
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.408013171 +0000 UTC m=+0.198852385 container died 4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_poitras, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:15:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8746ecd54dbdfb0f679284f71764839d6634a2cbb4b0192f962fad2105da1980-merged.mount: Deactivated successfully.
Oct 11 04:15:57 np0005481065 podman[108974]: 2025-10-11 08:15:57.464641774 +0000 UTC m=+0.255480988 container remove 4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:15:57 np0005481065 systemd[1]: libpod-conmon-4f21e47edb4385fed56d38483554789b7d62bcd190a7b1522af95796250afd52.scope: Deactivated successfully.
Oct 11 04:15:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:15:57 np0005481065 podman[109091]: 2025-10-11 08:15:57.687327199 +0000 UTC m=+0.065761611 container create f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:15:57 np0005481065 systemd[1]: Started libpod-conmon-f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848.scope.
Oct 11 04:15:57 np0005481065 python3.9[109085]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 11 04:15:57 np0005481065 podman[109091]: 2025-10-11 08:15:57.662290214 +0000 UTC m=+0.040724626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:15:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974439b4629b41d4182e93d08ffab53f1db369adeffb6e70bf97945c58eaf4e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974439b4629b41d4182e93d08ffab53f1db369adeffb6e70bf97945c58eaf4e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974439b4629b41d4182e93d08ffab53f1db369adeffb6e70bf97945c58eaf4e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974439b4629b41d4182e93d08ffab53f1db369adeffb6e70bf97945c58eaf4e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:15:57 np0005481065 podman[109091]: 2025-10-11 08:15:57.799881775 +0000 UTC m=+0.178316267 container init f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:15:57 np0005481065 podman[109091]: 2025-10-11 08:15:57.811997966 +0000 UTC m=+0.190432378 container start f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:15:57 np0005481065 podman[109091]: 2025-10-11 08:15:57.816899824 +0000 UTC m=+0.195334246 container attach f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:15:58 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 11 04:15:58 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 11 04:15:58 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 11 04:15:58 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 11 04:15:58 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 11 04:15:58 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 11 04:15:58 np0005481065 condescending_carson[109107]: {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:    "0": [
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:        {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "devices": [
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "/dev/loop3"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            ],
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_name": "ceph_lv0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_size": "21470642176",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "name": "ceph_lv0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "tags": {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cluster_name": "ceph",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.crush_device_class": "",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.encrypted": "0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osd_id": "0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.type": "block",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.vdo": "0"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            },
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "type": "block",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "vg_name": "ceph_vg0"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:        }
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:    ],
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:    "1": [
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:        {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "devices": [
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "/dev/loop4"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            ],
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_name": "ceph_lv1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_size": "21470642176",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "name": "ceph_lv1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "tags": {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cluster_name": "ceph",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.crush_device_class": "",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.encrypted": "0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osd_id": "1",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.type": "block",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.vdo": "0"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            },
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "type": "block",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "vg_name": "ceph_vg1"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:        }
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:    ],
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:    "2": [
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:        {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "devices": [
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "/dev/loop5"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            ],
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_name": "ceph_lv2",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_size": "21470642176",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "name": "ceph_lv2",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "tags": {
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.cluster_name": "ceph",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.crush_device_class": "",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.encrypted": "0",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osd_id": "2",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.type": "block",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:                "ceph.vdo": "0"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            },
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "type": "block",
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:            "vg_name": "ceph_vg2"
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:        }
Oct 11 04:15:58 np0005481065 condescending_carson[109107]:    ]
Oct 11 04:15:58 np0005481065 condescending_carson[109107]: }
Oct 11 04:15:58 np0005481065 systemd[1]: libpod-f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848.scope: Deactivated successfully.
Oct 11 04:15:58 np0005481065 podman[109091]: 2025-10-11 08:15:58.641330807 +0000 UTC m=+1.019765219 container died f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:15:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:15:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-974439b4629b41d4182e93d08ffab53f1db369adeffb6e70bf97945c58eaf4e1-merged.mount: Deactivated successfully.
Oct 11 04:15:58 np0005481065 podman[109091]: 2025-10-11 08:15:58.801019999 +0000 UTC m=+1.179454381 container remove f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:15:58 np0005481065 systemd[1]: libpod-conmon-f43f081fc21f18d810e219d9401e2489e788a95bc413d2a8d25e980b4fb41848.scope: Deactivated successfully.
Oct 11 04:15:59 np0005481065 python3.9[109305]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:15:59 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 11 04:15:59 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.595195141 +0000 UTC m=+0.074855466 container create 8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:15:59 np0005481065 systemd[1]: Started libpod-conmon-8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067.scope.
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.56706664 +0000 UTC m=+0.046727025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:15:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.686476989 +0000 UTC m=+0.166137314 container init 8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.693759694 +0000 UTC m=+0.173419989 container start 8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.697537011 +0000 UTC m=+0.177197386 container attach 8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:15:59 np0005481065 dazzling_brattain[109562]: 167 167
Oct 11 04:15:59 np0005481065 systemd[1]: libpod-8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067.scope: Deactivated successfully.
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.701182133 +0000 UTC m=+0.180842458 container died 8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:15:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f34c5ccd8c331afbacf0eba3bf4512b4aa5d690b41b4126de52843ae47437ff5-merged.mount: Deactivated successfully.
Oct 11 04:15:59 np0005481065 podman[109507]: 2025-10-11 08:15:59.752366753 +0000 UTC m=+0.232027088 container remove 8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:15:59 np0005481065 systemd[1]: libpod-conmon-8e274f1e24901930acce1b0098cab885645840fd898b23033a43e06c97790067.scope: Deactivated successfully.
Oct 11 04:15:59 np0005481065 python3.9[109606]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:15:59 np0005481065 podman[109614]: 2025-10-11 08:15:59.990731179 +0000 UTC m=+0.073278153 container create d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:16:00 np0005481065 systemd[1]: Started libpod-conmon-d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf.scope.
Oct 11 04:16:00 np0005481065 podman[109614]: 2025-10-11 08:15:59.962746102 +0000 UTC m=+0.045293136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:16:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:16:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69282a5fe4e64b82edaea5b21bd1d466e9872fcbec0f87030241c51ffd54c1cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69282a5fe4e64b82edaea5b21bd1d466e9872fcbec0f87030241c51ffd54c1cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69282a5fe4e64b82edaea5b21bd1d466e9872fcbec0f87030241c51ffd54c1cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69282a5fe4e64b82edaea5b21bd1d466e9872fcbec0f87030241c51ffd54c1cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:16:00 np0005481065 podman[109614]: 2025-10-11 08:16:00.105070145 +0000 UTC m=+0.187617139 container init d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 11 04:16:00 np0005481065 podman[109614]: 2025-10-11 08:16:00.117088464 +0000 UTC m=+0.199635418 container start d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:16:00 np0005481065 podman[109614]: 2025-10-11 08:16:00.123045641 +0000 UTC m=+0.205592655 container attach d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:16:00 np0005481065 python3.9[109712]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:16:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 11 04:16:00 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 11 04:16:00 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 11 04:16:00 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 11 04:16:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]: {
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "osd_id": 2,
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "type": "bluestore"
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:    },
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "osd_id": 0,
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "type": "bluestore"
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:    },
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "osd_id": 1,
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:        "type": "bluestore"
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]:    }
Oct 11 04:16:01 np0005481065 romantic_kalam[109640]: }
Oct 11 04:16:01 np0005481065 systemd[1]: libpod-d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf.scope: Deactivated successfully.
Oct 11 04:16:01 np0005481065 podman[109614]: 2025-10-11 08:16:01.201266263 +0000 UTC m=+1.283813247 container died d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:16:01 np0005481065 systemd[1]: libpod-d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf.scope: Consumed 1.087s CPU time.
Oct 11 04:16:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-69282a5fe4e64b82edaea5b21bd1d466e9872fcbec0f87030241c51ffd54c1cb-merged.mount: Deactivated successfully.
Oct 11 04:16:01 np0005481065 systemd[75935]: Created slice User Background Tasks Slice.
Oct 11 04:16:01 np0005481065 systemd[75935]: Starting Cleanup of User's Temporary Files and Directories...
Oct 11 04:16:01 np0005481065 systemd[75935]: Finished Cleanup of User's Temporary Files and Directories.
Oct 11 04:16:01 np0005481065 podman[109614]: 2025-10-11 08:16:01.282847528 +0000 UTC m=+1.365394512 container remove d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:16:01 np0005481065 systemd[1]: libpod-conmon-d7dc0d6067a086469795499bba67174e0ccc17f73d198d191e07f582b0defcbf.scope: Deactivated successfully.
Oct 11 04:16:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:16:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:16:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:16:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:16:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b648d8c5-31b9-40f9-932f-e195f3523997 does not exist
Oct 11 04:16:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d50b0b5e-3df1-42c3-8b70-dbaa8e4685d7 does not exist
Oct 11 04:16:01 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 11 04:16:01 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 11 04:16:01 np0005481065 python3.9[109956]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 11 04:16:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:16:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:16:02 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 11 04:16:02 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 11 04:16:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 11 04:16:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 11 04:16:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:02 np0005481065 python3.9[110109]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 11 04:16:03 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 11 04:16:03 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 11 04:16:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 11 04:16:03 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 11 04:16:03 np0005481065 python3.9[110262]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:16:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:16:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:04 np0005481065 python3.9[110415]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 11 04:16:05 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct 11 04:16:05 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct 11 04:16:05 np0005481065 python3.9[110567]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:16:06 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 11 04:16:06 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 11 04:16:06 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 11 04:16:06 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 11 04:16:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:07 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 11 04:16:07 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 11 04:16:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:07 np0005481065 python3.9[110720]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:16:08 np0005481065 python3.9[110872]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:16:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:09 np0005481065 python3.9[110950]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:16:09 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 11 04:16:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 11 04:16:09 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 11 04:16:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 11 04:16:09 np0005481065 python3.9[111102]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:16:10 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 11 04:16:10 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 11 04:16:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Oct 11 04:16:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Oct 11 04:16:10 np0005481065 python3.9[111180]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:16:10 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 11 04:16:10 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 11 04:16:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:11 np0005481065 python3.9[111332]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:16:12 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 11 04:16:12 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 11 04:16:12 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 11 04:16:12 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 11 04:16:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:13 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 11 04:16:13 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 11 04:16:13 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 11 04:16:13 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 11 04:16:13 np0005481065 python3.9[111483]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:16:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct 11 04:16:14 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct 11 04:16:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:14 np0005481065 python3.9[111635]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 11 04:16:15 np0005481065 python3.9[111785]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:16:16 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 11 04:16:16 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 11 04:16:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:17 np0005481065 python3.9[111937]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:16:17 np0005481065 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 11 04:16:17 np0005481065 systemd[1]: tuned.service: Deactivated successfully.
Oct 11 04:16:17 np0005481065 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 11 04:16:17 np0005481065 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 11 04:16:17 np0005481065 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 11 04:16:17 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 11 04:16:17 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 11 04:16:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:18 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts
Oct 11 04:16:18 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok
Oct 11 04:16:18 np0005481065 python3.9[112099]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 11 04:16:18 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 11 04:16:18 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 11 04:16:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:19 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 11 04:16:19 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 11 04:16:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 11 04:16:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 11 04:16:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 11 04:16:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 11 04:16:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:21 np0005481065 python3.9[112251]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:16:22 np0005481065 python3.9[112405]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:16:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:23 np0005481065 systemd[1]: session-36.scope: Deactivated successfully.
Oct 11 04:16:23 np0005481065 systemd[1]: session-36.scope: Consumed 1min 6.696s CPU time.
Oct 11 04:16:23 np0005481065 systemd-logind[819]: Session 36 logged out. Waiting for processes to exit.
Oct 11 04:16:23 np0005481065 systemd-logind[819]: Removed session 36.
Oct 11 04:16:23 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 11 04:16:23 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 11 04:16:24 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 11 04:16:24 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 11 04:16:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 11 04:16:26 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 11 04:16:26 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 11 04:16:26 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 11 04:16:26 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 11 04:16:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 11 04:16:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 11 04:16:27 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 11 04:16:27 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 11 04:16:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:29 np0005481065 systemd-logind[819]: New session 37 of user zuul.
Oct 11 04:16:29 np0005481065 systemd[1]: Started Session 37 of User zuul.
Oct 11 04:16:29 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 11 04:16:29 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 11 04:16:29 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Oct 11 04:16:29 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Oct 11 04:16:30 np0005481065 python3.9[112585]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:31 np0005481065 python3.9[112741]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 11 04:16:32 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 11 04:16:32 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 11 04:16:32 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Oct 11 04:16:32 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Oct 11 04:16:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:32 np0005481065 python3.9[112894]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:16:33 np0005481065 python3.9[112978]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:16:34 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 11 04:16:34 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 11 04:16:34 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 11 04:16:34 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 11 04:16:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:35 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 11 04:16:35 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 11 04:16:35 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 11 04:16:35 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 11 04:16:35 np0005481065 python3.9[113131]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:16:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:38 np0005481065 python3.9[113284]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:16:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 11 04:16:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 11 04:16:39 np0005481065 python3.9[113437]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:40 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 11 04:16:40 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 11 04:16:40 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 11 04:16:40 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 11 04:16:40 np0005481065 python3.9[113589]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 11 04:16:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:41 np0005481065 python3.9[113739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:16:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 11 04:16:42 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 11 04:16:42 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 11 04:16:42 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 11 04:16:42 np0005481065 python3.9[113897]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:16:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:44 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 11 04:16:44 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 11 04:16:44 np0005481065 python3.9[114050]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:16:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:45 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 11 04:16:45 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 11 04:16:45 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 11 04:16:45 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 11 04:16:46 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 11 04:16:46 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 11 04:16:46 np0005481065 python3.9[114337]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:16:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:47 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 11 04:16:47 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 11 04:16:47 np0005481065 python3.9[114487]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:16:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:48 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 11 04:16:48 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 11 04:16:48 np0005481065 python3.9[114641]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:16:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:49 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 11 04:16:49 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 11 04:16:50 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 11 04:16:50 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 11 04:16:50 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 11 04:16:50 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 11 04:16:50 np0005481065 python3.9[114794]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:16:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 11 04:16:51 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 11 04:16:52 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 11 04:16:52 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 11 04:16:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:52 np0005481065 python3.9[114947]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:16:53 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 11 04:16:53 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 11 04:16:53 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct 11 04:16:53 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct 11 04:16:53 np0005481065 python3.9[115101]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 11 04:16:54 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 11 04:16:54 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:16:54
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'images', 'vms']
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:16:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:16:54 np0005481065 systemd[1]: session-37.scope: Deactivated successfully.
Oct 11 04:16:54 np0005481065 systemd[1]: session-37.scope: Consumed 19.491s CPU time.
Oct 11 04:16:54 np0005481065 systemd-logind[819]: Session 37 logged out. Waiting for processes to exit.
Oct 11 04:16:54 np0005481065 systemd-logind[819]: Removed session 37.
Oct 11 04:16:55 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 11 04:16:55 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 11 04:16:56 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 11 04:16:56 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 11 04:16:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:16:57 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.19 deep-scrub starts
Oct 11 04:16:57 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.19 deep-scrub ok
Oct 11 04:16:57 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 11 04:16:57 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 11 04:16:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:16:58 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 11 04:16:58 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 11 04:16:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:00 np0005481065 systemd-logind[819]: New session 38 of user zuul.
Oct 11 04:17:00 np0005481065 systemd[1]: Started Session 38 of User zuul.
Oct 11 04:17:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:01 np0005481065 python3.9[115279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:17:01 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 11 04:17:01 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 11 04:17:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 11 04:17:02 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:17:02 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 91c03813-6747-40e0-9b57-192922b4574c does not exist
Oct 11 04:17:02 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bf5d58ad-5f71-4577-880d-c68236721dc5 does not exist
Oct 11 04:17:02 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev eb988412-36ff-4451-bd11-6a455ac37144 does not exist
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:17:02 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 11 04:17:02 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 11 04:17:02 np0005481065 python3.9[115553]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:17:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.135316415 +0000 UTC m=+0.057216256 container create 05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:03 np0005481065 systemd[1]: Started libpod-conmon-05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391.scope.
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.114646093 +0000 UTC m=+0.036545954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.231570047 +0000 UTC m=+0.153469908 container init 05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.245993844 +0000 UTC m=+0.167893715 container start 05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.249692614 +0000 UTC m=+0.171592475 container attach 05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:17:03 np0005481065 hopeful_darwin[115787]: 167 167
Oct 11 04:17:03 np0005481065 systemd[1]: libpod-05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391.scope: Deactivated successfully.
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.254727343 +0000 UTC m=+0.176627184 container died 05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1e9f8954f903680284d41967bc8644ed4b3942aa27a34b7bb9f414b224ca2dd4-merged.mount: Deactivated successfully.
Oct 11 04:17:03 np0005481065 podman[115749]: 2025-10-11 08:17:03.300584541 +0000 UTC m=+0.222484382 container remove 05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:03 np0005481065 systemd[1]: libpod-conmon-05af6ef71308719203b8da086878042b39481dbb1b4431c1d315cba5c0420391.scope: Deactivated successfully.
Oct 11 04:17:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:17:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:17:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:17:03 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 11 04:17:03 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 11 04:17:03 np0005481065 podman[115860]: 2025-10-11 08:17:03.526979068 +0000 UTC m=+0.065850022 container create 1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:17:03 np0005481065 systemd[1]: Started libpod-conmon-1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae.scope.
Oct 11 04:17:03 np0005481065 podman[115860]: 2025-10-11 08:17:03.498793113 +0000 UTC m=+0.037664107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:17:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db018d0856e6d23ad57448ba469b0e39ccf23d5ee3df33d7a8e722c88b5901a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db018d0856e6d23ad57448ba469b0e39ccf23d5ee3df33d7a8e722c88b5901a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db018d0856e6d23ad57448ba469b0e39ccf23d5ee3df33d7a8e722c88b5901a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db018d0856e6d23ad57448ba469b0e39ccf23d5ee3df33d7a8e722c88b5901a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db018d0856e6d23ad57448ba469b0e39ccf23d5ee3df33d7a8e722c88b5901a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:03 np0005481065 podman[115860]: 2025-10-11 08:17:03.629784544 +0000 UTC m=+0.168655498 container init 1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:17:03 np0005481065 podman[115860]: 2025-10-11 08:17:03.643295904 +0000 UTC m=+0.182166858 container start 1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:17:03 np0005481065 podman[115860]: 2025-10-11 08:17:03.647041605 +0000 UTC m=+0.185912529 container attach 1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:17:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:17:04 np0005481065 python3.9[115957]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:17:04 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Oct 11 04:17:04 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Oct 11 04:17:04 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 11 04:17:04 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 11 04:17:04 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 11 04:17:04 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 11 04:17:04 np0005481065 systemd-logind[819]: Session 38 logged out. Waiting for processes to exit.
Oct 11 04:17:04 np0005481065 systemd[1]: session-38.scope: Deactivated successfully.
Oct 11 04:17:04 np0005481065 systemd[1]: session-38.scope: Consumed 2.840s CPU time.
Oct 11 04:17:04 np0005481065 systemd-logind[819]: Removed session 38.
Oct 11 04:17:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:04 np0005481065 flamboyant_sanderson[115879]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:17:04 np0005481065 flamboyant_sanderson[115879]: --> relative data size: 1.0
Oct 11 04:17:04 np0005481065 flamboyant_sanderson[115879]: --> All data devices are unavailable
Oct 11 04:17:04 np0005481065 systemd[1]: libpod-1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae.scope: Deactivated successfully.
Oct 11 04:17:04 np0005481065 podman[115860]: 2025-10-11 08:17:04.811885323 +0000 UTC m=+1.350756297 container died 1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:17:04 np0005481065 systemd[1]: libpod-1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae.scope: Consumed 1.096s CPU time.
Oct 11 04:17:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-db018d0856e6d23ad57448ba469b0e39ccf23d5ee3df33d7a8e722c88b5901a1-merged.mount: Deactivated successfully.
Oct 11 04:17:04 np0005481065 podman[115860]: 2025-10-11 08:17:04.892394018 +0000 UTC m=+1.431264972 container remove 1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:04 np0005481065 systemd[1]: libpod-conmon-1a76e7efb177bd9e08be8c8e5567ae5c8e77836322defd1cc46ac6e792989dae.scope: Deactivated successfully.
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.656936896 +0000 UTC m=+0.066858742 container create ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:05 np0005481065 systemd[1]: Started libpod-conmon-ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0.scope.
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.629682518 +0000 UTC m=+0.039604434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.754065703 +0000 UTC m=+0.163987599 container init ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.765413989 +0000 UTC m=+0.175335865 container start ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_colden, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.76948961 +0000 UTC m=+0.179411536 container attach ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_colden, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:17:05 np0005481065 interesting_colden[116175]: 167 167
Oct 11 04:17:05 np0005481065 systemd[1]: libpod-ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0.scope: Deactivated successfully.
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.773633473 +0000 UTC m=+0.183555389 container died ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_colden, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:17:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bafbbf2d313491f8a33b0c48ec603d2f4b4e1f5a7e8f58ac3bc4e0d801eb07e6-merged.mount: Deactivated successfully.
Oct 11 04:17:05 np0005481065 podman[116161]: 2025-10-11 08:17:05.8305825 +0000 UTC m=+0.240504376 container remove ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:17:05 np0005481065 systemd[1]: libpod-conmon-ccace348268c59c5f2de6f0e5da634618ad190f9368c798b45a0f36f068f62b0.scope: Deactivated successfully.
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:06.02265271 +0000 UTC m=+0.051615120 container create c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:17:06 np0005481065 systemd[1]: Started libpod-conmon-c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d.scope.
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:05.994334121 +0000 UTC m=+0.023296581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:17:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d28cc81207f8661e21d25921420cfbb6a907d2d7140e6551b3e3fb4b38a75e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d28cc81207f8661e21d25921420cfbb6a907d2d7140e6551b3e3fb4b38a75e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d28cc81207f8661e21d25921420cfbb6a907d2d7140e6551b3e3fb4b38a75e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d28cc81207f8661e21d25921420cfbb6a907d2d7140e6551b3e3fb4b38a75e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:06.120379125 +0000 UTC m=+0.149341595 container init c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:06.135787931 +0000 UTC m=+0.164750341 container start c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:06.140291805 +0000 UTC m=+0.169254225 container attach c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:17:06 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 11 04:17:06 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 11 04:17:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]: {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:    "0": [
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:        {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "devices": [
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "/dev/loop3"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            ],
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_name": "ceph_lv0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_size": "21470642176",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "name": "ceph_lv0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "tags": {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cluster_name": "ceph",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.crush_device_class": "",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.encrypted": "0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osd_id": "0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.type": "block",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.vdo": "0"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            },
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "type": "block",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "vg_name": "ceph_vg0"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:        }
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:    ],
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:    "1": [
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:        {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "devices": [
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "/dev/loop4"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            ],
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_name": "ceph_lv1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_size": "21470642176",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "name": "ceph_lv1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "tags": {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cluster_name": "ceph",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.crush_device_class": "",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.encrypted": "0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osd_id": "1",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.type": "block",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.vdo": "0"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            },
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "type": "block",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "vg_name": "ceph_vg1"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:        }
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:    ],
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:    "2": [
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:        {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "devices": [
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "/dev/loop5"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            ],
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_name": "ceph_lv2",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_size": "21470642176",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "name": "ceph_lv2",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "tags": {
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.cluster_name": "ceph",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.crush_device_class": "",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.encrypted": "0",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osd_id": "2",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.type": "block",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:                "ceph.vdo": "0"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            },
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "type": "block",
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:            "vg_name": "ceph_vg2"
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:        }
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]:    ]
Oct 11 04:17:06 np0005481065 priceless_swartz[116219]: }
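[Editor's note] The JSON block emitted by the `priceless_swartz` container above resembles `ceph-volume lvm list --format json` output (run here inside a cephadm-launched container): a map of OSD id to the logical volumes backing it. A minimal Python sketch, assuming that format, to reduce such a report to an OSD-id-to-device mapping; the trimmed `sample` below is a hypothetical excerpt modeled on the log, not the full output:

```python
import json

# Hypothetical sample trimmed from the log above: ceph-volume lvm list
# --format json maps each OSD id to a list of LV records.
sample = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b"}}]
}
"""

def osd_devices(report: str) -> dict:
    """Return {osd_id: [backing devices]} from ceph-volume lvm list JSON."""
    data = json.loads(report)
    return {osd_id: sorted(dev for lv in lvs for dev in lv["devices"])
            for osd_id, lvs in data.items()}

print(osd_devices(sample))
```

In this log the mapping would be OSD 0 → /dev/loop3, 1 → /dev/loop4, 2 → /dev/loop5 (loop devices, consistent with a test deployment).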
Oct 11 04:17:06 np0005481065 systemd[1]: libpod-c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d.scope: Deactivated successfully.
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:06.879354679 +0000 UTC m=+0.908317069 container died c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:17:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2d28cc81207f8661e21d25921420cfbb6a907d2d7140e6551b3e3fb4b38a75e2-merged.mount: Deactivated successfully.
Oct 11 04:17:06 np0005481065 podman[116202]: 2025-10-11 08:17:06.956637709 +0000 UTC m=+0.985600129 container remove c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:17:06 np0005481065 systemd[1]: libpod-conmon-c636775238a9ff29bb52b5b20b662ad9f0851922775397799bcf38a152e2420d.scope: Deactivated successfully.
Oct 11 04:17:07 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 11 04:17:07 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 11 04:17:07 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 11 04:17:07 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 11 04:17:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.745099286 +0000 UTC m=+0.061660507 container create 0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_torvalds, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:07 np0005481065 systemd[1]: Started libpod-conmon-0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054.scope.
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.71685399 +0000 UTC m=+0.033415271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.860256648 +0000 UTC m=+0.176817899 container init 0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.870707257 +0000 UTC m=+0.187268488 container start 0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.874657624 +0000 UTC m=+0.191218905 container attach 0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:17:07 np0005481065 upbeat_torvalds[116401]: 167 167
Oct 11 04:17:07 np0005481065 systemd[1]: libpod-0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054.scope: Deactivated successfully.
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.87889411 +0000 UTC m=+0.195455341 container died 0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:17:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b7960a853d2a63e2623ccd9cf8f6d787292ff495b68d7687d82eb3df136dc0c0-merged.mount: Deactivated successfully.
Oct 11 04:17:07 np0005481065 podman[116384]: 2025-10-11 08:17:07.932203099 +0000 UTC m=+0.248764320 container remove 0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:17:07 np0005481065 systemd[1]: libpod-conmon-0c0a1ef2a820ff5be48a06ba5a23049250b555be83a8770e42d15e79fbd01054.scope: Deactivated successfully.
Oct 11 04:17:08 np0005481065 podman[116424]: 2025-10-11 08:17:08.167925222 +0000 UTC m=+0.061514553 container create 8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:17:08 np0005481065 systemd[1]: Started libpod-conmon-8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1.scope.
Oct 11 04:17:08 np0005481065 podman[116424]: 2025-10-11 08:17:08.142902161 +0000 UTC m=+0.036491532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:17:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:17:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ff2b78c68200b0f43f74d50149abca58e796ecc17627b7507350a8030df445/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ff2b78c68200b0f43f74d50149abca58e796ecc17627b7507350a8030df445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ff2b78c68200b0f43f74d50149abca58e796ecc17627b7507350a8030df445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ff2b78c68200b0f43f74d50149abca58e796ecc17627b7507350a8030df445/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:17:08 np0005481065 podman[116424]: 2025-10-11 08:17:08.268648916 +0000 UTC m=+0.162238307 container init 8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:17:08 np0005481065 podman[116424]: 2025-10-11 08:17:08.28633029 +0000 UTC m=+0.179919641 container start 8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:17:08 np0005481065 podman[116424]: 2025-10-11 08:17:08.29342481 +0000 UTC m=+0.187014201 container attach 8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.527303) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628527432, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7231, "num_deletes": 251, "total_data_size": 8821253, "memory_usage": 9072240, "flush_reason": "Manual Compaction"}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628570288, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7121642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7369, "table_properties": {"data_size": 7095288, "index_size": 17087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 76025, "raw_average_key_size": 23, "raw_value_size": 7032707, "raw_average_value_size": 2152, "num_data_blocks": 749, "num_entries": 3267, "num_filter_entries": 3267, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170207, "oldest_key_time": 1760170207, "file_creation_time": 1760170628, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 43056 microseconds, and 24447 cpu microseconds.
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.570367) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7121642 bytes OK
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.570394) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.572214) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.572238) EVENT_LOG_v1 {"time_micros": 1760170628572231, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.572266) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8789741, prev total WAL file size 8789741, number of live WAL files 2.
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.575444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6954KB) 13(53KB) 8(1944B)]
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628575618, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7178843, "oldest_snapshot_seqno": -1}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3083 keys, 7134171 bytes, temperature: kUnknown
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628622537, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7134171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7108238, "index_size": 17119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 74097, "raw_average_key_size": 24, "raw_value_size": 7047255, "raw_average_value_size": 2285, "num_data_blocks": 752, "num_entries": 3083, "num_filter_entries": 3083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760170628, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.622916) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7134171 bytes
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.624232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.7 rd, 151.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.8, 0.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3373, records dropped: 290 output_compression: NoCompression
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.624269) EVENT_LOG_v1 {"time_micros": 1760170628624250, "job": 4, "event": "compaction_finished", "compaction_time_micros": 47020, "compaction_time_cpu_micros": 30191, "output_level": 6, "num_output_files": 1, "total_output_size": 7134171, "num_input_records": 3373, "num_output_records": 3083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628626713, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628626845, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170628626905, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 11 04:17:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:17:08.575196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:17:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:09 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 11 04:17:09 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]: {
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "osd_id": 2,
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "type": "bluestore"
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:    },
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "osd_id": 0,
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "type": "bluestore"
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:    },
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "osd_id": 1,
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:        "type": "bluestore"
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]:    }
Oct 11 04:17:09 np0005481065 elastic_bartik[116440]: }
Oct 11 04:17:09 np0005481065 systemd[1]: libpod-8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1.scope: Deactivated successfully.
Oct 11 04:17:09 np0005481065 systemd[1]: libpod-8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1.scope: Consumed 1.134s CPU time.
Oct 11 04:17:09 np0005481065 podman[116475]: 2025-10-11 08:17:09.475526068 +0000 UTC m=+0.041396917 container died 8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:17:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-47ff2b78c68200b0f43f74d50149abca58e796ecc17627b7507350a8030df445-merged.mount: Deactivated successfully.
Oct 11 04:17:09 np0005481065 podman[116475]: 2025-10-11 08:17:09.544096789 +0000 UTC m=+0.109967568 container remove 8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:17:09 np0005481065 systemd[1]: libpod-conmon-8306a736a6795a236a6a6852ecdcf960b4357db8910c2c126a02adaa8f9e2db1.scope: Deactivated successfully.
Oct 11 04:17:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:17:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:17:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:17:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:17:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 69707dc8-88bb-4311-8db8-7d0f79e4dff0 does not exist
Oct 11 04:17:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dd2eb06f-405e-465e-81cb-f3b728cd6ab6 does not exist
Oct 11 04:17:10 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 11 04:17:10 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 11 04:17:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 11 04:17:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 11 04:17:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:17:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:17:10 np0005481065 systemd-logind[819]: New session 39 of user zuul.
Oct 11 04:17:10 np0005481065 systemd[1]: Started Session 39 of User zuul.
Oct 11 04:17:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:11 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct 11 04:17:11 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct 11 04:17:11 np0005481065 python3.9[116693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:17:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:13 np0005481065 python3.9[116847]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:17:13 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 11 04:17:13 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 11 04:17:14 np0005481065 python3.9[117003]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:17:14 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 11 04:17:14 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 11 04:17:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:15 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 11 04:17:15 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 11 04:17:15 np0005481065 python3.9[117087]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:17:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:17 np0005481065 python3.9[117240]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:17:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 11 04:17:19 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 11 04:17:19 np0005481065 python3.9[117435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:20 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Oct 11 04:17:20 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Oct 11 04:17:20 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 11 04:17:20 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 11 04:17:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 11 04:17:20 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 11 04:17:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:21 np0005481065 python3.9[117587]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:17:21 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 11 04:17:21 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 11 04:17:22 np0005481065 python3.9[117750]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:17:22 np0005481065 python3.9[117828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:23 np0005481065 python3.9[117980]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:17:24 np0005481065 python3.9[118058]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:17:24 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Oct 11 04:17:24 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Oct 11 04:17:24 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct 11 04:17:24 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:25 np0005481065 python3.9[118210]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:17:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 11 04:17:25 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 11 04:17:25 np0005481065 python3.9[118362]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:17:26 np0005481065 python3.9[118514]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:17:26 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 11 04:17:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:26 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 11 04:17:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 11 04:17:27 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 11 04:17:27 np0005481065 python3.9[118666]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:17:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:28 np0005481065 python3.9[118818]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:17:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:30 np0005481065 python3.9[118971]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:17:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 11 04:17:30 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 11 04:17:31 np0005481065 python3.9[119125]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:17:32 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 11 04:17:32 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 11 04:17:32 np0005481065 python3.9[119277]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:17:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:33 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 11 04:17:33 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 11 04:17:33 np0005481065 python3.9[119429]: ansible-service_facts Invoked
Oct 11 04:17:33 np0005481065 network[119446]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:17:33 np0005481065 network[119447]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:17:33 np0005481065 network[119448]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:17:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:34 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 11 04:17:34 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 11 04:17:35 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 11 04:17:35 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 11 04:17:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 11 04:17:39 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 11 04:17:39 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 11 04:17:40 np0005481065 ceph-osd[89278]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 11 04:17:40 np0005481065 python3.9[119903]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:17:40 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 11 04:17:40 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 11 04:17:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:42 np0005481065 python3.9[120056]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 11 04:17:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:43 np0005481065 python3.9[120208]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:17:44 np0005481065 python3.9[120286]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:45 np0005481065 python3.9[120438]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:17:45 np0005481065 python3.9[120516]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:47 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 11 04:17:47 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 11 04:17:47 np0005481065 python3.9[120668]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:48 np0005481065 python3.9[120820]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:17:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:49 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct 11 04:17:49 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct 11 04:17:49 np0005481065 python3.9[120904]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:17:50 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 11 04:17:50 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 11 04:17:50 np0005481065 systemd-logind[819]: Session 39 logged out. Waiting for processes to exit.
Oct 11 04:17:50 np0005481065 systemd[1]: session-39.scope: Deactivated successfully.
Oct 11 04:17:50 np0005481065 systemd[1]: session-39.scope: Consumed 27.389s CPU time.
Oct 11 04:17:50 np0005481065 systemd-logind[819]: Removed session 39.
Oct 11 04:17:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:51 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 11 04:17:51 np0005481065 ceph-osd[90364]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 11 04:17:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:17:54
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms']
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:17:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:17:55 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 11 04:17:55 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 11 04:17:56 np0005481065 systemd-logind[819]: New session 40 of user zuul.
Oct 11 04:17:56 np0005481065 systemd[1]: Started Session 40 of User zuul.
Oct 11 04:17:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:57 np0005481065 python3.9[121086]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:17:58 np0005481065 python3.9[121238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:17:58 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Oct 11 04:17:58 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Oct 11 04:17:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:17:58 np0005481065 python3.9[121316]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:17:59 np0005481065 systemd[1]: session-40.scope: Deactivated successfully.
Oct 11 04:17:59 np0005481065 systemd[1]: session-40.scope: Consumed 1.962s CPU time.
Oct 11 04:17:59 np0005481065 systemd-logind[819]: Session 40 logged out. Waiting for processes to exit.
Oct 11 04:17:59 np0005481065 systemd-logind[819]: Removed session 40.
Oct 11 04:18:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:18:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:18:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:05 np0005481065 systemd-logind[819]: New session 41 of user zuul.
Oct 11 04:18:05 np0005481065 systemd[1]: Started Session 41 of User zuul.
Oct 11 04:18:06 np0005481065 python3.9[121495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:18:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:07 np0005481065 python3.9[121651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:08 np0005481065 python3.9[121826]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:09 np0005481065 python3.9[121904]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.nw944bs7 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 11 04:18:09 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 11 04:18:10 np0005481065 python3.9[122135]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 11 04:18:10 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:18:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:18:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d6ca8e3c-9044-4f25-bf2c-268eecfb9eaa does not exist
Oct 11 04:18:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4dffb910-5ee4-470d-b9ca-a8c1d48d9d8c does not exist
Oct 11 04:18:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7e4f784a-a13c-4df1-ac9f-a77de8b33417 does not exist
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:18:10 np0005481065 python3.9[122253]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.d3wgzpyb recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:18:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.550250031 +0000 UTC m=+0.070862723 container create b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:18:11 np0005481065 systemd[1]: Started libpod-conmon-b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee.scope.
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.519545092 +0000 UTC m=+0.040157874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.648075448 +0000 UTC m=+0.168688220 container init b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.66043511 +0000 UTC m=+0.181047822 container start b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.665871578 +0000 UTC m=+0.186484350 container attach b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:18:11 np0005481065 festive_kilby[122574]: 167 167
Oct 11 04:18:11 np0005481065 systemd[1]: libpod-b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee.scope: Deactivated successfully.
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.669770459 +0000 UTC m=+0.190383181 container died b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:18:11 np0005481065 python3.9[122568]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5c56542d85f5b926d9bc2dfaf41236933e88f09f28aa1cd7fb783b48bd747219-merged.mount: Deactivated successfully.
Oct 11 04:18:11 np0005481065 podman[122552]: 2025-10-11 08:18:11.725047869 +0000 UTC m=+0.245660581 container remove b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:18:11 np0005481065 systemd[1]: libpod-conmon-b3438a57163c6fe934425b13bacdbdc50d54edacedae672b005a1b2145f472ee.scope: Deactivated successfully.
Oct 11 04:18:11 np0005481065 podman[122623]: 2025-10-11 08:18:11.921923139 +0000 UTC m=+0.052827675 container create 2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shamir, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:18:11 np0005481065 systemd[1]: Started libpod-conmon-2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9.scope.
Oct 11 04:18:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:18:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a57f20ac50dd53e05c9b0e2fe45e3f12c6b5e0340978f8527fb6fa268357d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a57f20ac50dd53e05c9b0e2fe45e3f12c6b5e0340978f8527fb6fa268357d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a57f20ac50dd53e05c9b0e2fe45e3f12c6b5e0340978f8527fb6fa268357d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:12 np0005481065 podman[122623]: 2025-10-11 08:18:11.905026537 +0000 UTC m=+0.035931093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a57f20ac50dd53e05c9b0e2fe45e3f12c6b5e0340978f8527fb6fa268357d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a57f20ac50dd53e05c9b0e2fe45e3f12c6b5e0340978f8527fb6fa268357d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:12 np0005481065 podman[122623]: 2025-10-11 08:18:12.020906662 +0000 UTC m=+0.151811218 container init 2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:18:12 np0005481065 podman[122623]: 2025-10-11 08:18:12.036197105 +0000 UTC m=+0.167101651 container start 2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:18:12 np0005481065 podman[122623]: 2025-10-11 08:18:12.040421405 +0000 UTC m=+0.171325941 container attach 2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 11 04:18:12 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 11 04:18:12 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 11 04:18:12 np0005481065 python3.9[122771]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:13 np0005481065 python3.9[122857]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:13 np0005481065 tender_shamir[122669]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:18:13 np0005481065 tender_shamir[122669]: --> relative data size: 1.0
Oct 11 04:18:13 np0005481065 tender_shamir[122669]: --> All data devices are unavailable
Oct 11 04:18:13 np0005481065 systemd[1]: libpod-2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9.scope: Deactivated successfully.
Oct 11 04:18:13 np0005481065 systemd[1]: libpod-2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9.scope: Consumed 1.039s CPU time.
Oct 11 04:18:13 np0005481065 podman[122901]: 2025-10-11 08:18:13.186758597 +0000 UTC m=+0.025090987 container died 2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shamir, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:18:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-39a57f20ac50dd53e05c9b0e2fe45e3f12c6b5e0340978f8527fb6fa268357d7-merged.mount: Deactivated successfully.
Oct 11 04:18:13 np0005481065 podman[122901]: 2025-10-11 08:18:13.247537037 +0000 UTC m=+0.085869417 container remove 2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:13 np0005481065 systemd[1]: libpod-conmon-2ac014c670114b9eaef6934ba3a750ea91763f296978f6340037185d948f13c9.scope: Deactivated successfully.
Oct 11 04:18:13 np0005481065 python3.9[123117]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:14.014653488 +0000 UTC m=+0.069859872 container create 1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 11 04:18:14 np0005481065 systemd[1]: Started libpod-conmon-1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10.scope.
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:13.985335811 +0000 UTC m=+0.040542235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:14.118879713 +0000 UTC m=+0.174086077 container init 1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:14.126614392 +0000 UTC m=+0.181820736 container start 1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:14.12978424 +0000 UTC m=+0.184990624 container attach 1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:18:14 np0005481065 musing_perlman[123275]: 167 167
Oct 11 04:18:14 np0005481065 systemd[1]: libpod-1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10.scope: Deactivated successfully.
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:14.132261387 +0000 UTC m=+0.187467761 container died 1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:18:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4fe822d9d38dc59c54fae3d85b4be1e2b6f88ba775115bcfaa33425a14543211-merged.mount: Deactivated successfully.
Oct 11 04:18:14 np0005481065 podman[123228]: 2025-10-11 08:18:14.177091144 +0000 UTC m=+0.232297498 container remove 1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:18:14 np0005481065 systemd[1]: libpod-conmon-1d8c3536386c4b486592f72a8fda647550f9d532c47bc4d402f46ff7b7ef8c10.scope: Deactivated successfully.
Oct 11 04:18:14 np0005481065 python3.9[123277]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:18:14 np0005481065 podman[123301]: 2025-10-11 08:18:14.351429107 +0000 UTC m=+0.051182005 container create 318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:14 np0005481065 systemd[1]: Started libpod-conmon-318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f.scope.
Oct 11 04:18:14 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 11 04:18:14 np0005481065 podman[123301]: 2025-10-11 08:18:14.332699207 +0000 UTC m=+0.032452135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:18:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c1d4f96abb2a4ed063711c451d9a402cb103b9e6d305dd1faedbe2b3e50b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c1d4f96abb2a4ed063711c451d9a402cb103b9e6d305dd1faedbe2b3e50b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c1d4f96abb2a4ed063711c451d9a402cb103b9e6d305dd1faedbe2b3e50b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c1d4f96abb2a4ed063711c451d9a402cb103b9e6d305dd1faedbe2b3e50b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:14 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 11 04:18:14 np0005481065 podman[123301]: 2025-10-11 08:18:14.446467007 +0000 UTC m=+0.146219995 container init 318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:18:14 np0005481065 podman[123301]: 2025-10-11 08:18:14.460015006 +0000 UTC m=+0.159767954 container start 318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:14 np0005481065 podman[123301]: 2025-10-11 08:18:14.463899646 +0000 UTC m=+0.163652584 container attach 318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:18:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:15 np0005481065 python3.9[123474]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:15 np0005481065 charming_jang[123322]: {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:    "0": [
Oct 11 04:18:15 np0005481065 charming_jang[123322]:        {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "devices": [
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "/dev/loop3"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            ],
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_name": "ceph_lv0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_size": "21470642176",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "name": "ceph_lv0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "tags": {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cluster_name": "ceph",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.crush_device_class": "",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.encrypted": "0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osd_id": "0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.type": "block",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.vdo": "0"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            },
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "type": "block",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "vg_name": "ceph_vg0"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:        }
Oct 11 04:18:15 np0005481065 charming_jang[123322]:    ],
Oct 11 04:18:15 np0005481065 charming_jang[123322]:    "1": [
Oct 11 04:18:15 np0005481065 charming_jang[123322]:        {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "devices": [
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "/dev/loop4"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            ],
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_name": "ceph_lv1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_size": "21470642176",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "name": "ceph_lv1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "tags": {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cluster_name": "ceph",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.crush_device_class": "",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.encrypted": "0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osd_id": "1",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.type": "block",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.vdo": "0"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            },
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "type": "block",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "vg_name": "ceph_vg1"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:        }
Oct 11 04:18:15 np0005481065 charming_jang[123322]:    ],
Oct 11 04:18:15 np0005481065 charming_jang[123322]:    "2": [
Oct 11 04:18:15 np0005481065 charming_jang[123322]:        {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "devices": [
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "/dev/loop5"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            ],
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_name": "ceph_lv2",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_size": "21470642176",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "name": "ceph_lv2",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "tags": {
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.cluster_name": "ceph",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.crush_device_class": "",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.encrypted": "0",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osd_id": "2",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.type": "block",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:                "ceph.vdo": "0"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            },
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "type": "block",
Oct 11 04:18:15 np0005481065 charming_jang[123322]:            "vg_name": "ceph_vg2"
Oct 11 04:18:15 np0005481065 charming_jang[123322]:        }
Oct 11 04:18:15 np0005481065 charming_jang[123322]:    ]
Oct 11 04:18:15 np0005481065 charming_jang[123322]: }
Oct 11 04:18:15 np0005481065 systemd[1]: libpod-318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f.scope: Deactivated successfully.
Oct 11 04:18:15 np0005481065 podman[123301]: 2025-10-11 08:18:15.286213345 +0000 UTC m=+0.985966363 container died 318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 04:18:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-912c1d4f96abb2a4ed063711c451d9a402cb103b9e6d305dd1faedbe2b3e50b5-merged.mount: Deactivated successfully.
Oct 11 04:18:15 np0005481065 podman[123301]: 2025-10-11 08:18:15.368179611 +0000 UTC m=+1.067932559 container remove 318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:18:15 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 11 04:18:15 np0005481065 systemd[1]: libpod-conmon-318aaf82f22bce3c7f5e33c26fc241fd21dff14ea60bfabb9c9ec200bebe043f.scope: Deactivated successfully.
Oct 11 04:18:15 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 11 04:18:16 np0005481065 python3.9[123756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.18468112 +0000 UTC m=+0.065668223 container create 73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_khorana, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:18:16 np0005481065 systemd[1]: Started libpod-conmon-73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b.scope.
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.157289532 +0000 UTC m=+0.038276685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.275712786 +0000 UTC m=+0.156699949 container init 73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.288295145 +0000 UTC m=+0.169282208 container start 73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_khorana, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.292459544 +0000 UTC m=+0.173446697 container attach 73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:18:16 np0005481065 kind_khorana[123826]: 167 167
Oct 11 04:18:16 np0005481065 systemd[1]: libpod-73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b.scope: Deactivated successfully.
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.296344104 +0000 UTC m=+0.177331217 container died 73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_khorana, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:18:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c36ea2452105c1ac473cc14e25d8694ab9337921da2a63bc390a5d520c68f35b-merged.mount: Deactivated successfully.
Oct 11 04:18:16 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 11 04:18:16 np0005481065 podman[123785]: 2025-10-11 08:18:16.339242821 +0000 UTC m=+0.220229894 container remove 73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:18:16 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 11 04:18:16 np0005481065 systemd[1]: libpod-conmon-73ddbe108dabaa9017cf6cffdccc26ceff7da1f7ffcc7e040ce945135f53804b.scope: Deactivated successfully.
Oct 11 04:18:16 np0005481065 podman[123902]: 2025-10-11 08:18:16.563591002 +0000 UTC m=+0.054540779 container create d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:18:16 np0005481065 systemd[1]: Started libpod-conmon-d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff.scope.
Oct 11 04:18:16 np0005481065 podman[123902]: 2025-10-11 08:18:16.538283159 +0000 UTC m=+0.029232946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:18:16 np0005481065 python3.9[123896]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:18:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1978b95a0b82f20d155132f0349cd29da2d4400234c42c621ca3cbb8a81c781a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1978b95a0b82f20d155132f0349cd29da2d4400234c42c621ca3cbb8a81c781a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1978b95a0b82f20d155132f0349cd29da2d4400234c42c621ca3cbb8a81c781a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1978b95a0b82f20d155132f0349cd29da2d4400234c42c621ca3cbb8a81c781a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:18:16 np0005481065 podman[123902]: 2025-10-11 08:18:16.687456072 +0000 UTC m=+0.178405859 container init d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:18:16 np0005481065 podman[123902]: 2025-10-11 08:18:16.693602563 +0000 UTC m=+0.184552300 container start d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:18:16 np0005481065 podman[123902]: 2025-10-11 08:18:16.696660727 +0000 UTC m=+0.187610484 container attach d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:18:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:17 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 11 04:18:17 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 11 04:18:17 np0005481065 python3.9[124074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]: {
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "osd_id": 2,
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "type": "bluestore"
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:    },
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "osd_id": 0,
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "type": "bluestore"
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:    },
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "osd_id": 1,
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:        "type": "bluestore"
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]:    }
Oct 11 04:18:17 np0005481065 hardcore_montalcini[123918]: }
Oct 11 04:18:17 np0005481065 systemd[1]: libpod-d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff.scope: Deactivated successfully.
Oct 11 04:18:17 np0005481065 systemd[1]: libpod-d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff.scope: Consumed 1.064s CPU time.
Oct 11 04:18:17 np0005481065 podman[123902]: 2025-10-11 08:18:17.756444192 +0000 UTC m=+1.247393949 container died d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:18:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1978b95a0b82f20d155132f0349cd29da2d4400234c42c621ca3cbb8a81c781a-merged.mount: Deactivated successfully.
Oct 11 04:18:17 np0005481065 podman[123902]: 2025-10-11 08:18:17.819530764 +0000 UTC m=+1.310480501 container remove d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:18:17 np0005481065 systemd[1]: libpod-conmon-d47322bfc1474dc04dd9e704567d41fa824ed5becfd8fd8846672b0e80473cff.scope: Deactivated successfully.
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:18:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b28dbbd9-6080-4f89-86a5-01e0fd907a1e does not exist
Oct 11 04:18:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e6ac7ce7-62e6-4bdd-9eaf-88dfaac37e51 does not exist
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:18:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:18:18 np0005481065 python3.9[124185]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:19 np0005481065 python3.9[124394]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:18:19 np0005481065 systemd[1]: Reloading.
Oct 11 04:18:19 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 11 04:18:19 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 11 04:18:19 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:18:19 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:18:20 np0005481065 python3.9[124583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:21 np0005481065 python3.9[124661]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:21 np0005481065 python3.9[124813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:22 np0005481065 python3.9[124891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:23 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 11 04:18:23 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 11 04:18:23 np0005481065 python3.9[125043]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:18:23 np0005481065 systemd[1]: Reloading.
Oct 11 04:18:23 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:18:23 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:18:23 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:18:23 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:18:23 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:18:23 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:24 np0005481065 python3.9[125234]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:18:25 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 11 04:18:25 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 11 04:18:26 np0005481065 network[125251]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:18:26 np0005481065 network[125252]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:18:26 np0005481065 network[125253]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:18:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:31 np0005481065 python3.9[125518]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:31 np0005481065 python3.9[125596]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:32 np0005481065 python3.9[125748]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:33 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 11 04:18:33 np0005481065 ceph-osd[88249]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 11 04:18:33 np0005481065 python3.9[125900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:33 np0005481065 python3.9[125978]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:35 np0005481065 python3.9[126130]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 11 04:18:35 np0005481065 systemd[1]: Starting Time & Date Service...
Oct 11 04:18:35 np0005481065 systemd[1]: Started Time & Date Service.
Oct 11 04:18:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:37 np0005481065 python3.9[126286]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:37 np0005481065 python3.9[126438]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:38 np0005481065 python3.9[126516]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:39 np0005481065 python3.9[126668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:39 np0005481065 python3.9[126746]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6pbztvic recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:40 np0005481065 python3.9[126898]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:41 np0005481065 python3.9[126976]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:42 np0005481065 python3.9[127128]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:18:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:43 np0005481065 python3[127281]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:18:44 np0005481065 python3.9[127433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:44 np0005481065 python3.9[127511]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:45 np0005481065 python3.9[127663]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:46 np0005481065 python3.9[127741]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:47 np0005481065 python3.9[127893]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:47 np0005481065 python3.9[127971]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:48 np0005481065 python3.9[128123]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:49 np0005481065 python3.9[128201]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:50 np0005481065 python3.9[128353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:18:50 np0005481065 python3.9[128431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:51 np0005481065 python3.9[128583]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:18:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:52 np0005481065 python3.9[128738]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:53 np0005481065 python3.9[128890]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:54 np0005481065 python3.9[129042]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:18:54
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.mgr', 'default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:18:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:55 np0005481065 python3.9[129194]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:18:56 np0005481065 python3.9[129346]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 11 04:18:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:18:57 np0005481065 systemd[1]: session-41.scope: Deactivated successfully.
Oct 11 04:18:57 np0005481065 systemd[1]: session-41.scope: Consumed 37.257s CPU time.
Oct 11 04:18:57 np0005481065 systemd-logind[819]: Session 41 logged out. Waiting for processes to exit.
Oct 11 04:18:57 np0005481065 systemd-logind[819]: Removed session 41.
Oct 11 04:18:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:18:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:02 np0005481065 systemd-logind[819]: New session 42 of user zuul.
Oct 11 04:19:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:02 np0005481065 systemd[1]: Started Session 42 of User zuul.
Oct 11 04:19:03 np0005481065 python3.9[129526]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 11 04:19:03 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:19:04 np0005481065 python3.9[129678]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:19:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:05 np0005481065 python3.9[129832]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 11 04:19:06 np0005481065 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 04:19:06 np0005481065 python3.9[129984]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.b8g382p1 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:19:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:07 np0005481065 python3.9[130111]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.b8g382p1 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760170746.0088718-44-54747410173603/.source.b8g382p1 _original_basename=.hnalg53l follow=False checksum=90b4b73305faf1fc8009443cab171f6295e6673a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:08 np0005481065 python3.9[130263]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:09 np0005481065 python3.9[130415]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCivYfehvYlxMJv6FhpKksF/3ZVfXlrW356hXGhObfLQwq7tkfDgVlysqk5pKUEyYaxJfj0WGcCg5fil2DB0ZrFq44iBfV4QRyj3TZBBsGijCrLjs1hcI11dwQ0m76F9VDvGHsS2XaXKkJOTFTLA2lWyG/vQDOnm766pG2BOGCmVpw96pgt+SOmQQUHL1AJe9XemrIXoSvH3T95jwe0JDrreyQa4B9QkJb2w0gA5lO+XacRt8UKTIxLeuF2xvzwrjwdY6GAdm13Km/LlkKxgikd6CN4ljLq2gLhKG1pgWthtivKFpBrdtJjwiU0gksTN1iVRb4luaNL2ZQP6JYEz+I6QB9dMS9+m4XIIr42u8I0FPSZ22OY7QHn0nWlufq+K64IQMt1RSMqOcS7p3xxbmd2SqPHOFXcA1h7wJfwn6OTSMxP7ufblynW1XNSvtnAtQm50MiZhx/tfUDqKtbDLdmKBAKPEiAXA1tj58xWKgoQARydbrbBiG8Wet8SfA8/iDE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDw0r1M89bVpRLfKhqkr7n9CHg5tcRb2DKaMDT/j0RfC#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKYrDbGmd7omtqvYq6RLwtY7+e5chFoAg2bYOxRlcutwzejZM82i1EyD2u9m4CitXVxyIjs9WwUWDqc7NjzVzDg=#012 create=True mode=0644 path=/tmp/ansible.b8g382p1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:10 np0005481065 python3.9[130567]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b8g382p1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:11 np0005481065 python3.9[130721]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b8g382p1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:12 np0005481065 systemd[1]: session-42.scope: Deactivated successfully.
Oct 11 04:19:12 np0005481065 systemd[1]: session-42.scope: Consumed 6.205s CPU time.
Oct 11 04:19:12 np0005481065 systemd-logind[819]: Session 42 logged out. Waiting for processes to exit.
Oct 11 04:19:12 np0005481065 systemd-logind[819]: Removed session 42.
Oct 11 04:19:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:15 np0005481065 systemd[1]: session-19.scope: Deactivated successfully.
Oct 11 04:19:15 np0005481065 systemd[1]: session-19.scope: Consumed 1min 34.820s CPU time.
Oct 11 04:19:15 np0005481065 systemd-logind[819]: Session 19 logged out. Waiting for processes to exit.
Oct 11 04:19:15 np0005481065 systemd-logind[819]: Removed session 19.
Oct 11 04:19:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:18 np0005481065 systemd-logind[819]: New session 43 of user zuul.
Oct 11 04:19:18 np0005481065 systemd[1]: Started Session 43 of User zuul.
Oct 11 04:19:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:19:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev df5b7fa1-dfd2-4724-adbd-12da56ffb03e does not exist
Oct 11 04:19:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev aed334a5-98cb-41c0-895d-d80846bf6bc7 does not exist
Oct 11 04:19:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9360206a-ebb0-4073-a7d2-ad27d1d76d2f does not exist
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:19:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:19:19 np0005481065 python3.9[131079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.741027559 +0000 UTC m=+0.049144521 container create b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 11 04:19:19 np0005481065 systemd[1]: Started libpod-conmon-b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8.scope.
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.719861016 +0000 UTC m=+0.027978018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.842135234 +0000 UTC m=+0.150252226 container init b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.853341929 +0000 UTC m=+0.161458911 container start b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.857406671 +0000 UTC m=+0.165523653 container attach b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:19:19 np0005481065 eloquent_perlman[131215]: 167 167
Oct 11 04:19:19 np0005481065 systemd[1]: libpod-b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8.scope: Deactivated successfully.
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.858651868 +0000 UTC m=+0.166768860 container died b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:19:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3e95e023100bc1b04a3fa86e8820c074525280798b32b59b3eb4ad207dc9ba79-merged.mount: Deactivated successfully.
Oct 11 04:19:19 np0005481065 podman[131174]: 2025-10-11 08:19:19.910308334 +0000 UTC m=+0.218425326 container remove b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:19:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:19:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:19:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:19:19 np0005481065 systemd[1]: libpod-conmon-b6e3ed3cf57edb86572f3694bf477781aef9c252cf93ceb0d9734d5cb1b5aca8.scope: Deactivated successfully.
Oct 11 04:19:20 np0005481065 podman[131290]: 2025-10-11 08:19:20.128267205 +0000 UTC m=+0.057888233 container create 2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hypatia, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:19:20 np0005481065 systemd[1]: Started libpod-conmon-2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4.scope.
Oct 11 04:19:20 np0005481065 podman[131290]: 2025-10-11 08:19:20.100325239 +0000 UTC m=+0.029946327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:19:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59b7708aa7cd1bb42576b3302ce006d985fd9c7f82a439fa23a6936a64691b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59b7708aa7cd1bb42576b3302ce006d985fd9c7f82a439fa23a6936a64691b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59b7708aa7cd1bb42576b3302ce006d985fd9c7f82a439fa23a6936a64691b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59b7708aa7cd1bb42576b3302ce006d985fd9c7f82a439fa23a6936a64691b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59b7708aa7cd1bb42576b3302ce006d985fd9c7f82a439fa23a6936a64691b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:20 np0005481065 podman[131290]: 2025-10-11 08:19:20.248664347 +0000 UTC m=+0.178285445 container init 2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:19:20 np0005481065 podman[131290]: 2025-10-11 08:19:20.266740408 +0000 UTC m=+0.196361446 container start 2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:19:20 np0005481065 podman[131290]: 2025-10-11 08:19:20.27048547 +0000 UTC m=+0.200106558 container attach 2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hypatia, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:19:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:21 np0005481065 python3.9[131386]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 04:19:21 np0005481065 musing_hypatia[131306]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:19:21 np0005481065 musing_hypatia[131306]: --> relative data size: 1.0
Oct 11 04:19:21 np0005481065 musing_hypatia[131306]: --> All data devices are unavailable
Oct 11 04:19:21 np0005481065 podman[131290]: 2025-10-11 08:19:21.454173944 +0000 UTC m=+1.383794952 container died 2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:19:21 np0005481065 systemd[1]: libpod-2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4.scope: Deactivated successfully.
Oct 11 04:19:21 np0005481065 systemd[1]: libpod-2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4.scope: Consumed 1.097s CPU time.
Oct 11 04:19:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e59b7708aa7cd1bb42576b3302ce006d985fd9c7f82a439fa23a6936a64691b8-merged.mount: Deactivated successfully.
Oct 11 04:19:21 np0005481065 podman[131290]: 2025-10-11 08:19:21.533773305 +0000 UTC m=+1.463394333 container remove 2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:19:21 np0005481065 systemd[1]: libpod-conmon-2a851d4f8454384dc175dd5882c4546eeebb93a724c7b9e2ae7af4ac9759b7d4.scope: Deactivated successfully.
Oct 11 04:19:22 np0005481065 python3.9[131677]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.245710135 +0000 UTC m=+0.040339827 container create 041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:19:22 np0005481065 systemd[1]: Started libpod-conmon-041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447.scope.
Oct 11 04:19:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.227359716 +0000 UTC m=+0.021989428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.350688856 +0000 UTC m=+0.145318568 container init 041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.364766367 +0000 UTC m=+0.159396089 container start 041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.369605522 +0000 UTC m=+0.164235314 container attach 041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:19:22 np0005481065 youthful_haslett[131759]: 167 167
Oct 11 04:19:22 np0005481065 systemd[1]: libpod-041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447.scope: Deactivated successfully.
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.373170069 +0000 UTC m=+0.167799791 container died 041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:19:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-01a0fb87b07d1d9694d26bc5a676a112e33b877858b9ba47723a868a8c4bbbd0-merged.mount: Deactivated successfully.
Oct 11 04:19:22 np0005481065 podman[131719]: 2025-10-11 08:19:22.411490555 +0000 UTC m=+0.206120247 container remove 041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:22 np0005481065 systemd[1]: libpod-conmon-041839eeec70f497a6fa4914155e577656ec3627e6bc1e7b16ca96bdd18e7447.scope: Deactivated successfully.
Oct 11 04:19:22 np0005481065 podman[131824]: 2025-10-11 08:19:22.623746556 +0000 UTC m=+0.058002216 container create 9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:22 np0005481065 systemd[1]: Started libpod-conmon-9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd.scope.
Oct 11 04:19:22 np0005481065 podman[131824]: 2025-10-11 08:19:22.603800379 +0000 UTC m=+0.038056019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:19:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d8ed9aa617e6c19cd09e4645a04c99ba0a456ef748860d4dff1966c5485573/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d8ed9aa617e6c19cd09e4645a04c99ba0a456ef748860d4dff1966c5485573/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d8ed9aa617e6c19cd09e4645a04c99ba0a456ef748860d4dff1966c5485573/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d8ed9aa617e6c19cd09e4645a04c99ba0a456ef748860d4dff1966c5485573/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:22 np0005481065 podman[131824]: 2025-10-11 08:19:22.737015355 +0000 UTC m=+0.171271005 container init 9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_solomon, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:19:22 np0005481065 podman[131824]: 2025-10-11 08:19:22.758111366 +0000 UTC m=+0.192366996 container start 9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:19:22 np0005481065 podman[131824]: 2025-10-11 08:19:22.761860878 +0000 UTC m=+0.196116558 container attach 9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:19:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:23 np0005481065 python3.9[131932]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]: {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:    "0": [
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:        {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "devices": [
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "/dev/loop3"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            ],
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_name": "ceph_lv0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_size": "21470642176",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "name": "ceph_lv0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "tags": {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cluster_name": "ceph",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.crush_device_class": "",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.encrypted": "0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osd_id": "0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.type": "block",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.vdo": "0"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            },
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "type": "block",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "vg_name": "ceph_vg0"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:        }
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:    ],
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:    "1": [
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:        {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "devices": [
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "/dev/loop4"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            ],
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_name": "ceph_lv1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_size": "21470642176",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "name": "ceph_lv1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "tags": {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cluster_name": "ceph",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.crush_device_class": "",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.encrypted": "0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osd_id": "1",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.type": "block",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.vdo": "0"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            },
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "type": "block",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "vg_name": "ceph_vg1"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:        }
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:    ],
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:    "2": [
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:        {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "devices": [
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "/dev/loop5"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            ],
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_name": "ceph_lv2",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_size": "21470642176",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "name": "ceph_lv2",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "tags": {
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.cluster_name": "ceph",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.crush_device_class": "",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.encrypted": "0",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osd_id": "2",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.type": "block",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:                "ceph.vdo": "0"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            },
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "type": "block",
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:            "vg_name": "ceph_vg2"
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:        }
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]:    ]
Oct 11 04:19:23 np0005481065 hungry_solomon[131852]: }
Oct 11 04:19:23 np0005481065 systemd[1]: libpod-9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd.scope: Deactivated successfully.
Oct 11 04:19:23 np0005481065 podman[131824]: 2025-10-11 08:19:23.571232934 +0000 UTC m=+1.005488594 container died 9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_solomon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b9d8ed9aa617e6c19cd09e4645a04c99ba0a456ef748860d4dff1966c5485573-merged.mount: Deactivated successfully.
Oct 11 04:19:23 np0005481065 podman[131824]: 2025-10-11 08:19:23.639530577 +0000 UTC m=+1.073786197 container remove 9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_solomon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:19:23 np0005481065 systemd[1]: libpod-conmon-9ecf85944b0061907faa276bd57d64811639cf6b7bf002e30c0c016fe1cc9fcd.scope: Deactivated successfully.
Oct 11 04:19:24 np0005481065 python3.9[132204]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.487346893 +0000 UTC m=+0.068328205 container create b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:24 np0005481065 systemd[1]: Started libpod-conmon-b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c.scope.
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.459119688 +0000 UTC m=+0.040101040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.612336912 +0000 UTC m=+0.193318264 container init b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.621929249 +0000 UTC m=+0.202910561 container start b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.625759054 +0000 UTC m=+0.206740356 container attach b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:19:24 np0005481065 magical_albattani[132281]: 167 167
Oct 11 04:19:24 np0005481065 systemd[1]: libpod-b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c.scope: Deactivated successfully.
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.630185876 +0000 UTC m=+0.211167168 container died b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:19:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-23da246202000b448284e2dbeed33e58afea039d7864e3f7a6ff9804fb158050-merged.mount: Deactivated successfully.
Oct 11 04:19:24 np0005481065 podman[132241]: 2025-10-11 08:19:24.67108307 +0000 UTC m=+0.252064382 container remove b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:19:24 np0005481065 systemd[1]: libpod-conmon-b49e66b37064f5975d18fd5869d41e34baea60fdb41c611cdb99169b32d0ce9c.scope: Deactivated successfully.
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:24 np0005481065 podman[132357]: 2025-10-11 08:19:24.90841991 +0000 UTC m=+0.068813090 container create 81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:19:24 np0005481065 systemd[1]: Started libpod-conmon-81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69.scope.
Oct 11 04:19:24 np0005481065 podman[132357]: 2025-10-11 08:19:24.880862525 +0000 UTC m=+0.041255805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:19:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:19:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac74284afea36706c04a4630954418b17dbe29448483ececde25214e71eefc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac74284afea36706c04a4630954418b17dbe29448483ececde25214e71eefc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac74284afea36706c04a4630954418b17dbe29448483ececde25214e71eefc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac74284afea36706c04a4630954418b17dbe29448483ececde25214e71eefc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:19:25 np0005481065 podman[132357]: 2025-10-11 08:19:25.016546165 +0000 UTC m=+0.176939355 container init 81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_panini, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:19:25 np0005481065 podman[132357]: 2025-10-11 08:19:25.030250275 +0000 UTC m=+0.190643485 container start 81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_panini, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:19:25 np0005481065 podman[132357]: 2025-10-11 08:19:25.034690248 +0000 UTC m=+0.195083418 container attach 81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:19:25 np0005481065 python3.9[132453]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:25 np0005481065 systemd-logind[819]: Session 43 logged out. Waiting for processes to exit.
Oct 11 04:19:25 np0005481065 systemd[1]: session-43.scope: Deactivated successfully.
Oct 11 04:19:25 np0005481065 systemd[1]: session-43.scope: Consumed 4.828s CPU time.
Oct 11 04:19:25 np0005481065 systemd-logind[819]: Removed session 43.
Oct 11 04:19:26 np0005481065 sad_panini[132373]: {
Oct 11 04:19:26 np0005481065 sad_panini[132373]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "osd_id": 2,
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "type": "bluestore"
Oct 11 04:19:26 np0005481065 sad_panini[132373]:    },
Oct 11 04:19:26 np0005481065 sad_panini[132373]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "osd_id": 0,
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "type": "bluestore"
Oct 11 04:19:26 np0005481065 sad_panini[132373]:    },
Oct 11 04:19:26 np0005481065 sad_panini[132373]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "osd_id": 1,
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:19:26 np0005481065 sad_panini[132373]:        "type": "bluestore"
Oct 11 04:19:26 np0005481065 sad_panini[132373]:    }
Oct 11 04:19:26 np0005481065 sad_panini[132373]: }
Oct 11 04:19:26 np0005481065 systemd[1]: libpod-81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69.scope: Deactivated successfully.
Oct 11 04:19:26 np0005481065 systemd[1]: libpod-81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69.scope: Consumed 1.156s CPU time.
Oct 11 04:19:26 np0005481065 podman[132508]: 2025-10-11 08:19:26.251991118 +0000 UTC m=+0.046835312 container died 81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:19:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fac74284afea36706c04a4630954418b17dbe29448483ececde25214e71eefc2-merged.mount: Deactivated successfully.
Oct 11 04:19:26 np0005481065 podman[132508]: 2025-10-11 08:19:26.316110637 +0000 UTC m=+0.110954761 container remove 81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:19:26 np0005481065 systemd[1]: libpod-conmon-81abfde0d7c79def72ffdcb0267fb76bcf238ef05ad2abad08f1f0e2a8590a69.scope: Deactivated successfully.
Oct 11 04:19:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:19:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:19:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:19:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:19:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4a4ca85e-1bcc-4ee6-b2f1-de5721b48d6f does not exist
Oct 11 04:19:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4bd33022-0e47-4210-af1c-714ebd787a65 does not exist
Oct 11 04:19:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:19:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:19:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:31 np0005481065 systemd-logind[819]: New session 44 of user zuul.
Oct 11 04:19:31 np0005481065 systemd[1]: Started Session 44 of User zuul.
Oct 11 04:19:32 np0005481065 python3.9[132727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:33 np0005481065 python3.9[132883]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:19:34 np0005481065 python3.9[132967]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 11 04:19:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:37 np0005481065 python3.9[133118]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:38 np0005481065 python3.9[133269]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:19:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:39 np0005481065 python3.9[133419]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:19:40 np0005481065 python3.9[133569]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:19:40 np0005481065 systemd[1]: session-44.scope: Deactivated successfully.
Oct 11 04:19:40 np0005481065 systemd[1]: session-44.scope: Consumed 6.831s CPU time.
Oct 11 04:19:40 np0005481065 systemd-logind[819]: Session 44 logged out. Waiting for processes to exit.
Oct 11 04:19:40 np0005481065 systemd-logind[819]: Removed session 44.
Oct 11 04:19:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:46 np0005481065 systemd-logind[819]: New session 45 of user zuul.
Oct 11 04:19:46 np0005481065 systemd[1]: Started Session 45 of User zuul.
Oct 11 04:19:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:47 np0005481065 python3.9[133747]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.723500) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170787723610, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1634, "num_deletes": 252, "total_data_size": 2444383, "memory_usage": 2483064, "flush_reason": "Manual Compaction"}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170787739193, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1426236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7370, "largest_seqno": 9003, "table_properties": {"data_size": 1420879, "index_size": 2433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15235, "raw_average_key_size": 20, "raw_value_size": 1408314, "raw_average_value_size": 1897, "num_data_blocks": 115, "num_entries": 742, "num_filter_entries": 742, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170629, "oldest_key_time": 1760170629, "file_creation_time": 1760170787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 15775 microseconds, and 8339 cpu microseconds.
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.739285) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1426236 bytes OK
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.739322) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.741786) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.741918) EVENT_LOG_v1 {"time_micros": 1760170787741806, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.741950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2437133, prev total WAL file size 2437133, number of live WAL files 2.
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.743418) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1392KB)], [20(6966KB)]
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170787743487, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8560407, "oldest_snapshot_seqno": -1}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3383 keys, 6809566 bytes, temperature: kUnknown
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170787793787, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6809566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6783808, "index_size": 16177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 80985, "raw_average_key_size": 23, "raw_value_size": 6719547, "raw_average_value_size": 1986, "num_data_blocks": 717, "num_entries": 3383, "num_filter_entries": 3383, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760170787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.794142) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6809566 bytes
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.795898) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.7 rd, 135.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(10.8) write-amplify(4.8) OK, records in: 3825, records dropped: 442 output_compression: NoCompression
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.795936) EVENT_LOG_v1 {"time_micros": 1760170787795918, "job": 6, "event": "compaction_finished", "compaction_time_micros": 50439, "compaction_time_cpu_micros": 31580, "output_level": 6, "num_output_files": 1, "total_output_size": 6809566, "num_input_records": 3825, "num_output_records": 3383, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170787796485, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170787798607, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.743304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.798768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.798777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.798780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.798782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:19:47 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:19:47.798785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:19:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:49 np0005481065 python3.9[133903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:19:50 np0005481065 python3.9[134055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:19:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:51 np0005481065 python3.9[134207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:19:52 np0005481065 python3.9[134330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170790.5987768-65-217125255337353/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=359597b0377eb0708b185062471f919e46f51772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:53 np0005481065 python3.9[134482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:19:53 np0005481065 python3.9[134605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170792.5692732-65-202900169356933/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=684500c7820d71abbe9906569856a96250346480 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:19:54
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'images']
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:19:54 np0005481065 python3.9[134757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:19:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:55 np0005481065 python3.9[134880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170794.105819-65-232778112729796/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6a03036d6e50c5710b3ecd5dc1c323eef6197212 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:56 np0005481065 python3.9[135032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:19:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:57 np0005481065 python3.9[135184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:19:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:19:58 np0005481065 python3.9[135336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:19:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:19:58 np0005481065 python3.9[135459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170797.6174922-124-12920198675138/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f2dec50755221a0e844c242167449fb4333288a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:19:59 np0005481065 python3.9[135611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:00 np0005481065 python3.9[135734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170799.2135978-124-83733739110687/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9a4d2f851f5addb67f9296fc27b294cd92e2e6d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:01 np0005481065 python3.9[135886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:02 np0005481065 python3.9[136009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170800.7388136-124-257894661668568/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=925e15ef5953be29a1ac31d7ba95bfa3aaac25d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:03 np0005481065 python3.9[136161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:03 np0005481065 python3.9[136313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:20:04 np0005481065 python3.9[136465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:05 np0005481065 python3.9[136588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170804.1086397-183-238074919676301/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2aac597bcaade21a8599d195ce1926fb6e5e48b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:06 np0005481065 python3.9[136740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:06 np0005481065 python3.9[136863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170805.6465693-183-184524209901799/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9a4d2f851f5addb67f9296fc27b294cd92e2e6d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:20:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2023 writes, 9005 keys, 2023 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2023 writes, 2023 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2023 writes, 9005 keys, 2023 commit groups, 1.0 writes per commit group, ingest: 10.97 MB, 0.02 MB/s#012Interval WAL: 2023 writes, 2023 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    129.9      0.06              0.03         3    0.021       0      0       0.0       0.0#012  L6      1/0    6.49 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    154.0    136.4      0.10              0.06         2    0.049    7198    732       0.0       0.0#012 Sum      1/0    6.49 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     93.5    133.9      0.16              0.09         5    0.032    7198    732       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     96.0    137.2      0.16              0.09         4    0.039    7198    732       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    154.0    136.4      0.10              0.06         2    0.049    7198    732       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    138.6      0.06              0.03         2    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 308.00 MB usage: 556.83 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(36,469.11 KB,0.148739%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 04:20:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:07 np0005481065 python3.9[137015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:08 np0005481065 python3.9[137138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170807.1115563-183-7686037823991/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=42dbf019d79235f23ccc22dcde7c51743ba87376 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:09 np0005481065 python3.9[137290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:10 np0005481065 python3.9[137442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:11 np0005481065 python3.9[137565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170810.1378472-251-146747404116420/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2350fec01ba4be1fc9481509a02f1bc685e36cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:12 np0005481065 python3.9[137717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:13 np0005481065 python3.9[137869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:14 np0005481065 python3.9[137992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170812.6092954-275-176967149513234/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2350fec01ba4be1fc9481509a02f1bc685e36cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:14 np0005481065 python3.9[138144]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:15 np0005481065 python3.9[138296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:16 np0005481065 python3.9[138419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170815.1598907-299-54812537943165/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2350fec01ba4be1fc9481509a02f1bc685e36cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:17 np0005481065 python3.9[138571]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:18 np0005481065 python3.9[138723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:18 np0005481065 python3.9[138846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170817.6265259-323-218425570834882/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2350fec01ba4be1fc9481509a02f1bc685e36cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:19 np0005481065 python3.9[138998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:20 np0005481065 python3.9[139150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:21 np0005481065 python3.9[139273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170820.0286055-347-149569416472993/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2350fec01ba4be1fc9481509a02f1bc685e36cb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:41 np0005481065 systemd-logind[819]: New session 47 of user zuul.
Oct 11 04:20:41 np0005481065 systemd[1]: Started Session 47 of User zuul.
Oct 11 04:20:41 np0005481065 rsyslogd[1003]: imjournal: 372 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 04:20:42 np0005481065 python3.9[141487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:20:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:43 np0005481065 python3.9[141643]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:44 np0005481065 python3.9[141795]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:20:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:45 np0005481065 python3.9[141945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:20:46 np0005481065 python3.9[142097]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 11 04:20:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.008673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170848008719, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 720, "num_deletes": 251, "total_data_size": 900304, "memory_usage": 913384, "flush_reason": "Manual Compaction"}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170848018110, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 892167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9004, "largest_seqno": 9723, "table_properties": {"data_size": 888476, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8045, "raw_average_key_size": 18, "raw_value_size": 881030, "raw_average_value_size": 2030, "num_data_blocks": 71, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170787, "oldest_key_time": 1760170787, "file_creation_time": 1760170848, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9471 microseconds, and 5831 cpu microseconds.
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.018148) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 892167 bytes OK
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.018167) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.019571) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.019585) EVENT_LOG_v1 {"time_micros": 1760170848019580, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.019601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 896597, prev total WAL file size 896597, number of live WAL files 2.
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.020157) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(871KB)], [23(6649KB)]
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170848020224, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7701733, "oldest_snapshot_seqno": -1}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3303 keys, 6134522 bytes, temperature: kUnknown
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170848047478, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6134522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6110512, "index_size": 14621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80105, "raw_average_key_size": 24, "raw_value_size": 6048869, "raw_average_value_size": 1831, "num_data_blocks": 638, "num_entries": 3303, "num_filter_entries": 3303, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760170848, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.047800) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6134522 bytes
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.049770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 281.6 rd, 224.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(15.5) write-amplify(6.9) OK, records in: 3817, records dropped: 514 output_compression: NoCompression
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.049800) EVENT_LOG_v1 {"time_micros": 1760170848049786, "job": 8, "event": "compaction_finished", "compaction_time_micros": 27353, "compaction_time_cpu_micros": 16892, "output_level": 6, "num_output_files": 1, "total_output_size": 6134522, "num_input_records": 3817, "num_output_records": 3303, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170848050222, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760170848052363, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.020067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.052419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.052424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.052426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.052427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:20:48.052429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:20:48 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 11 04:20:48 np0005481065 python3.9[142253]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:20:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:49 np0005481065 python3.9[142337]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:20:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:52 np0005481065 python3.9[142490]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:20:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:53 np0005481065 python3[142645]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 11 04:20:54 np0005481065 python3.9[142797]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:20:54
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta']
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:20:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:55 np0005481065 python3.9[142949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:55 np0005481065 python3.9[143027]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:56 np0005481065 python3.9[143179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:57 np0005481065 python3.9[143257]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.j3o_2xch recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:20:58 np0005481065 python3.9[143409]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:20:58 np0005481065 python3.9[143487]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:20:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:20:59 np0005481065 python3.9[143639]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:00 np0005481065 python3[143792]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:21:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:01 np0005481065 python3.9[143944]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:02 np0005481065 python3.9[144069]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170861.1652842-157-257222279559114/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:03 np0005481065 python3.9[144221]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:21:04 np0005481065 python3.9[144346]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170862.9529872-172-278043484372220/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:05 np0005481065 python3.9[144498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:05 np0005481065 python3.9[144623]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170864.624953-187-919171875048/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:06 np0005481065 python3.9[144775]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:07 np0005481065 python3.9[144900]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170866.2419937-202-48325631116995/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:08 np0005481065 python3.9[145052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:09 np0005481065 python3.9[145177]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760170867.7587242-217-187260871415936/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:09 np0005481065 python3.9[145329]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:10 np0005481065 python3.9[145481]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:11 np0005481065 python3.9[145636]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:12 np0005481065 python3.9[145788]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:13 np0005481065 python3.9[145941]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:21:14 np0005481065 python3.9[146095]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:15 np0005481065 python3.9[146250]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:17 np0005481065 python3.9[146400]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:21:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:18 np0005481065 python3.9[146553]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:18 np0005481065 ovs-vsctl[146554]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 11 04:21:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:19 np0005481065 python3.9[146706]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:20 np0005481065 python3.9[146861]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:20 np0005481065 ovs-vsctl[146862]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 11 04:21:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:21 np0005481065 python3.9[147012]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:21:22 np0005481065 python3.9[147166]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:22 np0005481065 python3.9[147318]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:23 np0005481065 python3.9[147396]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:24 np0005481065 python3.9[147548]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:24 np0005481065 python3.9[147626]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:25 np0005481065 python3.9[147778]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:26 np0005481065 python3.9[147930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:27 np0005481065 python3.9[148008]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:27 np0005481065 python3.9[148160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:28 np0005481065 python3.9[148238]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:29 np0005481065 python3.9[148390]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:29 np0005481065 systemd[1]: Reloading.
Oct 11 04:21:29 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:29 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:30 np0005481065 python3.9[148580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:31 np0005481065 python3.9[148658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:32 np0005481065 python3.9[148810]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:32 np0005481065 python3.9[148888]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:33 np0005481065 python3.9[149040]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:33 np0005481065 systemd[1]: Reloading.
Oct 11 04:21:33 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:33 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:34 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:21:34 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:21:34 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:21:34 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:21:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:35 np0005481065 python3.9[149235]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:21:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:21:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:35 np0005481065 python3.9[149500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:36 np0005481065 python3.9[149743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170895.3207965-468-220782682584018/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4430f854-d058-458c-9e65-b2208b229da9 does not exist
Oct 11 04:21:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 766db2cc-4f92-4563-b60c-b57518f6e266 does not exist
Oct 11 04:21:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dca4484c-25ae-482c-adf7-0365738d1b5f does not exist
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:21:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.415320116 +0000 UTC m=+0.061566770 container create d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:21:37 np0005481065 systemd[1]: Started libpod-conmon-d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660.scope.
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.39263667 +0000 UTC m=+0.038883344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:37 np0005481065 python3.9[150045]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:21:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.527579448 +0000 UTC m=+0.173826122 container init d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haslett, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.534974731 +0000 UTC m=+0.181221365 container start d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haslett, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.538670098 +0000 UTC m=+0.184916742 container attach d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haslett, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:21:37 np0005481065 jovial_haslett[150067]: 167 167
Oct 11 04:21:37 np0005481065 systemd[1]: libpod-d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660.scope: Deactivated successfully.
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.541596793 +0000 UTC m=+0.187843427 container died d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haslett, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-641ee919549802d9e9f47077b21441e590f4782317a169da8640f9ffcff4ef04-merged.mount: Deactivated successfully.
Oct 11 04:21:37 np0005481065 podman[150051]: 2025-10-11 08:21:37.579174078 +0000 UTC m=+0.225420722 container remove d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haslett, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:21:37 np0005481065 systemd[1]: libpod-conmon-d751e606f1219f42f3adef47e29882f67d4cc3fee651de610876bf85cdab6660.scope: Deactivated successfully.
Oct 11 04:21:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:37 np0005481065 podman[150124]: 2025-10-11 08:21:37.789259855 +0000 UTC m=+0.064361279 container create 4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_rosalind, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:21:37 np0005481065 systemd[1]: Started libpod-conmon-4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9.scope.
Oct 11 04:21:37 np0005481065 podman[150124]: 2025-10-11 08:21:37.766249421 +0000 UTC m=+0.041350835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84567e07ac84b0035a7b11157641e44dcc578afc76b7ebc5985cda5eb97746e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84567e07ac84b0035a7b11157641e44dcc578afc76b7ebc5985cda5eb97746e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84567e07ac84b0035a7b11157641e44dcc578afc76b7ebc5985cda5eb97746e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84567e07ac84b0035a7b11157641e44dcc578afc76b7ebc5985cda5eb97746e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84567e07ac84b0035a7b11157641e44dcc578afc76b7ebc5985cda5eb97746e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:37 np0005481065 podman[150124]: 2025-10-11 08:21:37.931099982 +0000 UTC m=+0.206201446 container init 4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:37 np0005481065 podman[150124]: 2025-10-11 08:21:37.949131983 +0000 UTC m=+0.224233387 container start 4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:21:37 np0005481065 podman[150124]: 2025-10-11 08:21:37.95320552 +0000 UTC m=+0.228306984 container attach 4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:38 np0005481065 python3.9[150264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:21:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:39 np0005481065 eloquent_rosalind[150184]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:21:39 np0005481065 eloquent_rosalind[150184]: --> relative data size: 1.0
Oct 11 04:21:39 np0005481065 eloquent_rosalind[150184]: --> All data devices are unavailable
Oct 11 04:21:39 np0005481065 python3.9[150403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760170897.734804-493-29482369894212/.source.json _original_basename=.96u9pel2 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:39 np0005481065 systemd[1]: libpod-4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9.scope: Deactivated successfully.
Oct 11 04:21:39 np0005481065 systemd[1]: libpod-4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9.scope: Consumed 1.079s CPU time.
Oct 11 04:21:39 np0005481065 podman[150124]: 2025-10-11 08:21:39.081291231 +0000 UTC m=+1.356392605 container died 4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_rosalind, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:21:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-84567e07ac84b0035a7b11157641e44dcc578afc76b7ebc5985cda5eb97746e5-merged.mount: Deactivated successfully.
Oct 11 04:21:39 np0005481065 podman[150124]: 2025-10-11 08:21:39.138147273 +0000 UTC m=+1.413248647 container remove 4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:21:39 np0005481065 systemd[1]: libpod-conmon-4f93627f6327d6eb139245eb98fb0dceb4fb2c1eca37a325c408c0bbc09695c9.scope: Deactivated successfully.
Oct 11 04:21:39 np0005481065 python3.9[150688]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:39 np0005481065 podman[150716]: 2025-10-11 08:21:39.810276815 +0000 UTC m=+0.060171329 container create ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:39 np0005481065 systemd[1]: Started libpod-conmon-ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523.scope.
Oct 11 04:21:39 np0005481065 podman[150716]: 2025-10-11 08:21:39.785913672 +0000 UTC m=+0.035808176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:39 np0005481065 podman[150716]: 2025-10-11 08:21:39.925096881 +0000 UTC m=+0.174991435 container init ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:21:39 np0005481065 podman[150716]: 2025-10-11 08:21:39.936972354 +0000 UTC m=+0.186866848 container start ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:21:39 np0005481065 podman[150716]: 2025-10-11 08:21:39.941852345 +0000 UTC m=+0.191746919 container attach ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:21:39 np0005481065 nice_johnson[150757]: 167 167
Oct 11 04:21:39 np0005481065 systemd[1]: libpod-ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523.scope: Deactivated successfully.
Oct 11 04:21:39 np0005481065 podman[150716]: 2025-10-11 08:21:39.946316854 +0000 UTC m=+0.196211378 container died ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:21:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c70ead04abf1ec7d3713994b578eb64ee075da7773acaa22e431be9d6d942d2f-merged.mount: Deactivated successfully.
Oct 11 04:21:40 np0005481065 podman[150716]: 2025-10-11 08:21:40.000928542 +0000 UTC m=+0.250823066 container remove ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:21:40 np0005481065 systemd[1]: libpod-conmon-ab80e401f2c7d423ff870f0ca2182dc3540384cb4d0c64b952bae569ab5f7523.scope: Deactivated successfully.
Oct 11 04:21:40 np0005481065 podman[150837]: 2025-10-11 08:21:40.246777142 +0000 UTC m=+0.076644785 container create e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:40 np0005481065 systemd[1]: Started libpod-conmon-e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df.scope.
Oct 11 04:21:40 np0005481065 podman[150837]: 2025-10-11 08:21:40.217469576 +0000 UTC m=+0.047337259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde4c5ed332a7ea852764f494ae6891824f02d055bbe9293ee934ec45406038d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde4c5ed332a7ea852764f494ae6891824f02d055bbe9293ee934ec45406038d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde4c5ed332a7ea852764f494ae6891824f02d055bbe9293ee934ec45406038d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde4c5ed332a7ea852764f494ae6891824f02d055bbe9293ee934ec45406038d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:40 np0005481065 podman[150837]: 2025-10-11 08:21:40.36513359 +0000 UTC m=+0.195001303 container init e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:21:40 np0005481065 podman[150837]: 2025-10-11 08:21:40.378077564 +0000 UTC m=+0.207945227 container start e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:21:40 np0005481065 podman[150837]: 2025-10-11 08:21:40.382312326 +0000 UTC m=+0.212180039 container attach e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:21:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]: {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:    "0": [
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:        {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "devices": [
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "/dev/loop3"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            ],
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_name": "ceph_lv0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_size": "21470642176",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "name": "ceph_lv0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "tags": {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cluster_name": "ceph",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.crush_device_class": "",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.encrypted": "0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osd_id": "0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.type": "block",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.vdo": "0"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            },
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "type": "block",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "vg_name": "ceph_vg0"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:        }
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:    ],
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:    "1": [
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:        {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "devices": [
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "/dev/loop4"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            ],
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_name": "ceph_lv1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_size": "21470642176",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "name": "ceph_lv1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "tags": {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cluster_name": "ceph",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.crush_device_class": "",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.encrypted": "0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osd_id": "1",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.type": "block",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.vdo": "0"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            },
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "type": "block",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "vg_name": "ceph_vg1"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:        }
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:    ],
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:    "2": [
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:        {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "devices": [
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "/dev/loop5"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            ],
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_name": "ceph_lv2",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_size": "21470642176",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "name": "ceph_lv2",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "tags": {
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.cluster_name": "ceph",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.crush_device_class": "",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.encrypted": "0",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osd_id": "2",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.type": "block",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:                "ceph.vdo": "0"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            },
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "type": "block",
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:            "vg_name": "ceph_vg2"
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:        }
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]:    ]
Oct 11 04:21:41 np0005481065 upbeat_newton[150895]: }
Oct 11 04:21:41 np0005481065 systemd[1]: libpod-e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df.scope: Deactivated successfully.
Oct 11 04:21:41 np0005481065 podman[150837]: 2025-10-11 08:21:41.161712576 +0000 UTC m=+0.991580219 container died e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:21:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cde4c5ed332a7ea852764f494ae6891824f02d055bbe9293ee934ec45406038d-merged.mount: Deactivated successfully.
Oct 11 04:21:41 np0005481065 podman[150837]: 2025-10-11 08:21:41.242443227 +0000 UTC m=+1.072310850 container remove e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:21:41 np0005481065 systemd[1]: libpod-conmon-e6b2f21e4411a2dbe32b59c99b5ad8e39e6501f61f6426dc8ed249bdb83254df.scope: Deactivated successfully.
Oct 11 04:21:41 np0005481065 podman[151285]: 2025-10-11 08:21:41.95753471 +0000 UTC m=+0.060999593 container create 60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:21:42 np0005481065 systemd[1]: Started libpod-conmon-60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f.scope.
Oct 11 04:21:42 np0005481065 podman[151285]: 2025-10-11 08:21:41.931495458 +0000 UTC m=+0.034960351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:42 np0005481065 podman[151285]: 2025-10-11 08:21:42.062125681 +0000 UTC m=+0.165590604 container init 60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:21:42 np0005481065 podman[151285]: 2025-10-11 08:21:42.073805698 +0000 UTC m=+0.177270551 container start 60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:21:42 np0005481065 podman[151285]: 2025-10-11 08:21:42.077532896 +0000 UTC m=+0.180997819 container attach 60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:21:42 np0005481065 vigilant_hellman[151342]: 167 167
Oct 11 04:21:42 np0005481065 systemd[1]: libpod-60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f.scope: Deactivated successfully.
Oct 11 04:21:42 np0005481065 podman[151285]: 2025-10-11 08:21:42.083612101 +0000 UTC m=+0.187076984 container died 60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:21:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-365f43794fba939e93dfce66c88f607901f49a5104f96afa866a678fe48ced1f-merged.mount: Deactivated successfully.
Oct 11 04:21:42 np0005481065 podman[151285]: 2025-10-11 08:21:42.140328439 +0000 UTC m=+0.243793322 container remove 60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:21:42 np0005481065 systemd[1]: libpod-conmon-60dd14c3d578f57155a8e1b525585b958021f8eab477acfd3df248f1bf94481f.scope: Deactivated successfully.
Oct 11 04:21:42 np0005481065 podman[151398]: 2025-10-11 08:21:42.350697195 +0000 UTC m=+0.060581331 container create 3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:21:42 np0005481065 python3.9[151390]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 11 04:21:42 np0005481065 systemd[1]: Started libpod-conmon-3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd.scope.
Oct 11 04:21:42 np0005481065 podman[151398]: 2025-10-11 08:21:42.321352218 +0000 UTC m=+0.031236404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:21:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b414d6194238cae255c877b2aa1802d704e12a18a00298cfd003c1d9a9536a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b414d6194238cae255c877b2aa1802d704e12a18a00298cfd003c1d9a9536a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b414d6194238cae255c877b2aa1802d704e12a18a00298cfd003c1d9a9536a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b414d6194238cae255c877b2aa1802d704e12a18a00298cfd003c1d9a9536a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:42 np0005481065 podman[151398]: 2025-10-11 08:21:42.47619031 +0000 UTC m=+0.186074486 container init 3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:21:42 np0005481065 podman[151398]: 2025-10-11 08:21:42.492654365 +0000 UTC m=+0.202538491 container start 3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:21:42 np0005481065 podman[151398]: 2025-10-11 08:21:42.496555998 +0000 UTC m=+0.206440134 container attach 3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:21:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:43 np0005481065 python3.9[151576]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]: {
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "osd_id": 2,
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "type": "bluestore"
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:    },
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "osd_id": 0,
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "type": "bluestore"
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:    },
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "osd_id": 1,
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:        "type": "bluestore"
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]:    }
Oct 11 04:21:43 np0005481065 brave_mestorf[151415]: }
Oct 11 04:21:43 np0005481065 systemd[1]: libpod-3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd.scope: Deactivated successfully.
Oct 11 04:21:43 np0005481065 systemd[1]: libpod-3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd.scope: Consumed 1.115s CPU time.
Oct 11 04:21:43 np0005481065 podman[151398]: 2025-10-11 08:21:43.598275467 +0000 UTC m=+1.308159583 container died 3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:21:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-37b414d6194238cae255c877b2aa1802d704e12a18a00298cfd003c1d9a9536a-merged.mount: Deactivated successfully.
Oct 11 04:21:43 np0005481065 podman[151398]: 2025-10-11 08:21:43.670291477 +0000 UTC m=+1.380175583 container remove 3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:21:43 np0005481065 systemd[1]: libpod-conmon-3c14017782da2ece5f41b4f251bb84c290b52029b1f7e9d3eeec4fd2c0c8e7bd.scope: Deactivated successfully.
Oct 11 04:21:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:21:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:21:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:43 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 18a84dcf-9db9-4ac6-a7b4-9df535478747 does not exist
Oct 11 04:21:43 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 92502dab-7421-4030-8301-953c9e455c94 does not exist
Oct 11 04:21:44 np0005481065 python3.9[151815]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:21:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:21:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:46 np0005481065 python3[151994]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:21:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:52 np0005481065 podman[152007]: 2025-10-11 08:21:52.075300082 +0000 UTC m=+5.495916919 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 11 04:21:52 np0005481065 podman[152126]: 2025-10-11 08:21:52.312936595 +0000 UTC m=+0.068540260 container create afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:21:52 np0005481065 podman[152126]: 2025-10-11 08:21:52.279605453 +0000 UTC m=+0.035209128 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 11 04:21:52 np0005481065 python3[151994]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 11 04:21:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:53 np0005481065 python3.9[152316]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:21:54 np0005481065 python3.9[152470]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:21:54
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'images']
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:21:54 np0005481065 python3.9[152546]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:21:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:21:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5410 writes, 23K keys, 5410 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5410 writes, 775 syncs, 6.98 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5410 writes, 23K keys, 5410 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s#012Interval WAL: 5410 writes, 775 syncs, 6.98 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
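The `#012` sequences inside the ceph-osd record above are the syslog escape for embedded newlines (octal 012 = `\n`): journald/rsyslog flatten RocksDB's multi-line stats dump onto one record. A minimal sketch for expanding the escapes back into readable tables, assuming GNU sed (which accepts `\n` in the replacement; BSD sed needs a literal newline) and a saved copy of this log at the hypothetical path `osd.log`:

```shell
# "#012" is the syslog escape for an embedded newline (octal 012 = '\n').
# Expanding it restores the multi-line RocksDB stats tables in this log.
# osd.log is a placeholder for a saved copy of this journal.
sed 's/#012/\n/g' osd.log
```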
Oct 11 04:21:55 np0005481065 python3.9[152697]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170914.9485123-581-150457462293310/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:21:56 np0005481065 python3.9[152773]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:21:56 np0005481065 systemd[1]: Reloading.
Oct 11 04:21:56 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:56 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:57 np0005481065 python3.9[152887]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:21:57 np0005481065 systemd[1]: Reloading.
Oct 11 04:21:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:21:57 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:57 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:58 np0005481065 systemd[1]: Starting ovn_controller container...
Oct 11 04:21:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:21:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fabd6668b7f95badea804c38403ffd5a9bd1bfe45551ec01f4349b791302dc9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 11 04:21:58 np0005481065 systemd[1]: Started /usr/bin/podman healthcheck run afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143.
Oct 11 04:21:58 np0005481065 podman[152929]: 2025-10-11 08:21:58.217441335 +0000 UTC m=+0.156748908 container init afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + sudo -E kolla_set_configs
Oct 11 04:21:58 np0005481065 podman[152929]: 2025-10-11 08:21:58.259101599 +0000 UTC m=+0.198409112 container start afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:21:58 np0005481065 edpm-start-podman-container[152929]: ovn_controller
Oct 11 04:21:58 np0005481065 systemd[1]: Created slice User Slice of UID 0.
Oct 11 04:21:58 np0005481065 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 04:21:58 np0005481065 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 04:21:58 np0005481065 systemd[1]: Starting User Manager for UID 0...
Oct 11 04:21:58 np0005481065 edpm-start-podman-container[152928]: Creating additional drop-in dependency for "ovn_controller" (afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143)
Oct 11 04:21:58 np0005481065 podman[152952]: 2025-10-11 08:21:58.389881956 +0000 UTC m=+0.111642886 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:21:58 np0005481065 systemd[1]: afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143-631ae506b0ced578.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:21:58 np0005481065 systemd[1]: afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143-631ae506b0ced578.service: Failed with result 'exit-code'.
Oct 11 04:21:58 np0005481065 systemd[1]: Reloading.
Oct 11 04:21:58 np0005481065 systemd[152978]: Queued start job for default target Main User Target.
Oct 11 04:21:58 np0005481065 systemd[152978]: Created slice User Application Slice.
Oct 11 04:21:58 np0005481065 systemd[152978]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 04:21:58 np0005481065 systemd[152978]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:21:58 np0005481065 systemd[152978]: Reached target Paths.
Oct 11 04:21:58 np0005481065 systemd[152978]: Reached target Timers.
Oct 11 04:21:58 np0005481065 systemd[152978]: Starting D-Bus User Message Bus Socket...
Oct 11 04:21:58 np0005481065 systemd[152978]: Starting Create User's Volatile Files and Directories...
Oct 11 04:21:58 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:21:58 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:21:58 np0005481065 systemd[152978]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:21:58 np0005481065 systemd[152978]: Finished Create User's Volatile Files and Directories.
Oct 11 04:21:58 np0005481065 systemd[152978]: Reached target Sockets.
Oct 11 04:21:58 np0005481065 systemd[152978]: Reached target Basic System.
Oct 11 04:21:58 np0005481065 systemd[152978]: Reached target Main User Target.
Oct 11 04:21:58 np0005481065 systemd[152978]: Startup finished in 166ms.
Oct 11 04:21:58 np0005481065 systemd[1]: Started User Manager for UID 0.
Oct 11 04:21:58 np0005481065 systemd[1]: Started ovn_controller container.
Oct 11 04:21:58 np0005481065 systemd[1]: Started Session c1 of User root.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: INFO:__main__:Validating config file
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: INFO:__main__:Writing out command to execute
Oct 11 04:21:58 np0005481065 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: ++ cat /run_command
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + ARGS=
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + sudo kolla_copy_cacerts
Oct 11 04:21:58 np0005481065 systemd[1]: Started Session c2 of User root.
Oct 11 04:21:58 np0005481065 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + [[ ! -n '' ]]
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + . kolla_extend_start
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + umask 0022
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9037] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9046] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9060] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9067] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9073] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 04:21:58 np0005481065 kernel: br-int: entered promiscuous mode
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:21:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:21:58Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9304] manager: (ovn-1596c6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 11 04:21:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:21:58 np0005481065 kernel: genev_sys_6081: entered promiscuous mode
Oct 11 04:21:58 np0005481065 systemd-udevd[153086]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9505] device (genev_sys_6081): carrier: link connected
Oct 11 04:21:58 np0005481065 NetworkManager[44960]: <info>  [1760170918.9508] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 11 04:21:58 np0005481065 systemd-udevd[153080]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:21:59 np0005481065 python3.9[153213]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:21:59 np0005481065 ovs-vsctl[153214]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 11 04:22:00 np0005481065 python3.9[153366]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:22:00 np0005481065 ovs-vsctl[153368]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 11 04:22:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:22:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6532 writes, 27K keys, 6532 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6532 writes, 1136 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6532 writes, 27K keys, 6532 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s#012Interval WAL: 6532 writes, 1136 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 11 04:22:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:01 np0005481065 python3.9[153521]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:22:01 np0005481065 ovs-vsctl[153522]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 11 04:22:02 np0005481065 systemd[1]: session-47.scope: Deactivated successfully.
Oct 11 04:22:02 np0005481065 systemd[1]: session-47.scope: Consumed 1min 9.266s CPU time.
Oct 11 04:22:02 np0005481065 systemd-logind[819]: Session 47 logged out. Waiting for processes to exit.
Oct 11 04:22:02 np0005481065 systemd-logind[819]: Removed session 47.
Oct 11 04:22:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:22:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:22:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.3 total, 600.0 interval
Cumulative writes: 5496 writes, 23K keys, 5496 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5496 writes, 788 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5496 writes, 23K keys, 5496 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
Interval WAL: 5496 writes, 788 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 11 04:22:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:07 np0005481065 systemd-logind[819]: New session 49 of user zuul.
Oct 11 04:22:07 np0005481065 systemd[1]: Started Session 49 of User zuul.
Oct 11 04:22:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 04:22:08 np0005481065 python3.9[153700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:22:08 np0005481065 systemd[1]: Stopping User Manager for UID 0...
Oct 11 04:22:08 np0005481065 systemd[152978]: Activating special unit Exit the Session...
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped target Main User Target.
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped target Basic System.
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped target Paths.
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped target Sockets.
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped target Timers.
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 04:22:08 np0005481065 systemd[152978]: Closed D-Bus User Message Bus Socket.
Oct 11 04:22:08 np0005481065 systemd[152978]: Stopped Create User's Volatile Files and Directories.
Oct 11 04:22:08 np0005481065 systemd[152978]: Removed slice User Application Slice.
Oct 11 04:22:08 np0005481065 systemd[152978]: Reached target Shutdown.
Oct 11 04:22:08 np0005481065 systemd[152978]: Finished Exit the Session.
Oct 11 04:22:08 np0005481065 systemd[152978]: Reached target Exit the Session.
Oct 11 04:22:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:08 np0005481065 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 04:22:08 np0005481065 systemd[1]: Stopped User Manager for UID 0.
Oct 11 04:22:08 np0005481065 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 04:22:08 np0005481065 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 04:22:08 np0005481065 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 04:22:08 np0005481065 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 04:22:08 np0005481065 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 04:22:09 np0005481065 python3.9[153859]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:10 np0005481065 python3.9[154011]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:11 np0005481065 python3.9[154163]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:12 np0005481065 python3.9[154315]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:12 np0005481065 python3.9[154467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:13 np0005481065 python3.9[154617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:22:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:15 np0005481065 python3.9[154769]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 11 04:22:16 np0005481065 python3.9[154919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:17 np0005481065 python3.9[155041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170935.9430563-86-268770536280527/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:18 np0005481065 python3.9[155191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:19 np0005481065 python3.9[155312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170937.7595243-101-37812124020075/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:20 np0005481065 python3.9[155464]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:22:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:21 np0005481065 python3.9[155548]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:22:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:23 np0005481065 python3.9[155701]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:24 np0005481065 python3.9[155854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:25 np0005481065 python3.9[155975]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170944.2592044-138-245943408367796/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:26 np0005481065 python3.9[156125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:27 np0005481065 python3.9[156246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170945.8078263-138-87177835483022/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:28 np0005481065 python3.9[156396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:22:28Z|00025|memory|INFO|16384 kB peak resident set size after 29.9 seconds
Oct 11 04:22:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:22:28Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 11 04:22:28 np0005481065 podman[156444]: 2025-10-11 08:22:28.855261712 +0000 UTC m=+0.143174934 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:22:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:29 np0005481065 python3.9[156543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170947.863185-182-257643296386510/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:29 np0005481065 python3.9[156693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:30 np0005481065 python3.9[156814]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170949.33905-182-246703101854366/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:31 np0005481065 python3.9[156964]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:22:32 np0005481065 python3.9[157118]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:33 np0005481065 python3.9[157270]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:33 np0005481065 python3.9[157348]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:34 np0005481065 python3.9[157500]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:35 np0005481065 python3.9[157578]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:36 np0005481065 python3.9[157730]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:37 np0005481065 python3.9[157882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:37 np0005481065 python3.9[157960]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:38 np0005481065 python3.9[158112]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:38 np0005481065 python3.9[158190]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:39 np0005481065 python3.9[158342]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:22:39 np0005481065 systemd[1]: Reloading.
Oct 11 04:22:40 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:22:40 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:22:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:41 np0005481065 python3.9[158531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:41 np0005481065 python3.9[158609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:42 np0005481065 python3.9[158761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:43 np0005481065 python3.9[158839]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:44 np0005481065 python3.9[158991]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:22:44 np0005481065 systemd[1]: Reloading.
Oct 11 04:22:44 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:22:44 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:22:44 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:22:44 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:22:44 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:22:44 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:22:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:22:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9f67c864-a8c0-4a79-b31c-c61a898b3e7f does not exist
Oct 11 04:22:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6a7324ef-0391-464f-b48a-ddef63829dab does not exist
Oct 11 04:22:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev db6232c7-c33c-4f98-9917-6406ca0556df does not exist
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:22:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:22:45 np0005481065 python3.9[159342]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.032384308 +0000 UTC m=+0.071074953 container create 84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:22:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:22:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:22:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:22:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:22:46 np0005481065 systemd[1]: Started libpod-conmon-84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25.scope.
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.001864937 +0000 UTC m=+0.040555642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.160144396 +0000 UTC m=+0.198835071 container init 84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.171513745 +0000 UTC m=+0.210204380 container start 84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.17584387 +0000 UTC m=+0.214534515 container attach 84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:22:46 np0005481065 amazing_keldysh[159584]: 167 167
Oct 11 04:22:46 np0005481065 systemd[1]: libpod-84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25.scope: Deactivated successfully.
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.179745712 +0000 UTC m=+0.218436347 container died 84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:22:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-81c4a3415789f61528cb66063ec59914afb9b740215c843622758cf970e413f1-merged.mount: Deactivated successfully.
Oct 11 04:22:46 np0005481065 podman[159536]: 2025-10-11 08:22:46.231329092 +0000 UTC m=+0.270019727 container remove 84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:46 np0005481065 systemd[1]: libpod-conmon-84575263d0a2cebf05ec972a8ce906957c47e9865f1dab0d9fd81c7775197e25.scope: Deactivated successfully.
Oct 11 04:22:46 np0005481065 podman[159652]: 2025-10-11 08:22:46.476213202 +0000 UTC m=+0.084000927 container create 48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:22:46 np0005481065 python3.9[159646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:46 np0005481065 podman[159652]: 2025-10-11 08:22:46.438168953 +0000 UTC m=+0.045956748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:46 np0005481065 systemd[1]: Started libpod-conmon-48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c.scope.
Oct 11 04:22:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:22:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e0ff7ee45ce70a9d656af17bd0310aa07bff8b121516d114154e4bca946e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e0ff7ee45ce70a9d656af17bd0310aa07bff8b121516d114154e4bca946e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e0ff7ee45ce70a9d656af17bd0310aa07bff8b121516d114154e4bca946e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e0ff7ee45ce70a9d656af17bd0310aa07bff8b121516d114154e4bca946e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e0ff7ee45ce70a9d656af17bd0310aa07bff8b121516d114154e4bca946e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:46 np0005481065 podman[159652]: 2025-10-11 08:22:46.597635777 +0000 UTC m=+0.205423542 container init 48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_maxwell, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:22:46 np0005481065 podman[159652]: 2025-10-11 08:22:46.610374975 +0000 UTC m=+0.218162710 container start 48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_maxwell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:22:46 np0005481065 podman[159652]: 2025-10-11 08:22:46.61504555 +0000 UTC m=+0.222833325 container attach 48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:22:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:47 np0005481065 python3.9[159795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760170965.78051-333-79479471335296/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:47 np0005481065 compassionate_maxwell[159668]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:22:47 np0005481065 compassionate_maxwell[159668]: --> relative data size: 1.0
Oct 11 04:22:47 np0005481065 compassionate_maxwell[159668]: --> All data devices are unavailable
Oct 11 04:22:47 np0005481065 systemd[1]: libpod-48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c.scope: Deactivated successfully.
Oct 11 04:22:47 np0005481065 systemd[1]: libpod-48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c.scope: Consumed 1.063s CPU time.
Oct 11 04:22:47 np0005481065 podman[159652]: 2025-10-11 08:22:47.733999684 +0000 UTC m=+1.341787419 container died 48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_maxwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:22:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-927e0ff7ee45ce70a9d656af17bd0310aa07bff8b121516d114154e4bca946e4-merged.mount: Deactivated successfully.
Oct 11 04:22:47 np0005481065 podman[159652]: 2025-10-11 08:22:47.855036639 +0000 UTC m=+1.462824364 container remove 48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_maxwell, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:47 np0005481065 systemd[1]: libpod-conmon-48d9b1c848b9c74d32e17baff947219fecdc3c1c7000c25a53e899ec2602194c.scope: Deactivated successfully.
Oct 11 04:22:48 np0005481065 python3.9[160006]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:22:48 np0005481065 podman[160201]: 2025-10-11 08:22:48.756690869 +0000 UTC m=+0.102385866 container create 8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:22:48 np0005481065 podman[160201]: 2025-10-11 08:22:48.697094659 +0000 UTC m=+0.042789716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:48 np0005481065 systemd[1]: Started libpod-conmon-8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3.scope.
Oct 11 04:22:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:22:48 np0005481065 podman[160201]: 2025-10-11 08:22:48.934739029 +0000 UTC m=+0.280434076 container init 8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 04:22:48 np0005481065 podman[160201]: 2025-10-11 08:22:48.947458266 +0000 UTC m=+0.293153283 container start 8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:22:48 np0005481065 serene_bhabha[160264]: 167 167
Oct 11 04:22:48 np0005481065 systemd[1]: libpod-8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3.scope: Deactivated successfully.
Oct 11 04:22:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:48 np0005481065 podman[160201]: 2025-10-11 08:22:48.977787362 +0000 UTC m=+0.323482389 container attach 8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:22:48 np0005481065 podman[160201]: 2025-10-11 08:22:48.978886114 +0000 UTC m=+0.324581121 container died 8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:22:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e4d94e6464cf475d7d8a4de85c436c066f3b6a10acb3b5a71600e6a7d63d4643-merged.mount: Deactivated successfully.
Oct 11 04:22:49 np0005481065 podman[160201]: 2025-10-11 08:22:49.153058032 +0000 UTC m=+0.498753039 container remove 8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:22:49 np0005481065 systemd[1]: libpod-conmon-8cc2162424971e3d127f97eb341e7056775e7fe40c88485b3e3e2f1c4a092bb3.scope: Deactivated successfully.
Oct 11 04:22:49 np0005481065 python3.9[160296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:22:49 np0005481065 podman[160319]: 2025-10-11 08:22:49.40097354 +0000 UTC m=+0.071932228 container create 8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:22:49 np0005481065 systemd[1]: Started libpod-conmon-8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267.scope.
Oct 11 04:22:49 np0005481065 podman[160319]: 2025-10-11 08:22:49.372289771 +0000 UTC m=+0.043248519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:22:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64267950edd78e624f10debd304d0c606c633f6285134bf6f4d229d5bd3d0a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64267950edd78e624f10debd304d0c606c633f6285134bf6f4d229d5bd3d0a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64267950edd78e624f10debd304d0c606c633f6285134bf6f4d229d5bd3d0a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64267950edd78e624f10debd304d0c606c633f6285134bf6f4d229d5bd3d0a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:49 np0005481065 podman[160319]: 2025-10-11 08:22:49.531054705 +0000 UTC m=+0.202013443 container init 8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:49 np0005481065 podman[160319]: 2025-10-11 08:22:49.545857242 +0000 UTC m=+0.216815930 container start 8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:22:49 np0005481065 podman[160319]: 2025-10-11 08:22:49.549910119 +0000 UTC m=+0.220868857 container attach 8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:22:50 np0005481065 python3.9[160463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760170968.5272412-358-40460493448015/.source.json _original_basename=.6t1j6ywa follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]: {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:    "0": [
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:        {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "devices": [
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "/dev/loop3"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            ],
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_name": "ceph_lv0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_size": "21470642176",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "name": "ceph_lv0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "tags": {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cluster_name": "ceph",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.crush_device_class": "",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.encrypted": "0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osd_id": "0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.type": "block",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.vdo": "0"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            },
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "type": "block",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "vg_name": "ceph_vg0"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:        }
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:    ],
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:    "1": [
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:        {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "devices": [
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "/dev/loop4"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            ],
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_name": "ceph_lv1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_size": "21470642176",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "name": "ceph_lv1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "tags": {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cluster_name": "ceph",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.crush_device_class": "",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.encrypted": "0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osd_id": "1",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.type": "block",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.vdo": "0"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            },
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "type": "block",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "vg_name": "ceph_vg1"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:        }
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:    ],
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:    "2": [
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:        {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "devices": [
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "/dev/loop5"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            ],
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_name": "ceph_lv2",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_size": "21470642176",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "name": "ceph_lv2",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "tags": {
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.cluster_name": "ceph",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.crush_device_class": "",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.encrypted": "0",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osd_id": "2",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.type": "block",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:                "ceph.vdo": "0"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            },
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "type": "block",
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:            "vg_name": "ceph_vg2"
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:        }
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]:    ]
Oct 11 04:22:50 np0005481065 lucid_leavitt[160359]: }
Oct 11 04:22:50 np0005481065 systemd[1]: libpod-8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267.scope: Deactivated successfully.
Oct 11 04:22:50 np0005481065 conmon[160359]: conmon 8b9baf6a14a9253ec852 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267.scope/container/memory.events
Oct 11 04:22:50 np0005481065 podman[160319]: 2025-10-11 08:22:50.367749961 +0000 UTC m=+1.038708689 container died 8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:22:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-64267950edd78e624f10debd304d0c606c633f6285134bf6f4d229d5bd3d0a3b-merged.mount: Deactivated successfully.
Oct 11 04:22:50 np0005481065 podman[160319]: 2025-10-11 08:22:50.461152957 +0000 UTC m=+1.132111645 container remove 8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:22:50 np0005481065 systemd[1]: libpod-conmon-8b9baf6a14a9253ec852ac78016cb9cd308c5ae91628a44b700c1f7353866267.scope: Deactivated successfully.
Oct 11 04:22:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:51 np0005481065 python3.9[160716]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.365900828 +0000 UTC m=+0.058618444 container create 09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_agnesi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:22:51 np0005481065 systemd[1]: Started libpod-conmon-09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879.scope.
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.339472745 +0000 UTC m=+0.032190411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.48376033 +0000 UTC m=+0.176478016 container init 09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_agnesi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.496104747 +0000 UTC m=+0.188822393 container start 09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.503913352 +0000 UTC m=+0.196631038 container attach 09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:22:51 np0005481065 adoring_agnesi[160840]: 167 167
Oct 11 04:22:51 np0005481065 systemd[1]: libpod-09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879.scope: Deactivated successfully.
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.505936491 +0000 UTC m=+0.198654127 container died 09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_agnesi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:22:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f966b370a1fd74bebb451bdfe17810abe05fce1af2b284184adf01553f8d0253-merged.mount: Deactivated successfully.
Oct 11 04:22:51 np0005481065 podman[160798]: 2025-10-11 08:22:51.556326245 +0000 UTC m=+0.249043891 container remove 09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_agnesi, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:22:51 np0005481065 systemd[1]: libpod-conmon-09c5728ca9c790a553a9a8ea5af8f796c1ae6b16c55f0d875ab63aea585e5879.scope: Deactivated successfully.
Oct 11 04:22:51 np0005481065 podman[160933]: 2025-10-11 08:22:51.81974771 +0000 UTC m=+0.095550939 container create 9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_liskov, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:22:51 np0005481065 systemd[1]: Started libpod-conmon-9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205.scope.
Oct 11 04:22:51 np0005481065 podman[160933]: 2025-10-11 08:22:51.802314517 +0000 UTC m=+0.078117766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:22:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:22:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577150418c0cbe96036ab30c3d5341c96926b2294f148289ca8630f640aad64c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577150418c0cbe96036ab30c3d5341c96926b2294f148289ca8630f640aad64c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577150418c0cbe96036ab30c3d5341c96926b2294f148289ca8630f640aad64c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577150418c0cbe96036ab30c3d5341c96926b2294f148289ca8630f640aad64c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:22:51 np0005481065 podman[160933]: 2025-10-11 08:22:51.948161208 +0000 UTC m=+0.223964437 container init 9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_liskov, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:22:51 np0005481065 podman[160933]: 2025-10-11 08:22:51.968540966 +0000 UTC m=+0.244344195 container start 9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:22:51 np0005481065 podman[160933]: 2025-10-11 08:22:51.972057708 +0000 UTC m=+0.247860937 container attach 9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:22:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:53 np0005481065 confident_liskov[160979]: {
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "osd_id": 2,
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "type": "bluestore"
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:    },
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "osd_id": 0,
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "type": "bluestore"
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:    },
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "osd_id": 1,
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:        "type": "bluestore"
Oct 11 04:22:53 np0005481065 confident_liskov[160979]:    }
Oct 11 04:22:53 np0005481065 confident_liskov[160979]: }
Oct 11 04:22:53 np0005481065 systemd[1]: libpod-9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205.scope: Deactivated successfully.
Oct 11 04:22:53 np0005481065 podman[160933]: 2025-10-11 08:22:53.089208359 +0000 UTC m=+1.365011628 container died 9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:22:53 np0005481065 systemd[1]: libpod-9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205.scope: Consumed 1.129s CPU time.
Oct 11 04:22:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-577150418c0cbe96036ab30c3d5341c96926b2294f148289ca8630f640aad64c-merged.mount: Deactivated successfully.
Oct 11 04:22:53 np0005481065 podman[160933]: 2025-10-11 08:22:53.181269727 +0000 UTC m=+1.457072986 container remove 9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_liskov, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:22:53 np0005481065 systemd[1]: libpod-conmon-9f39bc35c30c1a247572597fd1222b82a2e0c3d8ca3df3e16576906d3a213205.scope: Deactivated successfully.
Oct 11 04:22:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:22:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:22:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:22:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:22:53 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 322a0ba0-62d9-4b98-9558-876511248eb3 does not exist
Oct 11 04:22:53 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f8bbcf6c-44a1-4c3f-80fb-b7419bb20d31 does not exist
Oct 11 04:22:54 np0005481065 python3.9[161347]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 11 04:22:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:22:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:22:54
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'vms']
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:22:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:55 np0005481065 python3.9[161499]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:22:56 np0005481065 python3.9[161651]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:22:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:22:58 np0005481065 python3[161830]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:22:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:22:59 np0005481065 podman[161878]: 2025-10-11 08:22:59.795691866 +0000 UTC m=+0.096270780 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 04:23:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:23:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:06 np0005481065 podman[161845]: 2025-10-11 08:23:06.798767696 +0000 UTC m=+8.576169855 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:23:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:07 np0005481065 podman[161992]: 2025-10-11 08:23:07.074794115 +0000 UTC m=+0.074655276 container create 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:23:07 np0005481065 podman[161992]: 2025-10-11 08:23:07.034113031 +0000 UTC m=+0.033974242 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:23:07 np0005481065 python3[161830]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:23:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:08 np0005481065 python3.9[162182]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:23:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:09 np0005481065 python3.9[162336]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:09 np0005481065 python3.9[162412]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:23:10 np0005481065 python3.9[162563]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760170989.6631498-446-180005770717980/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:11 np0005481065 python3.9[162639]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:23:11 np0005481065 systemd[1]: Reloading.
Oct 11 04:23:11 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:23:11 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:23:12 np0005481065 python3.9[162750]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:12 np0005481065 systemd[1]: Reloading.
Oct 11 04:23:12 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:23:12 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:23:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:12 np0005481065 systemd[1]: Starting ovn_metadata_agent container...
Oct 11 04:23:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:23:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686d6749304efbe68ee901984da690cf49c8c45d4e1df4e5e3b186df119761b1/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686d6749304efbe68ee901984da690cf49c8c45d4e1df4e5e3b186df119761b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:13 np0005481065 systemd[1]: Started /usr/bin/podman healthcheck run 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d.
Oct 11 04:23:13 np0005481065 podman[162793]: 2025-10-11 08:23:13.046720837 +0000 UTC m=+0.169726211 container init 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + sudo -E kolla_set_configs
Oct 11 04:23:13 np0005481065 podman[162793]: 2025-10-11 08:23:13.085804795 +0000 UTC m=+0.208810149 container start 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:23:13 np0005481065 edpm-start-podman-container[162793]: ovn_metadata_agent
Oct 11 04:23:13 np0005481065 edpm-start-podman-container[162792]: Creating additional drop-in dependency for "ovn_metadata_agent" (96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d)
Oct 11 04:23:13 np0005481065 systemd[1]: Reloading.
Oct 11 04:23:13 np0005481065 podman[162816]: 2025-10-11 08:23:13.224410967 +0000 UTC m=+0.119462270 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Validating config file
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Copying service configuration files
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Writing out command to execute
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: ++ cat /run_command
Oct 11 04:23:13 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + CMD=neutron-ovn-metadata-agent
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + ARGS=
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + sudo kolla_copy_cacerts
Oct 11 04:23:13 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: Running command: 'neutron-ovn-metadata-agent'
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + [[ ! -n '' ]]
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + . kolla_extend_start
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + umask 0022
Oct 11 04:23:13 np0005481065 ovn_metadata_agent[162810]: + exec neutron-ovn-metadata-agent
Oct 11 04:23:13 np0005481065 systemd[1]: Started ovn_metadata_agent container.
Oct 11 04:23:13 np0005481065 systemd[1]: session-49.scope: Deactivated successfully.
Oct 11 04:23:13 np0005481065 systemd[1]: session-49.scope: Consumed 1min 5.974s CPU time.
Oct 11 04:23:13 np0005481065 systemd-logind[819]: Session 49 logged out. Waiting for processes to exit.
Oct 11 04:23:13 np0005481065 systemd-logind[819]: Removed session 49.
Oct 11 04:23:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.114 162815 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.114 162815 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.115 162815 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.115 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.115 162815 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.115 162815 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.115 162815 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.115 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.116 162815 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.117 162815 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.118 162815 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.119 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.120 162815 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.121 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.122 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.123 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.124 162815 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.125 162815 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.126 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.127 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.128 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.129 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.130 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.131 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.132 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.133 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.134 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.148 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.148 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.160 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.161 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.161 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.161 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.162 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct 11 04:23:15 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.176 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bec9a5e2-82b0-42f0-811d-08d245f1dc66 (UUID: bec9a5e2-82b0-42f0-811d-08d245f1dc66) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.207 162815 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.211 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.218 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.231 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bec9a5e2-82b0-42f0-811d-08d245f1dc66'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], external_ids={}, name=bec9a5e2-82b0-42f0-811d-08d245f1dc66, nb_cfg_timestamp=1760170926930, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.232 162815 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f6582ad0f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.234 162815 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.237 162815 DEBUG oslo_service.service [-] Started child 162924 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.240 162815 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmplkui4esc/privsep.sock']#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.241 162924 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-497889'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.265 162924 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.266 162924 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.266 162924 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.269 162924 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.276 162924 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.281 162924 INFO eventlet.wsgi.server [-] (162924) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct 11 04:23:15 np0005481065 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.902 162815 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.903 162815 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplkui4esc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.770 162929 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.775 162929 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.777 162929 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.777 162929 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162929#033[00m
Oct 11 04:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.907 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[02689a7c-be2c-4dd7-a7a8-e7844b2f5bfe]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:23:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.439 162929 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:23:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.439 162929 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:23:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.439 162929 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:23:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.986 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa0224e-81b9-4492-9fc5-a436096e91ea]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:23:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.989 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, column=external_ids, values=({'neutron:ovn-metadata-id': '9f3e73f3-77a0-5f6f-87b5-2f64343bab3a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.006 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.012 162815 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.018 162815 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.018 162815 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.018 162815 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.021 162815 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.021 162815 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.021 162815 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.037 162815 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.037 162815 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.037 162815 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.038 162815 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.038 162815 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.038 162815 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.039 162815 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.039 162815 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.039 162815 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.041 162815 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.041 162815 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.041 162815 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.045 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.045 162815 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.045 162815 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.056 162815 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.056 162815 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.056 162815 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:23:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.074 162815 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 11 04:23:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:19 np0005481065 systemd-logind[819]: New session 50 of user zuul.
Oct 11 04:23:19 np0005481065 systemd[1]: Started Session 50 of User zuul.
Oct 11 04:23:20 np0005481065 python3.9[163087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:23:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:21 np0005481065 python3.9[163243]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:23 np0005481065 python3.9[163408]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:23:23 np0005481065 systemd[1]: Reloading.
Oct 11 04:23:23 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:23:23 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:23:24 np0005481065 python3.9[163593]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:24 np0005481065 network[163610]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:23:24 np0005481065 network[163611]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:23:24 np0005481065 network[163612]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:23:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:29 np0005481065 python3.9[163877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:30 np0005481065 python3.9[164030]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:31 np0005481065 python3.9[164183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:32 np0005481065 podman[164308]: 2025-10-11 08:23:32.008197164 +0000 UTC m=+0.159198682 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:23:32 np0005481065 python3.9[164349]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:23:33 np0005481065 python3.9[164515]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:34 np0005481065 python3.9[164668]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:23:35 np0005481065 python3.9[164821]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:23:36 np0005481065 python3.9[164974]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:23:37 np0005481065 python3.9[165126]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:38 np0005481065 python3.9[165278]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:38 np0005481065 python3.9[165430]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:23:39 np0005481065 python3.9[165582]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:40 np0005481065 python3.9[165734]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:23:41 np0005481065 python3.9[165886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:42 np0005481065 python3.9[166038]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:42 np0005481065 python3.9[166190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:23:43 np0005481065 python3.9[166342]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:43 np0005481065 podman[166344]: 2025-10-11 08:23:43.779736914 +0000 UTC m=+0.085162721 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 04:23:44 np0005481065 python3.9[166513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:45 np0005481065 python3.9[166665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:46 np0005481065 python3.9[166817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:46 np0005481065 python3.9[166969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:23:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:47 np0005481065 python3.9[167121]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:48 np0005481065 python3.9[167273]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:23:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:50 np0005481065 python3.9[167425]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:23:50 np0005481065 systemd[1]: Reloading.
Oct 11 04:23:50 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:23:50 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:23:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:51 np0005481065 python3.9[167613]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:52 np0005481065 python3.9[167766]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:52 np0005481065 python3.9[167919]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:53 np0005481065 python3.9[168120]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:54 np0005481065 podman[168382]: 2025-10-11 08:23:54.578115015 +0000 UTC m=+0.104523109 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:23:54 np0005481065 podman[168382]: 2025-10-11 08:23:54.701474624 +0000 UTC m=+0.227882678 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:23:54
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups']
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:23:54 np0005481065 python3.9[168409]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:23:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:23:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:55 np0005481065 python3.9[168675]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:23:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:23:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:23:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:23:56 np0005481065 python3.9[168971]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:23:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bd212914-65b2-4aa1-a709-ad4a37d23c86 does not exist
Oct 11 04:23:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9460f98c-bb92-4225-aec7-727922ea6339 does not exist
Oct 11 04:23:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 18a86277-9beb-410d-bd76-c0e8ff387f6b does not exist
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:23:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:23:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.701433698 +0000 UTC m=+0.071555240 container create 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:23:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:23:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:23:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:23:57 np0005481065 systemd[1]: Started libpod-conmon-7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a.scope.
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.669550371 +0000 UTC m=+0.039672413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:23:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.821982497 +0000 UTC m=+0.192104089 container init 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.835148256 +0000 UTC m=+0.205269788 container start 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.839202982 +0000 UTC m=+0.209324574 container attach 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:23:57 np0005481065 sweet_antonelli[169302]: 167 167
Oct 11 04:23:57 np0005481065 systemd[1]: libpod-7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a.scope: Deactivated successfully.
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.842921479 +0000 UTC m=+0.213043001 container died 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:23:57 np0005481065 python3.9[169287]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 11 04:23:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-be13aa91c9a5b7ead75b543058011fca715d57382fd93933d48088a8cb3c1134-merged.mount: Deactivated successfully.
Oct 11 04:23:57 np0005481065 podman[169285]: 2025-10-11 08:23:57.899396004 +0000 UTC m=+0.269517546 container remove 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:23:57 np0005481065 systemd[1]: libpod-conmon-7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a.scope: Deactivated successfully.
Oct 11 04:23:58 np0005481065 podman[169351]: 2025-10-11 08:23:58.098971536 +0000 UTC m=+0.059742360 container create 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:23:58 np0005481065 systemd[1]: Started libpod-conmon-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope.
Oct 11 04:23:58 np0005481065 podman[169351]: 2025-10-11 08:23:58.083495281 +0000 UTC m=+0.044266115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:23:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:23:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:23:58 np0005481065 podman[169351]: 2025-10-11 08:23:58.208681873 +0000 UTC m=+0.169452697 container init 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:23:58 np0005481065 podman[169351]: 2025-10-11 08:23:58.219523735 +0000 UTC m=+0.180294599 container start 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:23:58 np0005481065 podman[169351]: 2025-10-11 08:23:58.223435547 +0000 UTC m=+0.184206481 container attach 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:23:58 np0005481065 python3.9[169499]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:23:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:23:59 np0005481065 intelligent_aryabhata[169415]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:23:59 np0005481065 intelligent_aryabhata[169415]: --> relative data size: 1.0
Oct 11 04:23:59 np0005481065 intelligent_aryabhata[169415]: --> All data devices are unavailable
Oct 11 04:23:59 np0005481065 systemd[1]: libpod-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope: Deactivated successfully.
Oct 11 04:23:59 np0005481065 systemd[1]: libpod-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope: Consumed 1.228s CPU time.
Oct 11 04:23:59 np0005481065 podman[169351]: 2025-10-11 08:23:59.517304245 +0000 UTC m=+1.478075109 container died 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:23:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2-merged.mount: Deactivated successfully.
Oct 11 04:23:59 np0005481065 podman[169351]: 2025-10-11 08:23:59.594040573 +0000 UTC m=+1.554811407 container remove 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:23:59 np0005481065 systemd[1]: libpod-conmon-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope: Deactivated successfully.
Oct 11 04:24:00 np0005481065 python3.9[169756]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.438424037 +0000 UTC m=+0.071219791 container create 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:24:00 np0005481065 systemd[1]: Started libpod-conmon-94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c.scope.
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.40865908 +0000 UTC m=+0.041454904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.547493255 +0000 UTC m=+0.180289069 container init 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.560652253 +0000 UTC m=+0.193447977 container start 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.565916925 +0000 UTC m=+0.198712679 container attach 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:24:00 np0005481065 practical_chandrasekhar[169883]: 167 167
Oct 11 04:24:00 np0005481065 systemd[1]: libpod-94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c.scope: Deactivated successfully.
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.572390571 +0000 UTC m=+0.205186355 container died 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:24:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a978568275c391bdbe3ba1850bc715e779d26569e39fe87ad757cd950809cc98-merged.mount: Deactivated successfully.
Oct 11 04:24:00 np0005481065 podman[169866]: 2025-10-11 08:24:00.631358598 +0000 UTC m=+0.264154352 container remove 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:24:00 np0005481065 systemd[1]: libpod-conmon-94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c.scope: Deactivated successfully.
Oct 11 04:24:00 np0005481065 podman[169959]: 2025-10-11 08:24:00.85664466 +0000 UTC m=+0.062906301 container create 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:24:00 np0005481065 systemd[1]: Started libpod-conmon-4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e.scope.
Oct 11 04:24:00 np0005481065 podman[169959]: 2025-10-11 08:24:00.824185436 +0000 UTC m=+0.030447127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:24:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:01 np0005481065 podman[169959]: 2025-10-11 08:24:01.024838849 +0000 UTC m=+0.231100450 container init 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:24:01 np0005481065 podman[169959]: 2025-10-11 08:24:01.039525952 +0000 UTC m=+0.245787543 container start 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:01 np0005481065 podman[169959]: 2025-10-11 08:24:01.043556438 +0000 UTC m=+0.249818039 container attach 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:24:01 np0005481065 python3.9[170056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]: {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:    "0": [
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:        {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "devices": [
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "/dev/loop3"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            ],
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_name": "ceph_lv0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_size": "21470642176",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "name": "ceph_lv0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "tags": {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cluster_name": "ceph",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.crush_device_class": "",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.encrypted": "0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osd_id": "0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.type": "block",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.vdo": "0"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            },
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "type": "block",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "vg_name": "ceph_vg0"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:        }
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:    ],
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:    "1": [
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:        {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "devices": [
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "/dev/loop4"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            ],
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_name": "ceph_lv1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_size": "21470642176",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "name": "ceph_lv1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "tags": {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cluster_name": "ceph",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.crush_device_class": "",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.encrypted": "0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osd_id": "1",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.type": "block",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.vdo": "0"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            },
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "type": "block",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "vg_name": "ceph_vg1"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:        }
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:    ],
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:    "2": [
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:        {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "devices": [
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "/dev/loop5"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            ],
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_name": "ceph_lv2",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_size": "21470642176",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "name": "ceph_lv2",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "tags": {
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.cluster_name": "ceph",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.crush_device_class": "",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.encrypted": "0",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osd_id": "2",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.type": "block",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:                "ceph.vdo": "0"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            },
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "type": "block",
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:            "vg_name": "ceph_vg2"
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:        }
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]:    ]
Oct 11 04:24:01 np0005481065 priceless_antonelli[170003]: }
Oct 11 04:24:01 np0005481065 systemd[1]: libpod-4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e.scope: Deactivated successfully.
Oct 11 04:24:01 np0005481065 podman[170081]: 2025-10-11 08:24:01.971496856 +0000 UTC m=+0.045929382 container died 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:24:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143-merged.mount: Deactivated successfully.
Oct 11 04:24:02 np0005481065 podman[170081]: 2025-10-11 08:24:02.054619208 +0000 UTC m=+0.129051754 container remove 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:24:02 np0005481065 systemd[1]: libpod-conmon-4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e.scope: Deactivated successfully.
Oct 11 04:24:02 np0005481065 podman[170131]: 2025-10-11 08:24:02.261376167 +0000 UTC m=+0.138234918 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true)
Oct 11 04:24:02 np0005481065 python3.9[170205]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:24:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:02 np0005481065 podman[170328]: 2025-10-11 08:24:02.939710734 +0000 UTC m=+0.070011285 container create 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:24:02 np0005481065 systemd[1]: Started libpod-conmon-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope.
Oct 11 04:24:03 np0005481065 podman[170328]: 2025-10-11 08:24:02.912049958 +0000 UTC m=+0.042350569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:24:03 np0005481065 podman[170328]: 2025-10-11 08:24:03.047279179 +0000 UTC m=+0.177579730 container init 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:24:03 np0005481065 podman[170328]: 2025-10-11 08:24:03.056236727 +0000 UTC m=+0.186537278 container start 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:03 np0005481065 podman[170328]: 2025-10-11 08:24:03.060633373 +0000 UTC m=+0.190933974 container attach 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:24:03 np0005481065 competent_hawking[170344]: 167 167
Oct 11 04:24:03 np0005481065 systemd[1]: libpod-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope: Deactivated successfully.
Oct 11 04:24:03 np0005481065 conmon[170344]: conmon 2f7be35e793e509888c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope/container/memory.events
Oct 11 04:24:03 np0005481065 podman[170328]: 2025-10-11 08:24:03.067224933 +0000 UTC m=+0.197525504 container died 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:24:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a788126162f8251dd8057bed0fbf59842994cfaf5cc40c3ffedbe2d6c25779a8-merged.mount: Deactivated successfully.
Oct 11 04:24:03 np0005481065 podman[170328]: 2025-10-11 08:24:03.13732123 +0000 UTC m=+0.267621791 container remove 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:24:03 np0005481065 systemd[1]: libpod-conmon-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope: Deactivated successfully.
Oct 11 04:24:03 np0005481065 podman[170368]: 2025-10-11 08:24:03.377691926 +0000 UTC m=+0.057067593 container create f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:24:03 np0005481065 systemd[1]: Started libpod-conmon-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope.
Oct 11 04:24:03 np0005481065 podman[170368]: 2025-10-11 08:24:03.353476969 +0000 UTC m=+0.032852706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:24:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:24:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:24:03 np0005481065 podman[170368]: 2025-10-11 08:24:03.506115661 +0000 UTC m=+0.185491348 container init f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:24:03 np0005481065 podman[170368]: 2025-10-11 08:24:03.517976062 +0000 UTC m=+0.197351749 container start f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:24:03 np0005481065 podman[170368]: 2025-10-11 08:24:03.527879947 +0000 UTC m=+0.207255624 container attach f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]: {
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "osd_id": 2,
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "type": "bluestore"
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:    },
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "osd_id": 0,
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "type": "bluestore"
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:    },
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "osd_id": 1,
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:        "type": "bluestore"
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]:    }
Oct 11 04:24:04 np0005481065 admiring_babbage[170384]: }
Oct 11 04:24:04 np0005481065 systemd[1]: libpod-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope: Deactivated successfully.
Oct 11 04:24:04 np0005481065 systemd[1]: libpod-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope: Consumed 1.144s CPU time.
Oct 11 04:24:04 np0005481065 podman[170368]: 2025-10-11 08:24:04.653514313 +0000 UTC m=+1.332890040 container died f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:24:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f-merged.mount: Deactivated successfully.
Oct 11 04:24:04 np0005481065 podman[170368]: 2025-10-11 08:24:04.733559046 +0000 UTC m=+1.412934743 container remove f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:24:04 np0005481065 systemd[1]: libpod-conmon-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope: Deactivated successfully.
Oct 11 04:24:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:24:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:24:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:24:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b67c40d2-f1f4-4504-a3b9-4cbe597cf5e7 does not exist
Oct 11 04:24:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 59809dd6-81d9-4067-881b-b2380aa72a00 does not exist
Oct 11 04:24:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:24:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:24:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:14 np0005481065 podman[170663]: 2025-10-11 08:24:14.807752131 +0000 UTC m=+0.097433685 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:24:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:24:15.151 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:24:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:24:15.152 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:24:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:24:15.152 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:24:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:28 np0005481065 kernel: SELinux:  Converting 2767 SID table entries...
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:24:28 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:24:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:32 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 11 04:24:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:32 np0005481065 podman[170697]: 2025-10-11 08:24:32.81120339 +0000 UTC m=+0.110440592 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:24:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:38 np0005481065 kernel: SELinux:  Converting 2767 SID table entries...
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:24:38 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:24:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.491782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081491860, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3509069, "memory_usage": 3570288, "flush_reason": "Manual Compaction"}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081635765, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3433926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9724, "largest_seqno": 11761, "table_properties": {"data_size": 3424665, "index_size": 5883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17735, "raw_average_key_size": 19, "raw_value_size": 3406330, "raw_average_value_size": 3730, "num_data_blocks": 267, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170849, "oldest_key_time": 1760170849, "file_creation_time": 1760171081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 144114 microseconds, and 8089 cpu microseconds.
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.635892) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3433926 bytes OK
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.635915) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.648447) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.648492) EVENT_LOG_v1 {"time_micros": 1760171081648481, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.648522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3500576, prev total WAL file size 3500576, number of live WAL files 2.
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.649612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3353KB)], [26(5990KB)]
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081649685, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9568448, "oldest_snapshot_seqno": -1}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3702 keys, 7899439 bytes, temperature: kUnknown
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081716200, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7899439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7871124, "index_size": 17965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88915, "raw_average_key_size": 24, "raw_value_size": 7800721, "raw_average_value_size": 2107, "num_data_blocks": 778, "num_entries": 3702, "num_filter_entries": 3702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.716548) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7899439 bytes
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.727971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.6 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4216, records dropped: 514 output_compression: NoCompression
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.727994) EVENT_LOG_v1 {"time_micros": 1760171081727982, "job": 10, "event": "compaction_finished", "compaction_time_micros": 66647, "compaction_time_cpu_micros": 17677, "output_level": 6, "num_output_files": 1, "total_output_size": 7899439, "num_input_records": 4216, "num_output_records": 3702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081728613, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081729585, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.649531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:24:41 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:24:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:45 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 11 04:24:45 np0005481065 podman[170728]: 2025-10-11 08:24:45.816277904 +0000 UTC m=+0.090385896 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 11 04:24:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:24:54
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', '.rgw.root']
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:24:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:24:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:24:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:24:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:03 np0005481065 podman[175826]: 2025-10-11 08:25:03.816797458 +0000 UTC m=+0.107932149 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:25:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:25:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:25:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 883367fd-d821-48d6-bf4c-8b78a2bb8bec does not exist
Oct 11 04:25:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 57b8757a-241f-472f-a562-1b5f6a0c3c65 does not exist
Oct 11 04:25:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d9286945-1626-445a-9ccf-c97379c4caf8 does not exist
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:25:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:25:06 np0005481065 podman[177525]: 2025-10-11 08:25:06.816499694 +0000 UTC m=+0.071164723 container create a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:06 np0005481065 systemd[1]: Started libpod-conmon-a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5.scope.
Oct 11 04:25:06 np0005481065 podman[177525]: 2025-10-11 08:25:06.778889624 +0000 UTC m=+0.033554703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:25:06 np0005481065 podman[177525]: 2025-10-11 08:25:06.918356674 +0000 UTC m=+0.173021693 container init a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:06 np0005481065 podman[177525]: 2025-10-11 08:25:06.932388054 +0000 UTC m=+0.187053083 container start a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:06 np0005481065 podman[177525]: 2025-10-11 08:25:06.937567466 +0000 UTC m=+0.192232475 container attach a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:06 np0005481065 ecstatic_saha[177596]: 167 167
Oct 11 04:25:06 np0005481065 systemd[1]: libpod-a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5.scope: Deactivated successfully.
Oct 11 04:25:06 np0005481065 podman[177525]: 2025-10-11 08:25:06.952097651 +0000 UTC m=+0.206762680 container died a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9d6a1b06de49393411fbaf2c8077798e75551af77ac300513065bb979466decf-merged.mount: Deactivated successfully.
Oct 11 04:25:07 np0005481065 podman[177525]: 2025-10-11 08:25:07.010907201 +0000 UTC m=+0.265572220 container remove a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:07 np0005481065 systemd[1]: libpod-conmon-a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5.scope: Deactivated successfully.
Oct 11 04:25:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:07 np0005481065 podman[177710]: 2025-10-11 08:25:07.248949525 +0000 UTC m=+0.078852138 container create 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 11 04:25:07 np0005481065 systemd[1]: Started libpod-conmon-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope.
Oct 11 04:25:07 np0005481065 podman[177710]: 2025-10-11 08:25:07.218372611 +0000 UTC m=+0.048275264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:25:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:07 np0005481065 podman[177710]: 2025-10-11 08:25:07.370506701 +0000 UTC m=+0.200409365 container init 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:25:07 np0005481065 podman[177710]: 2025-10-11 08:25:07.388045425 +0000 UTC m=+0.217948038 container start 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:25:07 np0005481065 podman[177710]: 2025-10-11 08:25:07.395582925 +0000 UTC m=+0.225485528 container attach 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:25:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:08 np0005481065 awesome_leavitt[177793]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:25:08 np0005481065 awesome_leavitt[177793]: --> relative data size: 1.0
Oct 11 04:25:08 np0005481065 awesome_leavitt[177793]: --> All data devices are unavailable
Oct 11 04:25:08 np0005481065 systemd[1]: libpod-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope: Deactivated successfully.
Oct 11 04:25:08 np0005481065 conmon[177793]: conmon 3f7bda73a4ce0091f333 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope/container/memory.events
Oct 11 04:25:08 np0005481065 systemd[1]: libpod-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope: Consumed 1.164s CPU time.
Oct 11 04:25:08 np0005481065 podman[177710]: 2025-10-11 08:25:08.666867136 +0000 UTC m=+1.496769749 container died 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:25:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359-merged.mount: Deactivated successfully.
Oct 11 04:25:08 np0005481065 podman[177710]: 2025-10-11 08:25:08.741238272 +0000 UTC m=+1.571140855 container remove 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:25:08 np0005481065 systemd[1]: libpod-conmon-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope: Deactivated successfully.
Oct 11 04:25:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.559942343 +0000 UTC m=+0.069648399 container create 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:25:09 np0005481065 systemd[1]: Started libpod-conmon-5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866.scope.
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.531535382 +0000 UTC m=+0.041241488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.674067612 +0000 UTC m=+0.183773728 container init 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.688575006 +0000 UTC m=+0.198281042 container start 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.692963654 +0000 UTC m=+0.202669730 container attach 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:25:09 np0005481065 youthful_ritchie[178974]: 167 167
Oct 11 04:25:09 np0005481065 systemd[1]: libpod-5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866.scope: Deactivated successfully.
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.69896374 +0000 UTC m=+0.208669806 container died 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:25:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-438ce47867b0cc540f5bad6d733492a1e1c86ace3b813f73faf348e794272713-merged.mount: Deactivated successfully.
Oct 11 04:25:09 np0005481065 podman[178896]: 2025-10-11 08:25:09.75194566 +0000 UTC m=+0.261651686 container remove 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:09 np0005481065 systemd[1]: libpod-conmon-5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866.scope: Deactivated successfully.
Oct 11 04:25:09 np0005481065 podman[179114]: 2025-10-11 08:25:09.982807604 +0000 UTC m=+0.053765254 container create 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:25:10 np0005481065 systemd[1]: Started libpod-conmon-6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523.scope.
Oct 11 04:25:10 np0005481065 podman[179114]: 2025-10-11 08:25:09.961269874 +0000 UTC m=+0.032227554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:25:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:10 np0005481065 podman[179114]: 2025-10-11 08:25:10.09889305 +0000 UTC m=+0.169850790 container init 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:10 np0005481065 podman[179114]: 2025-10-11 08:25:10.11392431 +0000 UTC m=+0.184881960 container start 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:25:10 np0005481065 podman[179114]: 2025-10-11 08:25:10.117758712 +0000 UTC m=+0.188716442 container attach 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:25:10 np0005481065 interesting_easley[179187]: {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:    "0": [
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:        {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "devices": [
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "/dev/loop3"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            ],
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_name": "ceph_lv0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_size": "21470642176",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "name": "ceph_lv0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "tags": {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cluster_name": "ceph",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.crush_device_class": "",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.encrypted": "0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osd_id": "0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.type": "block",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.vdo": "0"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            },
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "type": "block",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "vg_name": "ceph_vg0"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:        }
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:    ],
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:    "1": [
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:        {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "devices": [
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "/dev/loop4"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            ],
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_name": "ceph_lv1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_size": "21470642176",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "name": "ceph_lv1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "tags": {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cluster_name": "ceph",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.crush_device_class": "",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.encrypted": "0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osd_id": "1",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.type": "block",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.vdo": "0"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            },
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "type": "block",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "vg_name": "ceph_vg1"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:        }
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:    ],
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:    "2": [
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:        {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "devices": [
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "/dev/loop5"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            ],
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_name": "ceph_lv2",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_size": "21470642176",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "name": "ceph_lv2",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "tags": {
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.cluster_name": "ceph",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.crush_device_class": "",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.encrypted": "0",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osd_id": "2",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.type": "block",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:                "ceph.vdo": "0"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            },
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "type": "block",
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:            "vg_name": "ceph_vg2"
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:        }
Oct 11 04:25:10 np0005481065 interesting_easley[179187]:    ]
Oct 11 04:25:10 np0005481065 interesting_easley[179187]: }
Oct 11 04:25:10 np0005481065 systemd[1]: libpod-6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523.scope: Deactivated successfully.
Oct 11 04:25:10 np0005481065 podman[179114]: 2025-10-11 08:25:10.909031841 +0000 UTC m=+0.979989521 container died 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:25:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012-merged.mount: Deactivated successfully.
Oct 11 04:25:10 np0005481065 podman[179114]: 2025-10-11 08:25:10.980403119 +0000 UTC m=+1.051360759 container remove 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:25:10 np0005481065 systemd[1]: libpod-conmon-6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523.scope: Deactivated successfully.
Oct 11 04:25:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.826229264 +0000 UTC m=+0.043939177 container create 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:25:11 np0005481065 systemd[1]: Started libpod-conmon-0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd.scope.
Oct 11 04:25:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.807248029 +0000 UTC m=+0.024957982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.918606575 +0000 UTC m=+0.136316508 container init 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.929860765 +0000 UTC m=+0.147570678 container start 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.933274254 +0000 UTC m=+0.150984167 container attach 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:25:11 np0005481065 relaxed_volhard[180112]: 167 167
Oct 11 04:25:11 np0005481065 systemd[1]: libpod-0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd.scope: Deactivated successfully.
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.939995521 +0000 UTC m=+0.157705434 container died 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:25:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0002bc3405b4241ab6d554aa2694c6bae0111e3c1295ffdf3fbcac2ecb372ec4-merged.mount: Deactivated successfully.
Oct 11 04:25:11 np0005481065 podman[180053]: 2025-10-11 08:25:11.980101714 +0000 UTC m=+0.197811627 container remove 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:25:11 np0005481065 systemd[1]: libpod-conmon-0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd.scope: Deactivated successfully.
Oct 11 04:25:12 np0005481065 podman[180243]: 2025-10-11 08:25:12.197297279 +0000 UTC m=+0.069565097 container create f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:25:12 np0005481065 systemd[1]: Started libpod-conmon-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope.
Oct 11 04:25:12 np0005481065 podman[180243]: 2025-10-11 08:25:12.159787021 +0000 UTC m=+0.032054849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:25:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:25:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:25:12 np0005481065 podman[180243]: 2025-10-11 08:25:12.31322521 +0000 UTC m=+0.185493058 container init f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:25:12 np0005481065 podman[180243]: 2025-10-11 08:25:12.374974917 +0000 UTC m=+0.247242775 container start f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:25:12 np0005481065 podman[180243]: 2025-10-11 08:25:12.378930302 +0000 UTC m=+0.251198150 container attach f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:25:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]: {
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "osd_id": 2,
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "type": "bluestore"
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:    },
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "osd_id": 0,
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "type": "bluestore"
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:    },
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "osd_id": 1,
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:        "type": "bluestore"
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]:    }
Oct 11 04:25:13 np0005481065 peaceful_lamport[180311]: }
Oct 11 04:25:13 np0005481065 podman[180243]: 2025-10-11 08:25:13.4487341 +0000 UTC m=+1.321001928 container died f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:25:13 np0005481065 systemd[1]: libpod-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope: Deactivated successfully.
Oct 11 04:25:13 np0005481065 systemd[1]: libpod-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope: Consumed 1.074s CPU time.
Oct 11 04:25:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c-merged.mount: Deactivated successfully.
Oct 11 04:25:13 np0005481065 podman[180243]: 2025-10-11 08:25:13.533030616 +0000 UTC m=+1.405298464 container remove f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:25:13 np0005481065 systemd[1]: libpod-conmon-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope: Deactivated successfully.
Oct 11 04:25:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:25:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:25:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:25:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:25:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f92228bd-de32-4a67-a35d-89b70af86737 does not exist
Oct 11 04:25:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e06179c5-dfef-429f-b5dc-21a03c28dc52 does not exist
Oct 11 04:25:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:25:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:25:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:25:15.153 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:25:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:25:15.155 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:25:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:25:15.155 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:25:16 np0005481065 podman[182538]: 2025-10-11 08:25:16.805582284 +0000 UTC m=+0.091608151 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 04:25:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:34 np0005481065 podman[188417]: 2025-10-11 08:25:34.826094225 +0000 UTC m=+0.118601100 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 04:25:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:43 np0005481065 kernel: SELinux:  Converting 2768 SID table entries...
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability open_perms=1
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability always_check_network=0
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 04:25:43 np0005481065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 04:25:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:45 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:25:45 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 11 04:25:45 np0005481065 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 04:25:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:47 np0005481065 podman[188508]: 2025-10-11 08:25:47.215314823 +0000 UTC m=+0.103047181 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:25:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:53 np0005481065 systemd[1]: Stopping OpenSSH server daemon...
Oct 11 04:25:53 np0005481065 systemd[1]: sshd.service: Deactivated successfully.
Oct 11 04:25:53 np0005481065 systemd[1]: Stopped OpenSSH server daemon.
Oct 11 04:25:53 np0005481065 systemd[1]: sshd.service: Consumed 2.890s CPU time, no IO.
Oct 11 04:25:53 np0005481065 systemd[1]: Stopped target sshd-keygen.target.
Oct 11 04:25:53 np0005481065 systemd[1]: Stopping sshd-keygen.target...
Oct 11 04:25:53 np0005481065 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 04:25:53 np0005481065 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 04:25:53 np0005481065 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 04:25:53 np0005481065 systemd[1]: Reached target sshd-keygen.target.
Oct 11 04:25:53 np0005481065 systemd[1]: Starting OpenSSH server daemon...
Oct 11 04:25:53 np0005481065 systemd[1]: Started OpenSSH server daemon.
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:25:54
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes']
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:25:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:25:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:55 np0005481065 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 04:25:56 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:25:56 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:25:56 np0005481065 systemd[1]: Reloading.
Oct 11 04:25:56 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:25:56 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:25:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:25:57 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:25:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:25:58 np0005481065 systemd[1]: Starting PackageKit Daemon...
Oct 11 04:25:59 np0005481065 systemd[1]: Started PackageKit Daemon.
Oct 11 04:25:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:00 np0005481065 python3.9[192530]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:26:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:01 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:02 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:02 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:03 np0005481065 python3.9[194436]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:26:03 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:03 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:03 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:26:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:26:04 np0005481065 python3.9[195517]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:26:04 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:04 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:04 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:05 np0005481065 podman[196039]: 2025-10-11 08:26:05.036341701 +0000 UTC m=+0.148848215 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:26:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:05 np0005481065 auditd[700]: Audit daemon rotating log files
Oct 11 04:26:05 np0005481065 python3.9[196600]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:26:05 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:06 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:06 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:07 np0005481065 python3.9[197798]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:07 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:07 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:07 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:08 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:26:08 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:26:08 np0005481065 systemd[1]: man-db-cache-update.service: Consumed 14.849s CPU time.
Oct 11 04:26:08 np0005481065 systemd[1]: run-r6ee3f6c33fb64d0fbf675f39ad61da22.service: Deactivated successfully.
Oct 11 04:26:08 np0005481065 python3.9[198919]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:09 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:09 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:09 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:10 np0005481065 python3.9[199152]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:11 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:11 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:11 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:12 np0005481065 python3.9[199343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:13 np0005481065 python3.9[199498]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:13 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:13 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:13 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:26:14 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9964f7e2-b646-42e4-84a6-b296fdf29598 does not exist
Oct 11 04:26:14 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bf347f46-5543-4216-9383-807a511d2cdd does not exist
Oct 11 04:26:14 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c5c5f727-6d4b-468f-87c5-71a0456bf10f does not exist
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:26:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:26:14 np0005481065 python3.9[199805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 04:26:15 np0005481065 systemd[1]: Reloading.
Oct 11 04:26:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:15 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:26:15 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:26:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:26:15.155 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:26:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:26:15.156 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:26:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:26:15.157 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:26:15 np0005481065 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 11 04:26:15 np0005481065 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.049864111 +0000 UTC m=+0.068707428 container create 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.020064419 +0000 UTC m=+0.038907776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:16 np0005481065 systemd[1]: Started libpod-conmon-1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866.scope.
Oct 11 04:26:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.177509282 +0000 UTC m=+0.196352649 container init 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.193145354 +0000 UTC m=+0.211988661 container start 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.198159209 +0000 UTC m=+0.217002586 container attach 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:16 np0005481065 modest_booth[200149]: 167 167
Oct 11 04:26:16 np0005481065 systemd[1]: libpod-1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866.scope: Deactivated successfully.
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.203584876 +0000 UTC m=+0.222428183 container died 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ca8daa62f5b2277c06d470e28e600c92bb0fee6d04bfb5954f8146cb84958b6e-merged.mount: Deactivated successfully.
Oct 11 04:26:16 np0005481065 podman[200100]: 2025-10-11 08:26:16.265591859 +0000 UTC m=+0.284435176 container remove 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:26:16 np0005481065 systemd[1]: libpod-conmon-1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866.scope: Deactivated successfully.
Oct 11 04:26:16 np0005481065 python3.9[200173]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:16 np0005481065 podman[200192]: 2025-10-11 08:26:16.509850611 +0000 UTC m=+0.059265935 container create 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:26:16 np0005481065 systemd[1]: Started libpod-conmon-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope.
Oct 11 04:26:16 np0005481065 podman[200192]: 2025-10-11 08:26:16.486477785 +0000 UTC m=+0.035893109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:26:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:16 np0005481065 podman[200192]: 2025-10-11 08:26:16.615273859 +0000 UTC m=+0.164689153 container init 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:16 np0005481065 podman[200192]: 2025-10-11 08:26:16.630264063 +0000 UTC m=+0.179679397 container start 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:26:16 np0005481065 podman[200192]: 2025-10-11 08:26:16.634632759 +0000 UTC m=+0.184048053 container attach 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:17 np0005481065 podman[200375]: 2025-10-11 08:26:17.483948457 +0000 UTC m=+0.109569439 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:26:17 np0005481065 python3.9[200376]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:17 np0005481065 elastic_montalcini[200210]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:26:17 np0005481065 elastic_montalcini[200210]: --> relative data size: 1.0
Oct 11 04:26:17 np0005481065 elastic_montalcini[200210]: --> All data devices are unavailable
Oct 11 04:26:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:17 np0005481065 systemd[1]: libpod-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope: Deactivated successfully.
Oct 11 04:26:17 np0005481065 systemd[1]: libpod-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope: Consumed 1.100s CPU time.
Oct 11 04:26:17 np0005481065 podman[200192]: 2025-10-11 08:26:17.807764841 +0000 UTC m=+1.357180165 container died 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:26:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2-merged.mount: Deactivated successfully.
Oct 11 04:26:17 np0005481065 podman[200192]: 2025-10-11 08:26:17.875271643 +0000 UTC m=+1.424686927 container remove 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:17 np0005481065 systemd[1]: libpod-conmon-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope: Deactivated successfully.
Oct 11 04:26:18 np0005481065 python3.9[200683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.752395315 +0000 UTC m=+0.069901672 container create b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:26:18 np0005481065 systemd[1]: Started libpod-conmon-b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2.scope.
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.72145217 +0000 UTC m=+0.038958587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.874865726 +0000 UTC m=+0.192372123 container init b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.888578023 +0000 UTC m=+0.206084390 container start b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.893097074 +0000 UTC m=+0.210603501 container attach b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:26:18 np0005481065 exciting_mccarthy[200736]: 167 167
Oct 11 04:26:18 np0005481065 systemd[1]: libpod-b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2.scope: Deactivated successfully.
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.899634753 +0000 UTC m=+0.217141120 container died b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:26:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-57088c6405620078587aa0184a2e35980e087fc0246251744b6edf7adfdc3994-merged.mount: Deactivated successfully.
Oct 11 04:26:18 np0005481065 podman[200717]: 2025-10-11 08:26:18.94899292 +0000 UTC m=+0.266499247 container remove b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:18 np0005481065 systemd[1]: libpod-conmon-b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2.scope: Deactivated successfully.
Oct 11 04:26:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:19 np0005481065 podman[200797]: 2025-10-11 08:26:19.171612677 +0000 UTC m=+0.069901532 container create 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:19 np0005481065 podman[200797]: 2025-10-11 08:26:19.14542791 +0000 UTC m=+0.043716855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:19 np0005481065 systemd[1]: Started libpod-conmon-19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8.scope.
Oct 11 04:26:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:26:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:19 np0005481065 podman[200797]: 2025-10-11 08:26:19.302299816 +0000 UTC m=+0.200588771 container init 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:26:19 np0005481065 podman[200797]: 2025-10-11 08:26:19.316377213 +0000 UTC m=+0.214666108 container start 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:19 np0005481065 podman[200797]: 2025-10-11 08:26:19.320663657 +0000 UTC m=+0.218952592 container attach 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:26:19 np0005481065 python3.9[200933]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:20 np0005481065 competent_chaum[200853]: {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:    "0": [
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:        {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "devices": [
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "/dev/loop3"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            ],
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_name": "ceph_lv0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_size": "21470642176",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "name": "ceph_lv0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "tags": {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cluster_name": "ceph",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.crush_device_class": "",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.encrypted": "0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osd_id": "0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.type": "block",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.vdo": "0"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            },
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "type": "block",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "vg_name": "ceph_vg0"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:        }
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:    ],
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:    "1": [
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:        {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "devices": [
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "/dev/loop4"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            ],
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_name": "ceph_lv1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_size": "21470642176",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "name": "ceph_lv1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "tags": {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cluster_name": "ceph",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.crush_device_class": "",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.encrypted": "0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osd_id": "1",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.type": "block",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.vdo": "0"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            },
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "type": "block",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "vg_name": "ceph_vg1"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:        }
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:    ],
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:    "2": [
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:        {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "devices": [
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "/dev/loop5"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            ],
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_name": "ceph_lv2",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_size": "21470642176",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "name": "ceph_lv2",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "tags": {
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.cluster_name": "ceph",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.crush_device_class": "",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.encrypted": "0",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osd_id": "2",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.type": "block",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:                "ceph.vdo": "0"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            },
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "type": "block",
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:            "vg_name": "ceph_vg2"
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:        }
Oct 11 04:26:20 np0005481065 competent_chaum[200853]:    ]
Oct 11 04:26:20 np0005481065 competent_chaum[200853]: }
Oct 11 04:26:20 np0005481065 systemd[1]: libpod-19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8.scope: Deactivated successfully.
Oct 11 04:26:20 np0005481065 podman[200797]: 2025-10-11 08:26:20.130265166 +0000 UTC m=+1.028554031 container died 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:26:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637-merged.mount: Deactivated successfully.
Oct 11 04:26:20 np0005481065 podman[200797]: 2025-10-11 08:26:20.219928929 +0000 UTC m=+1.118217814 container remove 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:20 np0005481065 systemd[1]: libpod-conmon-19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8.scope: Deactivated successfully.
Oct 11 04:26:20 np0005481065 python3.9[201207]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.116773721 +0000 UTC m=+0.059103680 container create f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:26:21 np0005481065 systemd[1]: Started libpod-conmon-f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df.scope.
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.085086725 +0000 UTC m=+0.027416704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.220283214 +0000 UTC m=+0.162613183 container init f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.230987144 +0000 UTC m=+0.173317113 container start f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.235066772 +0000 UTC m=+0.177396731 container attach f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:26:21 np0005481065 xenodochial_carver[201276]: 167 167
Oct 11 04:26:21 np0005481065 systemd[1]: libpod-f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df.scope: Deactivated successfully.
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.242064664 +0000 UTC m=+0.184394623 container died f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:26:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8e431252c652aea722080ec89640261392f51f3e7bb34eff2449af3f4c53f162-merged.mount: Deactivated successfully.
Oct 11 04:26:21 np0005481065 podman[201252]: 2025-10-11 08:26:21.294326395 +0000 UTC m=+0.236656344 container remove f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:26:21 np0005481065 systemd[1]: libpod-conmon-f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df.scope: Deactivated successfully.
Oct 11 04:26:21 np0005481065 podman[201380]: 2025-10-11 08:26:21.570438759 +0000 UTC m=+0.091768344 container create 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:26:21 np0005481065 podman[201380]: 2025-10-11 08:26:21.521187885 +0000 UTC m=+0.042517570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:26:21 np0005481065 systemd[1]: Started libpod-conmon-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope.
Oct 11 04:26:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:26:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:26:21 np0005481065 podman[201380]: 2025-10-11 08:26:21.695436094 +0000 UTC m=+0.216765749 container init 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:26:21 np0005481065 podman[201380]: 2025-10-11 08:26:21.706916936 +0000 UTC m=+0.228246541 container start 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:26:21 np0005481065 podman[201380]: 2025-10-11 08:26:21.710667784 +0000 UTC m=+0.231997449 container attach 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:26:22 np0005481065 python3.9[201463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]: {
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "osd_id": 2,
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "type": "bluestore"
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:    },
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "osd_id": 0,
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "type": "bluestore"
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:    },
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "osd_id": 1,
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:        "type": "bluestore"
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]:    }
Oct 11 04:26:22 np0005481065 sweet_diffie[201430]: }
Oct 11 04:26:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:22 np0005481065 systemd[1]: libpod-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope: Deactivated successfully.
Oct 11 04:26:22 np0005481065 podman[201380]: 2025-10-11 08:26:22.835030916 +0000 UTC m=+1.356360551 container died 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:26:22 np0005481065 systemd[1]: libpod-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope: Consumed 1.128s CPU time.
Oct 11 04:26:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a-merged.mount: Deactivated successfully.
Oct 11 04:26:22 np0005481065 podman[201380]: 2025-10-11 08:26:22.916891963 +0000 UTC m=+1.438221558 container remove 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:26:22 np0005481065 systemd[1]: libpod-conmon-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope: Deactivated successfully.
Oct 11 04:26:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:26:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:26:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:26:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:26:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 36abbe34-fe35-40b7-a58c-bf511ee84431 does not exist
Oct 11 04:26:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a7fa4941-155f-4fe9-a516-1eb3113cf502 does not exist
Oct 11 04:26:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:23 np0005481065 python3.9[201646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:26:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:26:24 np0005481065 python3.9[201864]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:25 np0005481065 python3.9[202019]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:26 np0005481065 python3.9[202174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:27 np0005481065 python3.9[202329]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:28 np0005481065 python3.9[202484]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:29 np0005481065 python3.9[202639]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:30 np0005481065 python3.9[202794]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 04:26:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:31 np0005481065 python3.9[202949]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:26:32 np0005481065 python3.9[203101]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:26:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:33 np0005481065 python3.9[203253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:26:34 np0005481065 python3.9[203405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:26:35 np0005481065 python3.9[203557]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:26:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:35 np0005481065 podman[203681]: 2025-10-11 08:26:35.705350683 +0000 UTC m=+0.141032109 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:26:35 np0005481065 python3.9[203728]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:26:36 np0005481065 python3.9[203888]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:37 np0005481065 python3.9[204013]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171196.1097093-554-194864933559665/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:38 np0005481065 python3.9[204165]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:39 np0005481065 python3.9[204290]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171197.9724848-554-91649675375093/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:40 np0005481065 python3.9[204442]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:41 np0005481065 python3.9[204567]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171199.557333-554-26072867298799/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:42 np0005481065 python3.9[204719]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:42 np0005481065 python3.9[204844]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171201.4470196-554-103478801935860/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:43 np0005481065 python3.9[204996]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:44 np0005481065 python3.9[205121]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171203.0568814-554-47111044612848/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:45 np0005481065 python3.9[205273]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:46 np0005481065 python3.9[205398]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171204.6573875-554-180382546408933/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:46 np0005481065 python3.9[205550]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:47 np0005481065 python3.9[205673]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171206.2742188-554-113360488149126/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:47 np0005481065 podman[205674]: 2025-10-11 08:26:47.65568066 +0000 UTC m=+0.079257013 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 04:26:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:48 np0005481065 python3.9[205845]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:26:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:49 np0005481065 python3.9[205970]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171207.7554858-554-156885925612333/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:50 np0005481065 python3.9[206122]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 11 04:26:50 np0005481065 python3.9[206275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:51 np0005481065 python3.9[206427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:52 np0005481065 python3.9[206579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:53 np0005481065 python3.9[206731]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:54 np0005481065 python3.9[206883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:26:54
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'vms', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:26:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:26:55 np0005481065 python3.9[207035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:55 np0005481065 python3.9[207187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:56 np0005481065 python3.9[207339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:57 np0005481065 python3.9[207491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:26:58 np0005481065 python3.9[207643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:26:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:26:59 np0005481065 python3.9[207795]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:00 np0005481065 python3.9[207947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:00 np0005481065 python3.9[208099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:01 np0005481065 python3.9[208251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:02 np0005481065 python3.9[208403]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:03 np0005481065 python3.9[208526]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171221.9284294-775-269471614739624/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:04 np0005481065 python3.9[208678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:27:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:27:04 np0005481065 python3.9[208801]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171223.4792318-775-176658747803651/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:05 np0005481065 python3.9[208953]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:06 np0005481065 podman[209048]: 2025-10-11 08:27:06.271127718 +0000 UTC m=+0.148244478 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 11 04:27:06 np0005481065 python3.9[209090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171225.046138-775-67219452377892/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:07 np0005481065 python3.9[209251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:08 np0005481065 python3.9[209374]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171226.6717944-775-251791200899765/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:08 np0005481065 python3.9[209526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:09 np0005481065 python3.9[209649]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171228.2590163-775-60781284299053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:10 np0005481065 python3.9[209801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:11 np0005481065 python3.9[209924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171229.944545-775-254370766992314/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:12 np0005481065 python3.9[210076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:13 np0005481065 python3.9[210199]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171231.6060023-775-233063784969844/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:13 np0005481065 python3.9[210351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:14 np0005481065 python3.9[210474]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171233.274518-775-79186126086940/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:27:15.157 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:27:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:27:15.157 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:27:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:27:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:27:15 np0005481065 python3.9[210626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:16 np0005481065 python3.9[210749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171234.8783908-775-61529651540859/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:17 np0005481065 python3.9[210901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:17 np0005481065 python3.9[211024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171236.4811127-775-94497765831696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:18 np0005481065 podman[211148]: 2025-10-11 08:27:18.412508688 +0000 UTC m=+0.073952400 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:27:18 np0005481065 python3.9[211195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:19 np0005481065 python3.9[211318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171238.0171764-775-52338772292211/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:20 np0005481065 python3.9[211470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:20 np0005481065 python3.9[211593]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171239.5173194-775-190524431362542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:21 np0005481065 python3.9[211745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:22 np0005481065 python3.9[211868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171241.0570507-775-57145224136841/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:23 np0005481065 python3.9[212020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:23 np0005481065 python3.9[212257]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171242.6043742-775-170105886017473/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7fa81eb5-7a76-4d63-8200-6dff16728af0 does not exist
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ca367e0e-b32d-4eba-b3b4-44b24be56c13 does not exist
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 46c42ac4-4e36-4308-91e9-deb561f4e405 does not exist
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:27:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:24 np0005481065 python3.9[212524]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:25.006267838 +0000 UTC m=+0.071933431 container create c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:27:25 np0005481065 systemd[1]: Started libpod-conmon-c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f.scope.
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:24.974595632 +0000 UTC m=+0.040261285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:25.109144283 +0000 UTC m=+0.174809906 container init c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:27:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:25.124120986 +0000 UTC m=+0.189786569 container start c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:25.128479292 +0000 UTC m=+0.194144875 container attach c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:25 np0005481065 nostalgic_perlman[212609]: 167 167
Oct 11 04:27:25 np0005481065 systemd[1]: libpod-c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f.scope: Deactivated successfully.
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:25.133028153 +0000 UTC m=+0.198693776 container died c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-57833386b3307bb388273bfea463b57765fa668bd1e9ff890fef288e5ccbb2ad-merged.mount: Deactivated successfully.
Oct 11 04:27:25 np0005481065 podman[212568]: 2025-10-11 08:27:25.19068382 +0000 UTC m=+0.256349383 container remove c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:27:25 np0005481065 systemd[1]: libpod-conmon-c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f.scope: Deactivated successfully.
Oct 11 04:27:25 np0005481065 podman[212685]: 2025-10-11 08:27:25.434356776 +0000 UTC m=+0.066713850 container create d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:25 np0005481065 systemd[1]: Started libpod-conmon-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope.
Oct 11 04:27:25 np0005481065 podman[212685]: 2025-10-11 08:27:25.406630115 +0000 UTC m=+0.038987239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:27:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:25 np0005481065 podman[212685]: 2025-10-11 08:27:25.547135377 +0000 UTC m=+0.179492511 container init d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:25 np0005481065 podman[212685]: 2025-10-11 08:27:25.563974014 +0000 UTC m=+0.196331088 container start d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:27:25 np0005481065 podman[212685]: 2025-10-11 08:27:25.568919577 +0000 UTC m=+0.201276721 container attach d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:26 np0005481065 python3.9[212782]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 11 04:27:26 np0005481065 vigorous_burnell[212702]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:27:26 np0005481065 vigorous_burnell[212702]: --> relative data size: 1.0
Oct 11 04:27:26 np0005481065 vigorous_burnell[212702]: --> All data devices are unavailable
Oct 11 04:27:26 np0005481065 systemd[1]: libpod-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope: Deactivated successfully.
Oct 11 04:27:26 np0005481065 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 11 04:27:26 np0005481065 systemd[1]: libpod-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope: Consumed 1.171s CPU time.
Oct 11 04:27:26 np0005481065 podman[212685]: 2025-10-11 08:27:26.809388826 +0000 UTC m=+1.441745920 container died d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:27:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e-merged.mount: Deactivated successfully.
Oct 11 04:27:26 np0005481065 podman[212685]: 2025-10-11 08:27:26.935559475 +0000 UTC m=+1.567916529 container remove d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:26 np0005481065 systemd[1]: libpod-conmon-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope: Deactivated successfully.
Oct 11 04:27:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:27 np0005481065 podman[213071]: 2025-10-11 08:27:27.844730944 +0000 UTC m=+0.073180817 container create 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:27:27 np0005481065 systemd[1]: Started libpod-conmon-61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356.scope.
Oct 11 04:27:27 np0005481065 podman[213071]: 2025-10-11 08:27:27.811909415 +0000 UTC m=+0.040359298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:27:27 np0005481065 podman[213071]: 2025-10-11 08:27:27.963041944 +0000 UTC m=+0.191491827 container init 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:27:27 np0005481065 podman[213071]: 2025-10-11 08:27:27.972538768 +0000 UTC m=+0.200988611 container start 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:27:27 np0005481065 podman[213071]: 2025-10-11 08:27:27.976865073 +0000 UTC m=+0.205314986 container attach 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:27:27 np0005481065 magical_mclaren[213130]: 167 167
Oct 11 04:27:27 np0005481065 systemd[1]: libpod-61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356.scope: Deactivated successfully.
Oct 11 04:27:27 np0005481065 podman[213071]: 2025-10-11 08:27:27.981874758 +0000 UTC m=+0.210324591 container died 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:27:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-56275893a094fae51c73af8befef1d575caa2921308be59ae1b8f410f96a0561-merged.mount: Deactivated successfully.
Oct 11 04:27:28 np0005481065 podman[213071]: 2025-10-11 08:27:28.027635141 +0000 UTC m=+0.256084974 container remove 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:28 np0005481065 systemd[1]: libpod-conmon-61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356.scope: Deactivated successfully.
Oct 11 04:27:28 np0005481065 python3.9[213135]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:28 np0005481065 podman[213158]: 2025-10-11 08:27:28.244471211 +0000 UTC m=+0.062357214 container create 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:27:28 np0005481065 systemd[1]: Started libpod-conmon-8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa.scope.
Oct 11 04:27:28 np0005481065 podman[213158]: 2025-10-11 08:27:28.225488752 +0000 UTC m=+0.043374745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:27:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:28 np0005481065 podman[213158]: 2025-10-11 08:27:28.371420322 +0000 UTC m=+0.189306345 container init 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:28 np0005481065 podman[213158]: 2025-10-11 08:27:28.383429379 +0000 UTC m=+0.201315372 container start 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:27:28 np0005481065 podman[213158]: 2025-10-11 08:27:28.388002412 +0000 UTC m=+0.205888415 container attach 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:27:28 np0005481065 python3.9[213331]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]: {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:    "0": [
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:        {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "devices": [
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "/dev/loop3"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            ],
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_name": "ceph_lv0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_size": "21470642176",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "name": "ceph_lv0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "tags": {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cluster_name": "ceph",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.crush_device_class": "",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.encrypted": "0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osd_id": "0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.type": "block",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.vdo": "0"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            },
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "type": "block",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "vg_name": "ceph_vg0"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:        }
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:    ],
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:    "1": [
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:        {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "devices": [
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "/dev/loop4"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            ],
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_name": "ceph_lv1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_size": "21470642176",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "name": "ceph_lv1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "tags": {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cluster_name": "ceph",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.crush_device_class": "",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.encrypted": "0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osd_id": "1",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.type": "block",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.vdo": "0"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            },
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "type": "block",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "vg_name": "ceph_vg1"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:        }
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:    ],
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:    "2": [
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:        {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "devices": [
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "/dev/loop5"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            ],
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_name": "ceph_lv2",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_size": "21470642176",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "name": "ceph_lv2",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "tags": {
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.cluster_name": "ceph",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.crush_device_class": "",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.encrypted": "0",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osd_id": "2",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.type": "block",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:                "ceph.vdo": "0"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            },
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "type": "block",
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:            "vg_name": "ceph_vg2"
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:        }
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]:    ]
Oct 11 04:27:29 np0005481065 interesting_lamarr[213199]: }
Oct 11 04:27:29 np0005481065 systemd[1]: libpod-8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa.scope: Deactivated successfully.
Oct 11 04:27:29 np0005481065 podman[213158]: 2025-10-11 08:27:29.193086841 +0000 UTC m=+1.010972834 container died 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:27:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d-merged.mount: Deactivated successfully.
Oct 11 04:27:29 np0005481065 podman[213158]: 2025-10-11 08:27:29.282111555 +0000 UTC m=+1.099997538 container remove 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:27:29 np0005481065 systemd[1]: libpod-conmon-8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa.scope: Deactivated successfully.
Oct 11 04:27:29 np0005481065 python3.9[213575]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.170085352 +0000 UTC m=+0.073340982 container create a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:27:30 np0005481065 systemd[1]: Started libpod-conmon-a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc.scope.
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.135648446 +0000 UTC m=+0.038904126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.278997411 +0000 UTC m=+0.182253101 container init a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.289672589 +0000 UTC m=+0.192928199 container start a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.293296754 +0000 UTC m=+0.196552404 container attach a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:27:30 np0005481065 distracted_jones[213750]: 167 167
Oct 11 04:27:30 np0005481065 systemd[1]: libpod-a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc.scope: Deactivated successfully.
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.298202536 +0000 UTC m=+0.201458176 container died a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:27:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-17e167e00dd6a533f98760deb4d2886dfc77ede951cfce61206ccf5e0e58b006-merged.mount: Deactivated successfully.
Oct 11 04:27:30 np0005481065 podman[213691]: 2025-10-11 08:27:30.360106516 +0000 UTC m=+0.263362156 container remove a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:27:30 np0005481065 systemd[1]: libpod-conmon-a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc.scope: Deactivated successfully.
Oct 11 04:27:30 np0005481065 podman[213832]: 2025-10-11 08:27:30.588929113 +0000 UTC m=+0.062811418 container create 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:27:30 np0005481065 podman[213832]: 2025-10-11 08:27:30.567638497 +0000 UTC m=+0.041520832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:27:30 np0005481065 systemd[1]: Started libpod-conmon-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope.
Oct 11 04:27:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:27:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:27:30 np0005481065 python3.9[213827]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:30 np0005481065 podman[213832]: 2025-10-11 08:27:30.749029872 +0000 UTC m=+0.222912267 container init 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:27:30 np0005481065 podman[213832]: 2025-10-11 08:27:30.762914584 +0000 UTC m=+0.236796909 container start 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:27:30 np0005481065 podman[213832]: 2025-10-11 08:27:30.76797498 +0000 UTC m=+0.241857355 container attach 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:27:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:31 np0005481065 python3.9[214005]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:31 np0005481065 bold_herschel[213849]: {
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "osd_id": 2,
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "type": "bluestore"
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:    },
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "osd_id": 0,
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "type": "bluestore"
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:    },
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "osd_id": 1,
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:        "type": "bluestore"
Oct 11 04:27:31 np0005481065 bold_herschel[213849]:    }
Oct 11 04:27:31 np0005481065 bold_herschel[213849]: }
Oct 11 04:27:31 np0005481065 systemd[1]: libpod-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope: Deactivated successfully.
Oct 11 04:27:31 np0005481065 systemd[1]: libpod-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope: Consumed 1.182s CPU time.
Oct 11 04:27:31 np0005481065 podman[213832]: 2025-10-11 08:27:31.946023223 +0000 UTC m=+1.419905568 container died 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:27:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4-merged.mount: Deactivated successfully.
Oct 11 04:27:32 np0005481065 podman[213832]: 2025-10-11 08:27:32.036655723 +0000 UTC m=+1.510538048 container remove 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:27:32 np0005481065 systemd[1]: libpod-conmon-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope: Deactivated successfully.
Oct 11 04:27:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:27:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:27:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:27:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:27:32 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a2d185e5-9e30-48b8-bcc9-37ac1d5a604a does not exist
Oct 11 04:27:32 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6207cc91-c19a-4cdf-ad6c-f4daf5656d99 does not exist
Oct 11 04:27:32 np0005481065 python3.9[214249]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:27:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:27:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:33 np0005481065 python3.9[214401]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:34 np0005481065 python3.9[214553]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:34 np0005481065 python3.9[214705]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:35 np0005481065 python3.9[214857]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:36 np0005481065 podman[214981]: 2025-10-11 08:27:36.595241786 +0000 UTC m=+0.147819485 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 04:27:36 np0005481065 python3.9[215026]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:27:36 np0005481065 systemd[1]: Reloading.
Oct 11 04:27:36 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:36 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:37 np0005481065 systemd[1]: Starting libvirt logging daemon socket...
Oct 11 04:27:37 np0005481065 systemd[1]: Listening on libvirt logging daemon socket.
Oct 11 04:27:37 np0005481065 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 11 04:27:37 np0005481065 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 11 04:27:37 np0005481065 systemd[1]: Starting libvirt logging daemon...
Oct 11 04:27:37 np0005481065 systemd[1]: Started libvirt logging daemon.
Oct 11 04:27:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:38 np0005481065 python3.9[215228]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:27:38 np0005481065 systemd[1]: Reloading.
Oct 11 04:27:38 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:38 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:38 np0005481065 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 11 04:27:38 np0005481065 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 11 04:27:38 np0005481065 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 11 04:27:38 np0005481065 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 11 04:27:38 np0005481065 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 11 04:27:38 np0005481065 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 11 04:27:38 np0005481065 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 04:27:38 np0005481065 systemd[1]: Started libvirt nodedev daemon.
Oct 11 04:27:39 np0005481065 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 11 04:27:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:39 np0005481065 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 11 04:27:39 np0005481065 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 11 04:27:39 np0005481065 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 11 04:27:39 np0005481065 python3.9[215450]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:27:39 np0005481065 systemd[1]: Reloading.
Oct 11 04:27:40 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:40 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:40 np0005481065 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 11 04:27:40 np0005481065 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 11 04:27:40 np0005481065 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 11 04:27:40 np0005481065 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 11 04:27:40 np0005481065 systemd[1]: Starting libvirt proxy daemon...
Oct 11 04:27:40 np0005481065 systemd[1]: Started libvirt proxy daemon.
Oct 11 04:27:40 np0005481065 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe00e702-c632-4d13-93c6-b24e5c622602
Oct 11 04:27:40 np0005481065 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 11 04:27:40 np0005481065 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe00e702-c632-4d13-93c6-b24e5c622602
Oct 11 04:27:40 np0005481065 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 11 04:27:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:41 np0005481065 python3.9[215664]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:27:41 np0005481065 systemd[1]: Reloading.
Oct 11 04:27:41 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:41 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:41 np0005481065 systemd[1]: Listening on libvirt locking daemon socket.
Oct 11 04:27:41 np0005481065 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 11 04:27:41 np0005481065 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 11 04:27:41 np0005481065 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 11 04:27:41 np0005481065 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 11 04:27:41 np0005481065 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 11 04:27:41 np0005481065 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 11 04:27:41 np0005481065 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 11 04:27:41 np0005481065 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 11 04:27:41 np0005481065 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 11 04:27:41 np0005481065 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 04:27:41 np0005481065 systemd[1]: Started libvirt QEMU daemon.
Oct 11 04:27:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:42 np0005481065 python3.9[215878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:27:42 np0005481065 systemd[1]: Reloading.
Oct 11 04:27:43 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:27:43 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:27:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:43 np0005481065 systemd[1]: Starting libvirt secret daemon socket...
Oct 11 04:27:43 np0005481065 systemd[1]: Listening on libvirt secret daemon socket.
Oct 11 04:27:43 np0005481065 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 11 04:27:43 np0005481065 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 11 04:27:43 np0005481065 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 11 04:27:43 np0005481065 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 11 04:27:43 np0005481065 systemd[1]: Starting libvirt secret daemon...
Oct 11 04:27:43 np0005481065 systemd[1]: Started libvirt secret daemon.
Oct 11 04:27:44 np0005481065 python3.9[216087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:45 np0005481065 python3.9[216239]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:27:46 np0005481065 python3.9[216391]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:46 np0005481065 python3.9[216545]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:27:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:48 np0005481065 python3.9[216695]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:48 np0005481065 podman[216790]: 2025-10-11 08:27:48.581458981 +0000 UTC m=+0.084565426 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:27:48 np0005481065 python3.9[216827]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171267.417348-1133-51141290879582/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b56ed023dc31f704f6fbc855b5c0dab0643174c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:49 np0005481065 python3.9[216988]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 33219f8b-dc38-5a8f-a577-8ccc4b37190a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:27:50 np0005481065 python3.9[217150]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:50 np0005481065 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 11 04:27:50 np0005481065 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.027s CPU time.
Oct 11 04:27:50 np0005481065 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 11 04:27:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:53 np0005481065 python3.9[217613]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:54 np0005481065 python3.9[217765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:27:54
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root']
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:27:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:27:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:55 np0005481065 python3.9[217888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171273.8905687-1188-64374561409026/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:56 np0005481065 python3.9[218040]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:57 np0005481065 python3.9[218192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:57 np0005481065 python3.9[218270]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:27:58 np0005481065 python3.9[218422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:27:58 np0005481065 python3.9[218500]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g6qg7uc3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:27:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:27:59 np0005481065 python3.9[218652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:00 np0005481065 python3.9[218730]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:01 np0005481065 python3.9[218882]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:02 np0005481065 python3[219037]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 04:28:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:02 np0005481065 python3.9[219189]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:03 np0005481065 python3.9[219267]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:28:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:28:04 np0005481065 python3.9[219419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:04 np0005481065 python3.9[219497]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:05 np0005481065 python3.9[219649]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:06 np0005481065 python3.9[219727]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:06 np0005481065 podman[219775]: 2025-10-11 08:28:06.846246689 +0000 UTC m=+0.143842212 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:28:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:07 np0005481065 python3.9[219906]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.828172) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287828233, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1799, "num_deletes": 250, "total_data_size": 3029473, "memory_usage": 3062648, "flush_reason": "Manual Compaction"}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287839466, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1706193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11762, "largest_seqno": 13560, "table_properties": {"data_size": 1700337, "index_size": 2931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14635, "raw_average_key_size": 20, "raw_value_size": 1687431, "raw_average_value_size": 2317, "num_data_blocks": 136, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171083, "oldest_key_time": 1760171083, "file_creation_time": 1760171287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11328 microseconds, and 4294 cpu microseconds.
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.839516) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1706193 bytes OK
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.839537) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.840912) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.840926) EVENT_LOG_v1 {"time_micros": 1760171287840921, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.840947) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3021862, prev total WAL file size 3021862, number of live WAL files 2.
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.841849) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1666KB)], [29(7714KB)]
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287841919, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9605632, "oldest_snapshot_seqno": -1}
Oct 11 04:28:07 np0005481065 python3.9[219984]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4015 keys, 7581083 bytes, temperature: kUnknown
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287894850, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7581083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7552542, "index_size": 17423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95500, "raw_average_key_size": 23, "raw_value_size": 7478419, "raw_average_value_size": 1862, "num_data_blocks": 758, "num_entries": 4015, "num_filter_entries": 4015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.895127) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7581083 bytes
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.896368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.2 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.1) write-amplify(4.4) OK, records in: 4430, records dropped: 415 output_compression: NoCompression
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.896398) EVENT_LOG_v1 {"time_micros": 1760171287896384, "job": 12, "event": "compaction_finished", "compaction_time_micros": 53015, "compaction_time_cpu_micros": 33300, "output_level": 6, "num_output_files": 1, "total_output_size": 7581083, "num_input_records": 4430, "num_output_records": 4015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287897043, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287899291, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.841719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:28:08 np0005481065 python3.9[220136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:09 np0005481065 python3.9[220261]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171288.1122963-1313-256563557952893/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:10 np0005481065 python3.9[220413]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:11 np0005481065 python3.9[220565]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:12 np0005481065 python3.9[220720]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:13 np0005481065 python3.9[220872]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:14 np0005481065 python3.9[221025]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:28:15 np0005481065 python3.9[221179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:28:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:28:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:28:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:28:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:28:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:28:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:28:15 np0005481065 python3.9[221334]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:16 np0005481065 python3.9[221486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:17 np0005481065 python3.9[221609]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171296.184386-1385-12432590233/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:18 np0005481065 python3.9[221761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:18 np0005481065 podman[221833]: 2025-10-11 08:28:18.792244014 +0000 UTC m=+0.089790826 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:28:19 np0005481065 python3.9[221904]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171297.773118-1400-261364558454995/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:19 np0005481065 python3.9[222056]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:20 np0005481065 python3.9[222179]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171299.2835908-1415-150476464040750/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:21 np0005481065 python3.9[222331]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:28:21 np0005481065 systemd[1]: Reloading.
Oct 11 04:28:21 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:28:21 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:28:22 np0005481065 systemd[1]: Reached target edpm_libvirt.target.
Oct 11 04:28:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:23 np0005481065 python3.9[222522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 04:28:23 np0005481065 systemd[1]: Reloading.
Oct 11 04:28:23 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:28:23 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:28:24 np0005481065 systemd[1]: Reloading.
Oct 11 04:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:24 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:28:24 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:28:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:25 np0005481065 systemd[1]: session-50.scope: Deactivated successfully.
Oct 11 04:28:25 np0005481065 systemd[1]: session-50.scope: Consumed 4min 16.415s CPU time.
Oct 11 04:28:25 np0005481065 systemd-logind[819]: Session 50 logged out. Waiting for processes to exit.
Oct 11 04:28:25 np0005481065 systemd-logind[819]: Removed session 50.
Oct 11 04:28:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:30 np0005481065 systemd-logind[819]: New session 51 of user zuul.
Oct 11 04:28:30 np0005481065 systemd[1]: Started Session 51 of User zuul.
Oct 11 04:28:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:32 np0005481065 python3.9[222772]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:28:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:28:33 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 42cc0e00-67ba-49dd-8b5c-41a39a043931 does not exist
Oct 11 04:28:33 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ab224b1c-ec78-4866-bfcf-7261c46de3e3 does not exist
Oct 11 04:28:33 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ac890ab1-9f1c-40c7-825d-b2f848b733c3 does not exist
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:28:33 np0005481065 python3.9[223107]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:28:33 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.16630885 +0000 UTC m=+0.067096003 container create 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:28:34 np0005481065 systemd[1]: Started libpod-conmon-30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852.scope.
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.135130402 +0000 UTC m=+0.035917615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.28095029 +0000 UTC m=+0.181737483 container init 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.292870363 +0000 UTC m=+0.193657516 container start 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.297343712 +0000 UTC m=+0.198130865 container attach 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:34 np0005481065 thirsty_wilson[223321]: 167 167
Oct 11 04:28:34 np0005481065 systemd[1]: libpod-30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852.scope: Deactivated successfully.
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.303465268 +0000 UTC m=+0.204252411 container died 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:28:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-78789da50e0528f404f2401fb2c40fde8a434080ccbafa1ab052f0135d282463-merged.mount: Deactivated successfully.
Oct 11 04:28:34 np0005481065 podman[223277]: 2025-10-11 08:28:34.362076986 +0000 UTC m=+0.262864139 container remove 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:34 np0005481065 systemd[1]: libpod-conmon-30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852.scope: Deactivated successfully.
Oct 11 04:28:34 np0005481065 podman[223393]: 2025-10-11 08:28:34.604718581 +0000 UTC m=+0.060468712 container create 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:28:34 np0005481065 python3.9[223387]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:28:34 np0005481065 podman[223393]: 2025-10-11 08:28:34.575772778 +0000 UTC m=+0.031522969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:34 np0005481065 systemd[1]: Started libpod-conmon-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope.
Oct 11 04:28:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:34 np0005481065 podman[223393]: 2025-10-11 08:28:34.744904747 +0000 UTC m=+0.200654928 container init 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:34 np0005481065 podman[223393]: 2025-10-11 08:28:34.76410797 +0000 UTC m=+0.219858111 container start 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:28:34 np0005481065 podman[223393]: 2025-10-11 08:28:34.768371913 +0000 UTC m=+0.224122114 container attach 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:28:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:35 np0005481065 python3.9[223566]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:28:35 np0005481065 angry_lovelace[223411]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:28:35 np0005481065 angry_lovelace[223411]: --> relative data size: 1.0
Oct 11 04:28:35 np0005481065 angry_lovelace[223411]: --> All data devices are unavailable
Oct 11 04:28:35 np0005481065 systemd[1]: libpod-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope: Deactivated successfully.
Oct 11 04:28:35 np0005481065 podman[223393]: 2025-10-11 08:28:35.863355567 +0000 UTC m=+1.319105698 container died 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:35 np0005481065 systemd[1]: libpod-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope: Consumed 1.013s CPU time.
Oct 11 04:28:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d-merged.mount: Deactivated successfully.
Oct 11 04:28:35 np0005481065 podman[223393]: 2025-10-11 08:28:35.930016185 +0000 UTC m=+1.385766296 container remove 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:28:35 np0005481065 systemd[1]: libpod-conmon-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope: Deactivated successfully.
Oct 11 04:28:36 np0005481065 python3.9[223777]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.739893991 +0000 UTC m=+0.062170871 container create 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:36 np0005481065 systemd[1]: Started libpod-conmon-0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937.scope.
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.714530941 +0000 UTC m=+0.036807841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.837969184 +0000 UTC m=+0.160246154 container init 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.850019231 +0000 UTC m=+0.172296111 container start 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.854205252 +0000 UTC m=+0.176482222 container attach 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:36 np0005481065 confident_bardeen[224046]: 167 167
Oct 11 04:28:36 np0005481065 systemd[1]: libpod-0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937.scope: Deactivated successfully.
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.860517513 +0000 UTC m=+0.182794423 container died 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-34977f0978772871fdc9210f84bd32f55dd7e915a017ebe3e0545d1590f4c6b6-merged.mount: Deactivated successfully.
Oct 11 04:28:36 np0005481065 podman[223995]: 2025-10-11 08:28:36.937448058 +0000 UTC m=+0.259724968 container remove 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:28:36 np0005481065 systemd[1]: libpod-conmon-0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937.scope: Deactivated successfully.
Oct 11 04:28:37 np0005481065 podman[224070]: 2025-10-11 08:28:37.092746399 +0000 UTC m=+0.189439195 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:28:37 np0005481065 python3.9[224068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:28:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:37 np0005481065 podman[224115]: 2025-10-11 08:28:37.201350526 +0000 UTC m=+0.057547388 container create 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:28:37 np0005481065 systemd[1]: Started libpod-conmon-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope.
Oct 11 04:28:37 np0005481065 podman[224115]: 2025-10-11 08:28:37.179411334 +0000 UTC m=+0.035608276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:37 np0005481065 podman[224115]: 2025-10-11 08:28:37.332719738 +0000 UTC m=+0.188916640 container init 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:28:37 np0005481065 podman[224115]: 2025-10-11 08:28:37.345620889 +0000 UTC m=+0.201817741 container start 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:28:37 np0005481065 podman[224115]: 2025-10-11 08:28:37.348870163 +0000 UTC m=+0.205067115 container attach 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:28:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]: {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:    "0": [
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:        {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "devices": [
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "/dev/loop3"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            ],
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_name": "ceph_lv0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_size": "21470642176",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "name": "ceph_lv0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "tags": {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cluster_name": "ceph",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.crush_device_class": "",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.encrypted": "0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osd_id": "0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.type": "block",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.vdo": "0"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            },
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "type": "block",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "vg_name": "ceph_vg0"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:        }
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:    ],
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:    "1": [
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:        {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "devices": [
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "/dev/loop4"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            ],
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_name": "ceph_lv1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_size": "21470642176",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "name": "ceph_lv1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "tags": {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cluster_name": "ceph",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.crush_device_class": "",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.encrypted": "0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osd_id": "1",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.type": "block",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.vdo": "0"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            },
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "type": "block",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "vg_name": "ceph_vg1"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:        }
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:    ],
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:    "2": [
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:        {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "devices": [
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "/dev/loop5"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            ],
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_name": "ceph_lv2",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_size": "21470642176",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "name": "ceph_lv2",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "tags": {
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.cluster_name": "ceph",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.crush_device_class": "",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.encrypted": "0",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osd_id": "2",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.type": "block",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:                "ceph.vdo": "0"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            },
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "type": "block",
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:            "vg_name": "ceph_vg2"
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:        }
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]:    ]
Oct 11 04:28:38 np0005481065 compassionate_dijkstra[224156]: }
Oct 11 04:28:38 np0005481065 systemd[1]: libpod-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope: Deactivated successfully.
Oct 11 04:28:38 np0005481065 conmon[224156]: conmon 8bdcbce48bfd5d17130e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope/container/memory.events
Oct 11 04:28:38 np0005481065 podman[224115]: 2025-10-11 08:28:38.097276859 +0000 UTC m=+0.953473711 container died 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:28:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f-merged.mount: Deactivated successfully.
Oct 11 04:28:38 np0005481065 podman[224115]: 2025-10-11 08:28:38.158534833 +0000 UTC m=+1.014731685 container remove 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:28:38 np0005481065 systemd[1]: libpod-conmon-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope: Deactivated successfully.
Oct 11 04:28:38 np0005481065 python3.9[224290]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:28:38 np0005481065 podman[224522]: 2025-10-11 08:28:38.912982963 +0000 UTC m=+0.048394474 container create 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:28:38 np0005481065 systemd[1]: Started libpod-conmon-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope.
Oct 11 04:28:38 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:38 np0005481065 podman[224522]: 2025-10-11 08:28:38.89273671 +0000 UTC m=+0.028148221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:38 np0005481065 podman[224522]: 2025-10-11 08:28:38.998447213 +0000 UTC m=+0.133858724 container init 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:28:39 np0005481065 podman[224522]: 2025-10-11 08:28:39.010790479 +0000 UTC m=+0.146201960 container start 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:28:39 np0005481065 podman[224522]: 2025-10-11 08:28:39.014568047 +0000 UTC m=+0.149979568 container attach 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:28:39 np0005481065 flamboyant_lederberg[224539]: 167 167
Oct 11 04:28:39 np0005481065 systemd[1]: libpod-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope: Deactivated successfully.
Oct 11 04:28:39 np0005481065 conmon[224539]: conmon 19639dfb423063c9e312 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope/container/memory.events
Oct 11 04:28:39 np0005481065 podman[224522]: 2025-10-11 08:28:39.0195204 +0000 UTC m=+0.154931881 container died 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:28:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b839a9853b45f090f9934db823397080d3910d020e5be87c305fe059d534bd08-merged.mount: Deactivated successfully.
Oct 11 04:28:39 np0005481065 podman[224522]: 2025-10-11 08:28:39.06850561 +0000 UTC m=+0.203917131 container remove 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:28:39 np0005481065 systemd[1]: libpod-conmon-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope: Deactivated successfully.
Oct 11 04:28:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:39 np0005481065 podman[224597]: 2025-10-11 08:28:39.299074868 +0000 UTC m=+0.071659724 container create bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:28:39 np0005481065 systemd[1]: Started libpod-conmon-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope.
Oct 11 04:28:39 np0005481065 podman[224597]: 2025-10-11 08:28:39.270005481 +0000 UTC m=+0.042590427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:28:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:28:39 np0005481065 podman[224597]: 2025-10-11 08:28:39.417205399 +0000 UTC m=+0.189790285 container init bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:28:39 np0005481065 podman[224597]: 2025-10-11 08:28:39.432593032 +0000 UTC m=+0.205177938 container start bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:28:39 np0005481065 podman[224597]: 2025-10-11 08:28:39.437082151 +0000 UTC m=+0.209667047 container attach bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:28:39 np0005481065 python3.9[224656]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:28:39 np0005481065 systemd[1]: Reloading.
Oct 11 04:28:39 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:28:39 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]: {
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "osd_id": 2,
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "type": "bluestore"
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:    },
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "osd_id": 0,
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "type": "bluestore"
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:    },
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "osd_id": 1,
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:        "type": "bluestore"
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]:    }
Oct 11 04:28:40 np0005481065 musing_pasteur[224652]: }
Oct 11 04:28:40 np0005481065 systemd[1]: libpod-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope: Deactivated successfully.
Oct 11 04:28:40 np0005481065 systemd[1]: libpod-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope: Consumed 1.071s CPU time.
Oct 11 04:28:40 np0005481065 podman[224597]: 2025-10-11 08:28:40.503965675 +0000 UTC m=+1.276550581 container died bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:28:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311-merged.mount: Deactivated successfully.
Oct 11 04:28:40 np0005481065 podman[224597]: 2025-10-11 08:28:40.585521013 +0000 UTC m=+1.358105899 container remove bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:28:40 np0005481065 systemd[1]: libpod-conmon-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope: Deactivated successfully.
Oct 11 04:28:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:28:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:28:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:28:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:28:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 48031b39-0a0f-49bc-8804-0181a2d0d867 does not exist
Oct 11 04:28:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9e97f9d2-f503-446e-86cf-7d92074cef0b does not exist
Oct 11 04:28:40 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:28:40 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:28:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:41 np0005481065 python3.9[224938]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:28:41 np0005481065 network[224955]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:28:41 np0005481065 network[224956]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:28:41 np0005481065 network[224957]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:28:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:47 np0005481065 python3.9[225231]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:28:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:48 np0005481065 systemd[1]: Reloading.
Oct 11 04:28:48 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:28:48 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:28:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:49 np0005481065 podman[225270]: 2025-10-11 08:28:49.291366447 +0000 UTC m=+0.095632604 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 04:28:50 np0005481065 python3.9[225440]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:28:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:51 np0005481065 python3.9[225592]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 04:28:52 np0005481065 podman[225605]: 2025-10-11 08:28:52.707645679 +0000 UTC m=+1.452646342 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 04:28:52 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:28:52 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:28:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:52 np0005481065 podman[225664]: 2025-10-11 08:28:52.908659316 +0000 UTC m=+0.061415769 container create 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:28:52 np0005481065 NetworkManager[44960]: <info>  [1760171332.9400] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 11 04:28:52 np0005481065 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 04:28:52 np0005481065 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 04:28:52 np0005481065 kernel: veth0: entered allmulticast mode
Oct 11 04:28:52 np0005481065 kernel: veth0: entered promiscuous mode
Oct 11 04:28:52 np0005481065 podman[225664]: 2025-10-11 08:28:52.8771742 +0000 UTC m=+0.029930653 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 04:28:52 np0005481065 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 04:28:52 np0005481065 kernel: podman0: port 1(veth0) entered forwarding state
Oct 11 04:28:52 np0005481065 NetworkManager[44960]: <info>  [1760171332.9797] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 11 04:28:52 np0005481065 NetworkManager[44960]: <info>  [1760171332.9835] device (veth0): carrier: link connected
Oct 11 04:28:52 np0005481065 NetworkManager[44960]: <info>  [1760171332.9845] device (podman0): carrier: link connected
Oct 11 04:28:52 np0005481065 systemd-udevd[225694]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:28:53 np0005481065 systemd-udevd[225697]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0293] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0329] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0357] device (podman0): Activation: starting connection 'podman0' (f56496b4-d1b3-4fc8-b162-b99153c010b8)
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0369] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0384] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0395] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0404] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 04:28:53 np0005481065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 04:28:53 np0005481065 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0834] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0846] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.0876] device (podman0): Activation: successful, device activated.
Oct 11 04:28:53 np0005481065 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 11 04:28:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:53 np0005481065 systemd[1]: Started libpod-conmon-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34.scope.
Oct 11 04:28:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:28:53 np0005481065 podman[225664]: 2025-10-11 08:28:53.430139199 +0000 UTC m=+0.582895702 container init 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:28:53 np0005481065 podman[225664]: 2025-10-11 08:28:53.446297244 +0000 UTC m=+0.599053687 container start 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:28:53 np0005481065 podman[225664]: 2025-10-11 08:28:53.4523993 +0000 UTC m=+0.605155803 container attach 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 04:28:53 np0005481065 iscsid_config[225822]: iqn.1994-05.com.redhat:37f734a867d1#015
Oct 11 04:28:53 np0005481065 systemd[1]: libpod-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34.scope: Deactivated successfully.
Oct 11 04:28:53 np0005481065 podman[225664]: 2025-10-11 08:28:53.456682073 +0000 UTC m=+0.609438516 container died 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 04:28:53 np0005481065 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 04:28:53 np0005481065 kernel: veth0 (unregistering): left allmulticast mode
Oct 11 04:28:53 np0005481065 kernel: veth0 (unregistering): left promiscuous mode
Oct 11 04:28:53 np0005481065 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 04:28:53 np0005481065 NetworkManager[44960]: <info>  [1760171333.5170] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:28:53 np0005481065 systemd[1]: run-netns-netns\x2d602a4fb2\x2d095e\x2d26fb\x2d2b94\x2d033eeb81c571.mount: Deactivated successfully.
Oct 11 04:28:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34-userdata-shm.mount: Deactivated successfully.
Oct 11 04:28:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6fe1fc2e701b77a548eeb9e38929048b6526f604f4a4e6fe28763cd760b11888-merged.mount: Deactivated successfully.
Oct 11 04:28:53 np0005481065 podman[225664]: 2025-10-11 08:28:53.957501351 +0000 UTC m=+1.110257794 container remove 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:28:53 np0005481065 systemd[1]: libpod-conmon-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34.scope: Deactivated successfully.
Oct 11 04:28:53 np0005481065 python3.9[225592]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 11 04:28:54 np0005481065 python3.9[225592]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:28:54
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control']
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:28:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:28:55 np0005481065 python3.9[226064]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:28:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:55 np0005481065 python3.9[226187]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171334.3837938-119-6403385519039/.source.iscsi _original_basename=.ocpwkz9q follow=False checksum=a3276cc3a122ba31a71903624a34893e0bef54c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:56 np0005481065 python3.9[226339]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:28:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:28:57 np0005481065 python3.9[226489]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:28:58 np0005481065 python3.9[226643]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:28:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:00 np0005481065 python3.9[226795]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:01 np0005481065 python3.9[226947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:01 np0005481065 python3.9[227025]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:02 np0005481065 python3.9[227177]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:03 np0005481065 python3.9[227255]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:03 np0005481065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 04:29:03 np0005481065 python3.9[227407]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:29:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:29:04 np0005481065 python3.9[227559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:05 np0005481065 python3.9[227637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:06 np0005481065 python3.9[227789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:06 np0005481065 python3.9[227867]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:07 np0005481065 podman[227991]: 2025-10-11 08:29:07.634294835 +0000 UTC m=+0.152952134 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:29:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:07 np0005481065 python3.9[228034]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:29:07 np0005481065 systemd[1]: Reloading.
Oct 11 04:29:07 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:29:08 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:29:09 np0005481065 python3.9[228233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:09 np0005481065 python3.9[228311]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:10 np0005481065 python3.9[228463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:10 np0005481065 python3.9[228541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:11 np0005481065 python3.9[228693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:29:11 np0005481065 systemd[1]: Reloading.
Oct 11 04:29:12 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:29:12 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:29:12 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:29:12 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:29:12 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:29:12 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:29:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:13 np0005481065 python3.9[228886]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:14 np0005481065 python3.9[229038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:14 np0005481065 python3.9[229161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171353.5582364-273-82616585647129/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:29:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:29:15.159 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:29:15.160 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:29:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:15 np0005481065 python3.9[229313]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:16 np0005481065 python3.9[229465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:17 np0005481065 python3.9[229588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171356.173322-298-258275819767205/.source.json _original_basename=.slzkfrxv follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:18 np0005481065 python3.9[229740]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:19 np0005481065 podman[229987]: 2025-10-11 08:29:19.634579952 +0000 UTC m=+0.098661661 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 04:29:21 np0005481065 python3.9[230184]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 11 04:29:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:22 np0005481065 python3.9[230336]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:29:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:23 np0005481065 python3.9[230488]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:25 np0005481065 python3[230667]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:29:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:25 np0005481065 podman[230706]: 2025-10-11 08:29:25.381697899 +0000 UTC m=+0.049350082 container create 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 04:29:25 np0005481065 podman[230706]: 2025-10-11 08:29:25.355491594 +0000 UTC m=+0.023143807 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 04:29:25 np0005481065 python3[230667]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 04:29:26 np0005481065 python3.9[230895]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:27 np0005481065 python3.9[231049]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:27 np0005481065 python3.9[231125]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:28 np0005481065 python3.9[231276]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760171367.9348116-386-45364736007413/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:29 np0005481065 python3.9[231352]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:29:29 np0005481065 systemd[1]: Reloading.
Oct 11 04:29:29 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:29:29 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:29:30 np0005481065 python3.9[231463]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:29:30 np0005481065 systemd[1]: Reloading.
Oct 11 04:29:30 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:29:30 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:29:30 np0005481065 systemd[1]: Starting iscsid container...
Oct 11 04:29:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9788305ccac62cbcce6e5451f80a6cd2ee3b7bc6a9fb94aacf69e0dc4abae6/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9788305ccac62cbcce6e5451f80a6cd2ee3b7bc6a9fb94aacf69e0dc4abae6/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9788305ccac62cbcce6e5451f80a6cd2ee3b7bc6a9fb94aacf69e0dc4abae6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:31 np0005481065 systemd[1]: Started /usr/bin/podman healthcheck run 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9.
Oct 11 04:29:31 np0005481065 podman[231502]: 2025-10-11 08:29:31.015268374 +0000 UTC m=+0.175168014 container init 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 04:29:31 np0005481065 iscsid[231517]: + sudo -E kolla_set_configs
Oct 11 04:29:31 np0005481065 podman[231502]: 2025-10-11 08:29:31.052011032 +0000 UTC m=+0.211910652 container start 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 04:29:31 np0005481065 podman[231502]: iscsid
Oct 11 04:29:31 np0005481065 systemd[1]: Started iscsid container.
Oct 11 04:29:31 np0005481065 systemd[1]: Created slice User Slice of UID 0.
Oct 11 04:29:31 np0005481065 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 04:29:31 np0005481065 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 04:29:31 np0005481065 systemd[1]: Starting User Manager for UID 0...
Oct 11 04:29:31 np0005481065 podman[231524]: 2025-10-11 08:29:31.133252241 +0000 UTC m=+0.073114296 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid)
Oct 11 04:29:31 np0005481065 systemd[1]: 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9-4be06497c94ab793.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:29:31 np0005481065 systemd[1]: 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9-4be06497c94ab793.service: Failed with result 'exit-code'.
Oct 11 04:29:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:31 np0005481065 systemd[231541]: Queued start job for default target Main User Target.
Oct 11 04:29:31 np0005481065 systemd[231541]: Created slice User Application Slice.
Oct 11 04:29:31 np0005481065 systemd[231541]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 04:29:31 np0005481065 systemd[231541]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 04:29:31 np0005481065 systemd[231541]: Reached target Paths.
Oct 11 04:29:31 np0005481065 systemd[231541]: Reached target Timers.
Oct 11 04:29:31 np0005481065 systemd[231541]: Starting D-Bus User Message Bus Socket...
Oct 11 04:29:31 np0005481065 systemd[231541]: Starting Create User's Volatile Files and Directories...
Oct 11 04:29:31 np0005481065 systemd[231541]: Listening on D-Bus User Message Bus Socket.
Oct 11 04:29:31 np0005481065 systemd[231541]: Reached target Sockets.
Oct 11 04:29:31 np0005481065 systemd[231541]: Finished Create User's Volatile Files and Directories.
Oct 11 04:29:31 np0005481065 systemd[231541]: Reached target Basic System.
Oct 11 04:29:31 np0005481065 systemd[231541]: Reached target Main User Target.
Oct 11 04:29:31 np0005481065 systemd[231541]: Startup finished in 147ms.
Oct 11 04:29:31 np0005481065 systemd[1]: Started User Manager for UID 0.
Oct 11 04:29:31 np0005481065 systemd[1]: Started Session c3 of User root.
Oct 11 04:29:31 np0005481065 iscsid[231517]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:29:31 np0005481065 iscsid[231517]: INFO:__main__:Validating config file
Oct 11 04:29:31 np0005481065 iscsid[231517]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:29:31 np0005481065 iscsid[231517]: INFO:__main__:Writing out command to execute
Oct 11 04:29:31 np0005481065 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 11 04:29:31 np0005481065 iscsid[231517]: ++ cat /run_command
Oct 11 04:29:31 np0005481065 iscsid[231517]: + CMD='/usr/sbin/iscsid -f'
Oct 11 04:29:31 np0005481065 iscsid[231517]: + ARGS=
Oct 11 04:29:31 np0005481065 iscsid[231517]: + sudo kolla_copy_cacerts
Oct 11 04:29:31 np0005481065 systemd[1]: Started Session c4 of User root.
Oct 11 04:29:31 np0005481065 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 11 04:29:31 np0005481065 iscsid[231517]: + [[ ! -n '' ]]
Oct 11 04:29:31 np0005481065 iscsid[231517]: + . kolla_extend_start
Oct 11 04:29:31 np0005481065 iscsid[231517]: Running command: '/usr/sbin/iscsid -f'
Oct 11 04:29:31 np0005481065 iscsid[231517]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 11 04:29:31 np0005481065 iscsid[231517]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 11 04:29:31 np0005481065 iscsid[231517]: + umask 0022
Oct 11 04:29:31 np0005481065 iscsid[231517]: + exec /usr/sbin/iscsid -f
Oct 11 04:29:31 np0005481065 kernel: Loading iSCSI transport class v2.0-870.
Oct 11 04:29:31 np0005481065 python3.9[231721]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:32 np0005481065 python3.9[231873]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:33 np0005481065 python3.9[232025]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:29:33 np0005481065 network[232042]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:29:33 np0005481065 network[232043]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:29:33 np0005481065 network[232044]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:29:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:37 np0005481065 podman[232151]: 2025-10-11 08:29:37.845657975 +0000 UTC m=+0.155065285 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:29:38 np0005481065 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 11 04:29:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:39 np0005481065 python3.9[232347]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:29:40 np0005481065 python3.9[232500]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 11 04:29:40 np0005481065 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 11 04:29:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:41 np0005481065 python3.9[232705]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:41 np0005481065 systemd[1]: Stopping User Manager for UID 0...
Oct 11 04:29:41 np0005481065 systemd[231541]: Activating special unit Exit the Session...
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped target Main User Target.
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped target Basic System.
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped target Paths.
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped target Sockets.
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped target Timers.
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 04:29:41 np0005481065 systemd[231541]: Closed D-Bus User Message Bus Socket.
Oct 11 04:29:41 np0005481065 systemd[231541]: Stopped Create User's Volatile Files and Directories.
Oct 11 04:29:41 np0005481065 systemd[231541]: Removed slice User Application Slice.
Oct 11 04:29:41 np0005481065 systemd[231541]: Reached target Shutdown.
Oct 11 04:29:41 np0005481065 systemd[231541]: Finished Exit the Session.
Oct 11 04:29:41 np0005481065 systemd[231541]: Reached target Exit the Session.
Oct 11 04:29:41 np0005481065 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 04:29:41 np0005481065 systemd[1]: Stopped User Manager for UID 0.
Oct 11 04:29:41 np0005481065 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 04:29:41 np0005481065 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 04:29:41 np0005481065 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 04:29:41 np0005481065 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 04:29:41 np0005481065 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:29:41 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0b955c97-831a-492e-a779-e335ff79ab59 does not exist
Oct 11 04:29:41 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a44214e7-f78d-4c0b-96fe-546dc1842407 does not exist
Oct 11 04:29:41 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 95c485c7-4c61-404d-88f5-feb83187e929 does not exist
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:29:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:29:41 np0005481065 python3.9[232912]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171380.5323439-460-93739374489177/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.564045364 +0000 UTC m=+0.061351707 container create 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:29:42 np0005481065 systemd[1]: Started libpod-conmon-278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8.scope.
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.531490897 +0000 UTC m=+0.028797290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.66776323 +0000 UTC m=+0.165069573 container init 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.679416235 +0000 UTC m=+0.176722578 container start 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.684146571 +0000 UTC m=+0.181452924 container attach 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:29:42 np0005481065 pensive_engelbart[233218]: 167 167
Oct 11 04:29:42 np0005481065 systemd[1]: libpod-278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8.scope: Deactivated successfully.
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.69104524 +0000 UTC m=+0.188351553 container died 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:29:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1468b79d140f33173562e0d66fd52f7769324159ebdae7f742e0928c27e18ec8-merged.mount: Deactivated successfully.
Oct 11 04:29:42 np0005481065 podman[233164]: 2025-10-11 08:29:42.747576018 +0000 UTC m=+0.244882361 container remove 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:29:42 np0005481065 systemd[1]: libpod-conmon-278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8.scope: Deactivated successfully.
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.849899) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382849932, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 507, "total_data_size": 1391353, "memory_usage": 1425136, "flush_reason": "Manual Compaction"}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382861524, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1377887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13561, "largest_seqno": 14792, "table_properties": {"data_size": 1372429, "index_size": 2406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14058, "raw_average_key_size": 17, "raw_value_size": 1359514, "raw_average_value_size": 1731, "num_data_blocks": 110, "num_entries": 785, "num_filter_entries": 785, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171288, "oldest_key_time": 1760171288, "file_creation_time": 1760171382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11664 microseconds, and 4414 cpu microseconds.
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.861563) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1377887 bytes OK
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.861579) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863239) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863251) EVENT_LOG_v1 {"time_micros": 1760171382863248, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863266) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1384670, prev total WAL file size 1384670, number of live WAL files 2.
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1345KB)], [32(7403KB)]
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382863950, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8958970, "oldest_snapshot_seqno": -1}
Oct 11 04:29:42 np0005481065 python3.9[233223]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3773 keys, 7002265 bytes, temperature: kUnknown
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382915775, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7002265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6975434, "index_size": 16290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92581, "raw_average_key_size": 24, "raw_value_size": 6905427, "raw_average_value_size": 1830, "num_data_blocks": 689, "num_entries": 3773, "num_filter_entries": 3773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.916140) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7002265 bytes
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.919993) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 4800, records dropped: 1027 output_compression: NoCompression
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.920084) EVENT_LOG_v1 {"time_micros": 1760171382920030, "job": 14, "event": "compaction_finished", "compaction_time_micros": 51929, "compaction_time_cpu_micros": 33363, "output_level": 6, "num_output_files": 1, "total_output_size": 7002265, "num_input_records": 4800, "num_output_records": 3773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382920815, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382924261, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:29:42 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:29:42 np0005481065 podman[233246]: 2025-10-11 08:29:42.984422486 +0000 UTC m=+0.074642440 container create afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:29:43 np0005481065 systemd[1]: Started libpod-conmon-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope.
Oct 11 04:29:43 np0005481065 podman[233246]: 2025-10-11 08:29:42.954923717 +0000 UTC m=+0.045143741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:43 np0005481065 podman[233246]: 2025-10-11 08:29:43.099558861 +0000 UTC m=+0.189778855 container init afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:29:43 np0005481065 podman[233246]: 2025-10-11 08:29:43.114198612 +0000 UTC m=+0.204418546 container start afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:29:43 np0005481065 podman[233246]: 2025-10-11 08:29:43.117700863 +0000 UTC m=+0.207920897 container attach afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:29:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:43 np0005481065 python3.9[233419]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:29:44 np0005481065 relaxed_brattain[233287]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:29:44 np0005481065 relaxed_brattain[233287]: --> relative data size: 1.0
Oct 11 04:29:44 np0005481065 relaxed_brattain[233287]: --> All data devices are unavailable
Oct 11 04:29:44 np0005481065 systemd[1]: libpod-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope: Deactivated successfully.
Oct 11 04:29:44 np0005481065 podman[233246]: 2025-10-11 08:29:44.291711501 +0000 UTC m=+1.381931465 container died afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:29:44 np0005481065 systemd[1]: libpod-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope: Consumed 1.145s CPU time.
Oct 11 04:29:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e-merged.mount: Deactivated successfully.
Oct 11 04:29:44 np0005481065 podman[233246]: 2025-10-11 08:29:44.374658169 +0000 UTC m=+1.464878123 container remove afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:44 np0005481065 systemd[1]: libpod-conmon-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope: Deactivated successfully.
Oct 11 04:29:44 np0005481065 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 04:29:44 np0005481065 systemd[1]: Stopped Load Kernel Modules.
Oct 11 04:29:44 np0005481065 systemd[1]: Stopping Load Kernel Modules...
Oct 11 04:29:44 np0005481065 systemd[1]: Starting Load Kernel Modules...
Oct 11 04:29:44 np0005481065 systemd[1]: Finished Load Kernel Modules.
Oct 11 04:29:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.252727449 +0000 UTC m=+0.067163785 container create c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:45 np0005481065 systemd[1]: Started libpod-conmon-c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646.scope.
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.215938769 +0000 UTC m=+0.030375195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.365812104 +0000 UTC m=+0.180248480 container init c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.378542821 +0000 UTC m=+0.192979167 container start c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.382666799 +0000 UTC m=+0.197103135 container attach c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:29:45 np0005481065 eloquent_mahavira[233663]: 167 167
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.387868759 +0000 UTC m=+0.202305125 container died c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:29:45 np0005481065 systemd[1]: libpod-c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646.scope: Deactivated successfully.
Oct 11 04:29:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a1ff33193960472577a078072ff956f2c9c3fe0fc050f98616ca01d6b619c7fb-merged.mount: Deactivated successfully.
Oct 11 04:29:45 np0005481065 podman[233625]: 2025-10-11 08:29:45.442674827 +0000 UTC m=+0.257111163 container remove c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:29:45 np0005481065 systemd[1]: libpod-conmon-c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646.scope: Deactivated successfully.
Oct 11 04:29:45 np0005481065 podman[233764]: 2025-10-11 08:29:45.644824997 +0000 UTC m=+0.042683330 container create ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:29:45 np0005481065 systemd[1]: Started libpod-conmon-ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8.scope.
Oct 11 04:29:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:45 np0005481065 podman[233764]: 2025-10-11 08:29:45.623702439 +0000 UTC m=+0.021560762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:45 np0005481065 podman[233764]: 2025-10-11 08:29:45.741358416 +0000 UTC m=+0.139216799 container init ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:29:45 np0005481065 podman[233764]: 2025-10-11 08:29:45.754949737 +0000 UTC m=+0.152808090 container start ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:29:45 np0005481065 podman[233764]: 2025-10-11 08:29:45.759256811 +0000 UTC m=+0.157115154 container attach ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:29:45 np0005481065 python3.9[233811]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]: {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:    "0": [
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:        {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "devices": [
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "/dev/loop3"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            ],
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_name": "ceph_lv0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_size": "21470642176",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "name": "ceph_lv0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "tags": {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cluster_name": "ceph",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.crush_device_class": "",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.encrypted": "0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osd_id": "0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.type": "block",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.vdo": "0"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            },
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "type": "block",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "vg_name": "ceph_vg0"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:        }
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:    ],
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:    "1": [
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:        {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "devices": [
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "/dev/loop4"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            ],
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_name": "ceph_lv1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_size": "21470642176",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "name": "ceph_lv1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "tags": {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cluster_name": "ceph",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.crush_device_class": "",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.encrypted": "0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osd_id": "1",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.type": "block",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.vdo": "0"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            },
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "type": "block",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "vg_name": "ceph_vg1"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:        }
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:    ],
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:    "2": [
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:        {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "devices": [
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "/dev/loop5"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            ],
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_name": "ceph_lv2",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_size": "21470642176",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "name": "ceph_lv2",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "tags": {
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.cluster_name": "ceph",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.crush_device_class": "",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.encrypted": "0",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osd_id": "2",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.type": "block",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:                "ceph.vdo": "0"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            },
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "type": "block",
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:            "vg_name": "ceph_vg2"
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:        }
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]:    ]
Oct 11 04:29:46 np0005481065 fervent_mayer[233807]: }
Oct 11 04:29:46 np0005481065 systemd[1]: libpod-ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8.scope: Deactivated successfully.
Oct 11 04:29:46 np0005481065 podman[233764]: 2025-10-11 08:29:46.59492585 +0000 UTC m=+0.992784183 container died ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:29:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505-merged.mount: Deactivated successfully.
Oct 11 04:29:46 np0005481065 podman[233764]: 2025-10-11 08:29:46.660145997 +0000 UTC m=+1.058004300 container remove ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:29:46 np0005481065 systemd[1]: libpod-conmon-ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8.scope: Deactivated successfully.
Oct 11 04:29:46 np0005481065 python3.9[233980]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.416142392 +0000 UTC m=+0.077358828 container create 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:29:47 np0005481065 systemd[1]: Started libpod-conmon-85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7.scope.
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.387026434 +0000 UTC m=+0.048242920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.530099612 +0000 UTC m=+0.191316098 container init 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.542637873 +0000 UTC m=+0.203854279 container start 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.546203475 +0000 UTC m=+0.207419961 container attach 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:29:47 np0005481065 objective_sanderson[234273]: 167 167
Oct 11 04:29:47 np0005481065 systemd[1]: libpod-85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7.scope: Deactivated successfully.
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.553038562 +0000 UTC m=+0.214254988 container died 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:29:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-97352ff08d8b3f3c59de4e620c3c2ede678133dc2844fdcfda4c01a59e7ce6ba-merged.mount: Deactivated successfully.
Oct 11 04:29:47 np0005481065 podman[234221]: 2025-10-11 08:29:47.601876008 +0000 UTC m=+0.263092434 container remove 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:29:47 np0005481065 systemd[1]: libpod-conmon-85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7.scope: Deactivated successfully.
Oct 11 04:29:47 np0005481065 python3.9[234294]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:47 np0005481065 podman[234315]: 2025-10-11 08:29:47.855311444 +0000 UTC m=+0.060355628 container create 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:29:47 np0005481065 systemd[1]: Started libpod-conmon-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope.
Oct 11 04:29:47 np0005481065 podman[234315]: 2025-10-11 08:29:47.82494727 +0000 UTC m=+0.029991504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:29:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:29:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:29:47 np0005481065 podman[234315]: 2025-10-11 08:29:47.970152321 +0000 UTC m=+0.175196535 container init 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:29:47 np0005481065 podman[234315]: 2025-10-11 08:29:47.985493462 +0000 UTC m=+0.190537646 container start 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:29:47 np0005481065 podman[234315]: 2025-10-11 08:29:47.989556039 +0000 UTC m=+0.194600273 container attach 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:29:48 np0005481065 python3.9[234487]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:29:49 np0005481065 infallible_carson[234355]: {
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "osd_id": 2,
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "type": "bluestore"
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:    },
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "osd_id": 0,
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "type": "bluestore"
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:    },
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "osd_id": 1,
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:        "type": "bluestore"
Oct 11 04:29:49 np0005481065 infallible_carson[234355]:    }
Oct 11 04:29:49 np0005481065 infallible_carson[234355]: }
Oct 11 04:29:49 np0005481065 systemd[1]: libpod-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope: Deactivated successfully.
Oct 11 04:29:49 np0005481065 systemd[1]: libpod-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope: Consumed 1.093s CPU time.
Oct 11 04:29:49 np0005481065 podman[234315]: 2025-10-11 08:29:49.068917454 +0000 UTC m=+1.273961658 container died 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:29:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b-merged.mount: Deactivated successfully.
Oct 11 04:29:49 np0005481065 podman[234315]: 2025-10-11 08:29:49.147185147 +0000 UTC m=+1.352229301 container remove 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:29:49 np0005481065 systemd[1]: libpod-conmon-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope: Deactivated successfully.
Oct 11 04:29:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:29:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:29:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:29:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:29:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1c9f7a63-6e8b-445b-8213-00a092a77280 does not exist
Oct 11 04:29:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6099151d-d993-4b2b-9875-fdf1eafbc55a does not exist
Oct 11 04:29:49 np0005481065 python3.9[234649]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171388.0173821-518-44476485314969/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:49 np0005481065 podman[234755]: 2025-10-11 08:29:49.791991521 +0000 UTC m=+0.088431457 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:29:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:29:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:29:50 np0005481065 python3.9[234871]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:29:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:51 np0005481065 python3.9[235024]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:52 np0005481065 python3.9[235176]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:52 np0005481065 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 11 04:29:52 np0005481065 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 11 04:29:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:53 np0005481065 python3.9[235330]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:54 np0005481065 python3.9[235482]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:29:54
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups']
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:29:54 np0005481065 python3.9[235634]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:29:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:29:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:55 np0005481065 python3.9[235786]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:56 np0005481065 python3.9[235938]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:57 np0005481065 python3.9[236090]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:29:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:29:58 np0005481065 python3.9[236244]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:29:59 np0005481065 python3.9[236396]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:29:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:29:59 np0005481065 python3.9[236548]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:00 np0005481065 python3.9[236626]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:01 np0005481065 podman[236779]: 2025-10-11 08:30:01.293884275 +0000 UTC m=+0.078110932 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:01 np0005481065 python3.9[236778]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:01 np0005481065 python3.9[236875]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:02 np0005481065 python3.9[237027]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:03 np0005481065 python3.9[237179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:30:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:30:04 np0005481065 python3.9[237257]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:05 np0005481065 python3.9[237409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:05 np0005481065 python3.9[237487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:06 np0005481065 python3.9[237639]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:30:06 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:06 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:06 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:30:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3311 writes, 14K keys, 3311 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3311 writes, 3311 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1288 writes, 5840 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.52 MB, 0.01 MB/s#012Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     63.7      0.24              0.06         7    0.034       0      0       0.0       0.0#012  L6      1/0    6.68 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    165.9    136.9      0.30              0.16         6    0.049     24K   3202       0.0       0.0#012 Sum      1/0    6.68 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     91.7    104.2      0.54              0.22        13    0.041     24K   3202       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     91.0     91.5      0.38              0.12         8    0.047     17K   2470       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    165.9    136.9      0.30              0.16         6    0.049     24K   3202       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.6      0.24              0.06         6    0.039       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 308.00 MB usage: 1.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(99,1.35 MB,0.43816%) FilterBlock(14,75.42 KB,0.0239137%) IndexBlock(14,144.80 KB,0.0459101%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 04:30:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:07 np0005481065 python3.9[237827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:08 np0005481065 podman[237877]: 2025-10-11 08:30:08.301392394 +0000 UTC m=+0.149602947 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:30:08 np0005481065 python3.9[237922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:09 np0005481065 python3.9[238083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:09 np0005481065 python3.9[238161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:10 np0005481065 python3.9[238313]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:30:10 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:11 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:11 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:11 np0005481065 systemd[1]: Starting Create netns directory...
Oct 11 04:30:11 np0005481065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 04:30:11 np0005481065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 04:30:11 np0005481065 systemd[1]: Finished Create netns directory.
Oct 11 04:30:12 np0005481065 python3.9[238506]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:13 np0005481065 python3.9[238658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:13 np0005481065 python3.9[238781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171412.636839-725-126955633758627/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:15 np0005481065 python3.9[238933]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:30:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:30:15.159 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:30:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:30:15.160 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:30:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:30:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:30:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:15 np0005481065 python3.9[239085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:16 np0005481065 python3.9[239208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171415.3658206-750-139762178139221/.source.json _original_basename=.zbb_7_qx follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:17 np0005481065 python3.9[239360]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:20 np0005481065 podman[239759]: 2025-10-11 08:30:20.220634922 +0000 UTC m=+0.090685217 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 04:30:20 np0005481065 python3.9[239804]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 11 04:30:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:21 np0005481065 python3.9[239958]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:30:22 np0005481065 python3.9[240110]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 04:30:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:24 np0005481065 python3[240289]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:25 np0005481065 podman[240301]: 2025-10-11 08:30:25.361263743 +0000 UTC m=+1.211142571 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 11 04:30:25 np0005481065 podman[240360]: 2025-10-11 08:30:25.62589122 +0000 UTC m=+0.075480998 container create 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:30:25 np0005481065 podman[240360]: 2025-10-11 08:30:25.589196396 +0000 UTC m=+0.038786234 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 11 04:30:25 np0005481065 python3[240289]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 11 04:30:26 np0005481065 python3.9[240550]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:30:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:27 np0005481065 python3.9[240704]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:28 np0005481065 python3.9[240780]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:30:28 np0005481065 python3.9[240931]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760171428.1494484-838-204843542158204/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:29 np0005481065 python3.9[241007]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:30:29 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:29 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:29 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:30 np0005481065 python3.9[241117]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:30:30 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:30 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:30 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:31 np0005481065 systemd[1]: Starting multipathd container...
Oct 11 04:30:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:31 np0005481065 podman[241170]: 2025-10-11 08:30:31.466517316 +0000 UTC m=+0.123389808 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 04:30:31 np0005481065 systemd[1]: Started /usr/bin/podman healthcheck run 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.
Oct 11 04:30:31 np0005481065 podman[241157]: 2025-10-11 08:30:31.492158228 +0000 UTC m=+0.235248380 container init 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:30:31 np0005481065 multipathd[241189]: + sudo -E kolla_set_configs
Oct 11 04:30:31 np0005481065 podman[241157]: 2025-10-11 08:30:31.530020295 +0000 UTC m=+0.273110447 container start 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:30:31 np0005481065 podman[241157]: multipathd
Oct 11 04:30:31 np0005481065 systemd[1]: Started multipathd container.
Oct 11 04:30:31 np0005481065 multipathd[241189]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:30:31 np0005481065 multipathd[241189]: INFO:__main__:Validating config file
Oct 11 04:30:31 np0005481065 multipathd[241189]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:30:31 np0005481065 multipathd[241189]: INFO:__main__:Writing out command to execute
Oct 11 04:30:31 np0005481065 multipathd[241189]: ++ cat /run_command
Oct 11 04:30:31 np0005481065 multipathd[241189]: + CMD='/usr/sbin/multipathd -d'
Oct 11 04:30:31 np0005481065 multipathd[241189]: + ARGS=
Oct 11 04:30:31 np0005481065 multipathd[241189]: + sudo kolla_copy_cacerts
Oct 11 04:30:31 np0005481065 podman[241199]: 2025-10-11 08:30:31.618771826 +0000 UTC m=+0.076046073 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:30:31 np0005481065 multipathd[241189]: Running command: '/usr/sbin/multipathd -d'
Oct 11 04:30:31 np0005481065 multipathd[241189]: + [[ ! -n '' ]]
Oct 11 04:30:31 np0005481065 multipathd[241189]: + . kolla_extend_start
Oct 11 04:30:31 np0005481065 multipathd[241189]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 04:30:31 np0005481065 multipathd[241189]: + umask 0022
Oct 11 04:30:31 np0005481065 multipathd[241189]: + exec /usr/sbin/multipathd -d
Oct 11 04:30:31 np0005481065 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-7c8c33c3794afeb.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:30:31 np0005481065 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-7c8c33c3794afeb.service: Failed with result 'exit-code'.
Oct 11 04:30:31 np0005481065 multipathd[241189]: 3276.396699 | --------start up--------
Oct 11 04:30:31 np0005481065 multipathd[241189]: 3276.396716 | read /etc/multipath.conf
Oct 11 04:30:31 np0005481065 multipathd[241189]: 3276.406473 | path checkers start up
Oct 11 04:30:32 np0005481065 python3.9[241382]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:30:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:33 np0005481065 python3.9[241536]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:30:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:34 np0005481065 python3.9[241701]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:30:34 np0005481065 systemd[1]: Stopping multipathd container...
Oct 11 04:30:34 np0005481065 multipathd[241189]: 3279.057544 | exit (signal)
Oct 11 04:30:34 np0005481065 multipathd[241189]: 3279.057647 | --------shut down-------
Oct 11 04:30:34 np0005481065 systemd[1]: libpod-37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.scope: Deactivated successfully.
Oct 11 04:30:34 np0005481065 podman[241705]: 2025-10-11 08:30:34.33250394 +0000 UTC m=+0.095383059 container died 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:34 np0005481065 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-7c8c33c3794afeb.timer: Deactivated successfully.
Oct 11 04:30:34 np0005481065 systemd[1]: Stopped /usr/bin/podman healthcheck run 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.
Oct 11 04:30:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-userdata-shm.mount: Deactivated successfully.
Oct 11 04:30:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c-merged.mount: Deactivated successfully.
Oct 11 04:30:34 np0005481065 podman[241705]: 2025-10-11 08:30:34.512284896 +0000 UTC m=+0.275164015 container cleanup 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 04:30:34 np0005481065 podman[241705]: multipathd
Oct 11 04:30:34 np0005481065 podman[241735]: multipathd
Oct 11 04:30:34 np0005481065 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 11 04:30:34 np0005481065 systemd[1]: Stopped multipathd container.
Oct 11 04:30:34 np0005481065 systemd[1]: Starting multipathd container...
Oct 11 04:30:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:34 np0005481065 systemd[1]: Started /usr/bin/podman healthcheck run 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.
Oct 11 04:30:34 np0005481065 podman[241746]: 2025-10-11 08:30:34.811111087 +0000 UTC m=+0.165602147 container init 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:30:34 np0005481065 multipathd[241762]: + sudo -E kolla_set_configs
Oct 11 04:30:34 np0005481065 podman[241746]: 2025-10-11 08:30:34.851333211 +0000 UTC m=+0.205824211 container start 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:30:34 np0005481065 podman[241746]: multipathd
Oct 11 04:30:34 np0005481065 systemd[1]: Started multipathd container.
Oct 11 04:30:34 np0005481065 multipathd[241762]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:30:34 np0005481065 multipathd[241762]: INFO:__main__:Validating config file
Oct 11 04:30:34 np0005481065 multipathd[241762]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:30:34 np0005481065 multipathd[241762]: INFO:__main__:Writing out command to execute
Oct 11 04:30:34 np0005481065 podman[241769]: 2025-10-11 08:30:34.951517434 +0000 UTC m=+0.083986078 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:34 np0005481065 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-536a9e401c18b8f.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 04:30:34 np0005481065 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-536a9e401c18b8f.service: Failed with result 'exit-code'.
Oct 11 04:30:34 np0005481065 multipathd[241762]: ++ cat /run_command
Oct 11 04:30:34 np0005481065 multipathd[241762]: + CMD='/usr/sbin/multipathd -d'
Oct 11 04:30:34 np0005481065 multipathd[241762]: + ARGS=
Oct 11 04:30:34 np0005481065 multipathd[241762]: + sudo kolla_copy_cacerts
Oct 11 04:30:34 np0005481065 multipathd[241762]: + [[ ! -n '' ]]
Oct 11 04:30:34 np0005481065 multipathd[241762]: + . kolla_extend_start
Oct 11 04:30:34 np0005481065 multipathd[241762]: Running command: '/usr/sbin/multipathd -d'
Oct 11 04:30:34 np0005481065 multipathd[241762]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 04:30:34 np0005481065 multipathd[241762]: + umask 0022
Oct 11 04:30:34 np0005481065 multipathd[241762]: + exec /usr/sbin/multipathd -d
Oct 11 04:30:35 np0005481065 multipathd[241762]: 3279.765478 | --------start up--------
Oct 11 04:30:35 np0005481065 multipathd[241762]: 3279.766019 | read /etc/multipath.conf
Oct 11 04:30:35 np0005481065 multipathd[241762]: 3279.774370 | path checkers start up
Oct 11 04:30:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:35 np0005481065 python3.9[241953]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:36 np0005481065 python3.9[242105]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 04:30:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:37 np0005481065 python3.9[242257]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 11 04:30:37 np0005481065 kernel: Key type psk registered
Oct 11 04:30:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:38 np0005481065 podman[242393]: 2025-10-11 08:30:38.537083004 +0000 UTC m=+0.159208987 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:30:38 np0005481065 python3.9[242440]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:30:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:39 np0005481065 python3.9[242572]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171438.0185792-918-4015368103802/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:40 np0005481065 python3.9[242724]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:41 np0005481065 python3.9[242876]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:30:41 np0005481065 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 04:30:41 np0005481065 systemd[1]: Stopped Load Kernel Modules.
Oct 11 04:30:41 np0005481065 systemd[1]: Stopping Load Kernel Modules...
Oct 11 04:30:41 np0005481065 systemd[1]: Starting Load Kernel Modules...
Oct 11 04:30:41 np0005481065 systemd[1]: Finished Load Kernel Modules.
Oct 11 04:30:42 np0005481065 python3.9[243032]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 04:30:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:43 np0005481065 python3.9[243116]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 04:30:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:49 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:49 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:49 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:49 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:50 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:50 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:50 np0005481065 podman[243240]: 2025-10-11 08:30:50.473314683 +0000 UTC m=+0.106145853 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:30:50 np0005481065 systemd-logind[819]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 04:30:50 np0005481065 systemd-logind[819]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 04:30:50 np0005481065 lvm[243351]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 04:30:50 np0005481065 lvm[243351]: VG ceph_vg0 finished
Oct 11 04:30:50 np0005481065 lvm[243352]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 04:30:50 np0005481065 lvm[243352]: VG ceph_vg1 finished
Oct 11 04:30:50 np0005481065 lvm[243349]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 04:30:50 np0005481065 lvm[243349]: VG ceph_vg2 finished
Oct 11 04:30:50 np0005481065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 04:30:50 np0005481065 systemd[1]: Starting man-db-cache-update.service...
Oct 11 04:30:50 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:50 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:50 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:51 np0005481065 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:30:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:30:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9dfe07c3-8452-4a7e-a006-be92927547ec does not exist
Oct 11 04:30:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a4f657b2-b0f9-4773-981d-361ca93a0033 does not exist
Oct 11 04:30:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2b9113d8-4d81-4c96-b9b6-5b0984e3f683 does not exist
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:30:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.080048889 +0000 UTC m=+0.055573937 container create c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:30:52 np0005481065 systemd[1]: Started libpod-conmon-c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56.scope.
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.056967429 +0000 UTC m=+0.032492487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.19364827 +0000 UTC m=+0.169173358 container init c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.204323561 +0000 UTC m=+0.179848569 container start c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.209405444 +0000 UTC m=+0.184930542 container attach c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:52 np0005481065 beautiful_shamir[244521]: 167 167
Oct 11 04:30:52 np0005481065 systemd[1]: libpod-c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56.scope: Deactivated successfully.
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.214358144 +0000 UTC m=+0.189883162 container died c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2d7b673f38d50a5ef1c0044613536f556efbc9914f9f4e2eecb556645bd245c7-merged.mount: Deactivated successfully.
Oct 11 04:30:52 np0005481065 podman[244375]: 2025-10-11 08:30:52.269605061 +0000 UTC m=+0.245130079 container remove c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:30:52 np0005481065 systemd[1]: libpod-conmon-c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56.scope: Deactivated successfully.
Oct 11 04:30:52 np0005481065 podman[244804]: 2025-10-11 08:30:52.502245336 +0000 UTC m=+0.062591964 container create e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 04:30:52 np0005481065 systemd[1]: Started libpod-conmon-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope.
Oct 11 04:30:52 np0005481065 podman[244804]: 2025-10-11 08:30:52.47114178 +0000 UTC m=+0.031488398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:52 np0005481065 podman[244804]: 2025-10-11 08:30:52.624623025 +0000 UTC m=+0.184969633 container init e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:30:52 np0005481065 podman[244804]: 2025-10-11 08:30:52.639936357 +0000 UTC m=+0.200282945 container start e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:52 np0005481065 podman[244804]: 2025-10-11 08:30:52.644560877 +0000 UTC m=+0.204907505 container attach e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:30:52 np0005481065 python3.9[244840]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:52 np0005481065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 04:30:52 np0005481065 systemd[1]: Finished man-db-cache-update.service.
Oct 11 04:30:52 np0005481065 systemd[1]: man-db-cache-update.service: Consumed 2.220s CPU time.
Oct 11 04:30:52 np0005481065 systemd[1]: run-r31dae10ddc3a4f9fa80492e05f87c37c.service: Deactivated successfully.
Oct 11 04:30:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:53 np0005481065 python3.9[245080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 04:30:53 np0005481065 heuristic_wescoff[244892]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:30:53 np0005481065 heuristic_wescoff[244892]: --> relative data size: 1.0
Oct 11 04:30:53 np0005481065 heuristic_wescoff[244892]: --> All data devices are unavailable
Oct 11 04:30:53 np0005481065 systemd[1]: libpod-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope: Deactivated successfully.
Oct 11 04:30:53 np0005481065 systemd[1]: libpod-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope: Consumed 1.159s CPU time.
Oct 11 04:30:53 np0005481065 podman[244804]: 2025-10-11 08:30:53.877437019 +0000 UTC m=+1.437783637 container died e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:30:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e-merged.mount: Deactivated successfully.
Oct 11 04:30:53 np0005481065 podman[244804]: 2025-10-11 08:30:53.96371203 +0000 UTC m=+1.524058628 container remove e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:30:53 np0005481065 systemd[1]: libpod-conmon-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope: Deactivated successfully.
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.713325264 +0000 UTC m=+0.050037721 container create 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:30:54
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', 'volumes']
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:54 np0005481065 systemd[1]: Started libpod-conmon-5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530.scope.
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.688155535 +0000 UTC m=+0.024868002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.836933847 +0000 UTC m=+0.173646314 container init 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.850286984 +0000 UTC m=+0.186999441 container start 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:54 np0005481065 interesting_shannon[245425]: 167 167
Oct 11 04:30:54 np0005481065 systemd[1]: libpod-5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530.scope: Deactivated successfully.
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.8543943 +0000 UTC m=+0.191106757 container attach 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.860387488 +0000 UTC m=+0.197099945 container died 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:30:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2e5d63f0666364347eb3a7bd0b0899be1d3c336581a6a059fe9432846be0576d-merged.mount: Deactivated successfully.
Oct 11 04:30:54 np0005481065 podman[245389]: 2025-10-11 08:30:54.914451312 +0000 UTC m=+0.251163769 container remove 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:30:54 np0005481065 systemd[1]: libpod-conmon-5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530.scope: Deactivated successfully.
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:30:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:30:54 np0005481065 python3.9[245422]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:30:55 np0005481065 podman[245471]: 2025-10-11 08:30:55.136424557 +0000 UTC m=+0.055782953 container create 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:30:55 np0005481065 systemd[1]: Started libpod-conmon-35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e.scope.
Oct 11 04:30:55 np0005481065 podman[245471]: 2025-10-11 08:30:55.107135282 +0000 UTC m=+0.026493758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:55 np0005481065 podman[245471]: 2025-10-11 08:30:55.238420421 +0000 UTC m=+0.157778867 container init 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:30:55 np0005481065 podman[245471]: 2025-10-11 08:30:55.252482978 +0000 UTC m=+0.171841404 container start 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:30:55 np0005481065 podman[245471]: 2025-10-11 08:30:55.256658695 +0000 UTC m=+0.176017121 container attach 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]: {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:    "0": [
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:        {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "devices": [
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "/dev/loop3"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            ],
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_name": "ceph_lv0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_size": "21470642176",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "name": "ceph_lv0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "tags": {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cluster_name": "ceph",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.crush_device_class": "",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.encrypted": "0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osd_id": "0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.type": "block",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.vdo": "0"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            },
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "type": "block",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "vg_name": "ceph_vg0"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:        }
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:    ],
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:    "1": [
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:        {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "devices": [
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "/dev/loop4"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            ],
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_name": "ceph_lv1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_size": "21470642176",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "name": "ceph_lv1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "tags": {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cluster_name": "ceph",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.crush_device_class": "",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.encrypted": "0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osd_id": "1",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.type": "block",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.vdo": "0"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            },
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "type": "block",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "vg_name": "ceph_vg1"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:        }
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:    ],
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:    "2": [
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:        {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "devices": [
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "/dev/loop5"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            ],
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_name": "ceph_lv2",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_size": "21470642176",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "name": "ceph_lv2",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "tags": {
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.cluster_name": "ceph",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.crush_device_class": "",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.encrypted": "0",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osd_id": "2",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.type": "block",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:                "ceph.vdo": "0"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            },
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "type": "block",
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:            "vg_name": "ceph_vg2"
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:        }
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]:    ]
Oct 11 04:30:56 np0005481065 zealous_nobel[245487]: }
Oct 11 04:30:56 np0005481065 podman[245471]: 2025-10-11 08:30:56.07056142 +0000 UTC m=+0.989919856 container died 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:56 np0005481065 systemd[1]: libpod-35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e.scope: Deactivated successfully.
Oct 11 04:30:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4-merged.mount: Deactivated successfully.
Oct 11 04:30:56 np0005481065 podman[245471]: 2025-10-11 08:30:56.135622493 +0000 UTC m=+1.054980899 container remove 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:30:56 np0005481065 systemd[1]: libpod-conmon-35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e.scope: Deactivated successfully.
Oct 11 04:30:56 np0005481065 python3.9[245634]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:30:56 np0005481065 systemd[1]: Reloading.
Oct 11 04:30:56 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:30:56 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:30:56 np0005481065 podman[245811]: 2025-10-11 08:30:56.997569083 +0000 UTC m=+0.123146101 container create df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:30:57 np0005481065 podman[245811]: 2025-10-11 08:30:56.910562151 +0000 UTC m=+0.036139159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:57 np0005481065 systemd[1]: Started libpod-conmon-df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc.scope.
Oct 11 04:30:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:57 np0005481065 podman[245811]: 2025-10-11 08:30:57.208546268 +0000 UTC m=+0.334123336 container init df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:30:57 np0005481065 podman[245811]: 2025-10-11 08:30:57.218075137 +0000 UTC m=+0.343652135 container start df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:30:57 np0005481065 hopeful_cray[245851]: 167 167
Oct 11 04:30:57 np0005481065 systemd[1]: libpod-df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc.scope: Deactivated successfully.
Oct 11 04:30:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:57 np0005481065 podman[245811]: 2025-10-11 08:30:57.247882187 +0000 UTC m=+0.373459225 container attach df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:57 np0005481065 podman[245811]: 2025-10-11 08:30:57.251414916 +0000 UTC m=+0.376991914 container died df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cafd15bacc684cc6055e500c62c947083032abbf5803ba22446f4d8ed831c311-merged.mount: Deactivated successfully.
Oct 11 04:30:57 np0005481065 podman[245811]: 2025-10-11 08:30:57.556912435 +0000 UTC m=+0.682489463 container remove df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:30:57 np0005481065 systemd[1]: libpod-conmon-df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc.scope: Deactivated successfully.
Oct 11 04:30:57 np0005481065 podman[246002]: 2025-10-11 08:30:57.810498251 +0000 UTC m=+0.074345736 container create d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:30:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:30:57 np0005481065 podman[246002]: 2025-10-11 08:30:57.780785574 +0000 UTC m=+0.044633099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:30:57 np0005481065 systemd[1]: Started libpod-conmon-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope.
Oct 11 04:30:57 np0005481065 python3.9[246003]: ansible-ansible.builtin.service_facts Invoked
Oct 11 04:30:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:30:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:30:57 np0005481065 podman[246002]: 2025-10-11 08:30:57.993545019 +0000 UTC m=+0.257392514 container init d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:30:57 np0005481065 network[246039]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 04:30:57 np0005481065 network[246041]: 'network-scripts' will be removed from distribution in near future.
Oct 11 04:30:58 np0005481065 network[246042]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 04:30:58 np0005481065 podman[246002]: 2025-10-11 08:30:58.002689817 +0000 UTC m=+0.266537272 container start d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:30:58 np0005481065 podman[246002]: 2025-10-11 08:30:58.006424012 +0000 UTC m=+0.270271477 container attach d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]: {
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "osd_id": 2,
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "type": "bluestore"
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:    },
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "osd_id": 0,
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "type": "bluestore"
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:    },
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "osd_id": 1,
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:        "type": "bluestore"
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]:    }
Oct 11 04:30:59 np0005481065 hungry_herschel[246020]: }
Oct 11 04:30:59 np0005481065 systemd[1]: libpod-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope: Deactivated successfully.
Oct 11 04:30:59 np0005481065 podman[246002]: 2025-10-11 08:30:59.097701693 +0000 UTC m=+1.361549158 container died d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:30:59 np0005481065 systemd[1]: libpod-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope: Consumed 1.100s CPU time.
Oct 11 04:30:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050-merged.mount: Deactivated successfully.
Oct 11 04:30:59 np0005481065 podman[246002]: 2025-10-11 08:30:59.198414631 +0000 UTC m=+1.462262096 container remove d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:30:59 np0005481065 systemd[1]: libpod-conmon-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope: Deactivated successfully.
Oct 11 04:30:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:30:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:30:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:30:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:30:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:30:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4a3e6bcd-5138-4803-9b27-2f76c10f8754 does not exist
Oct 11 04:30:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 28c5034f-b8c7-4dfa-bec1-207eef25f928 does not exist
Oct 11 04:31:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:31:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:31:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:01 np0005481065 podman[246228]: 2025-10-11 08:31:01.647467345 +0000 UTC m=+0.109112435 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 11 04:31:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:03 np0005481065 python3.9[246433]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:31:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:31:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:05 np0005481065 podman[246558]: 2025-10-11 08:31:05.541844007 +0000 UTC m=+0.102408886 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:31:05 np0005481065 python3.9[246607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:06 np0005481065 python3.9[246760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:07 np0005481065 python3.9[246913]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:08 np0005481065 python3.9[247066]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:08 np0005481065 podman[247068]: 2025-10-11 08:31:08.825219492 +0000 UTC m=+0.123926334 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:31:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:09 np0005481065 python3.9[247245]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:10 np0005481065 python3.9[247398]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:11 np0005481065 python3.9[247551]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:31:12 np0005481065 python3.9[247704]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:13 np0005481065 python3.9[247856]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:14 np0005481065 python3.9[248008]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:15 np0005481065 python3.9[248160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:31:15.160 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:31:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:31:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:31:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:31:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:31:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:15 np0005481065 python3.9[248312]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:16 np0005481065 python3.9[248464]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:17 np0005481065 python3.9[248616]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:18 np0005481065 python3.9[248768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:19 np0005481065 python3.9[248920]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:20 np0005481065 python3.9[249072]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:20 np0005481065 podman[249178]: 2025-10-11 08:31:20.791566577 +0000 UTC m=+0.081049875 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 11 04:31:21 np0005481065 python3.9[249244]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:21 np0005481065 python3.9[249396]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:22 np0005481065 python3.9[249548]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:23 np0005481065 python3.9[249700]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:24 np0005481065 python3.9[249852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:25 np0005481065 python3.9[250004]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:31:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:26 np0005481065 python3.9[250156]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:27 np0005481065 python3.9[250308]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 04:31:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:28 np0005481065 python3.9[250460]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:31:28 np0005481065 systemd[1]: Reloading.
Oct 11 04:31:28 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:31:28 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:31:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:29 np0005481065 python3.9[250647]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:30 np0005481065 python3.9[250800]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:31 np0005481065 python3.9[250953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:31 np0005481065 podman[251031]: 2025-10-11 08:31:31.803588471 +0000 UTC m=+0.100213395 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:31:32 np0005481065 python3.9[251127]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:32 np0005481065 python3.9[251280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:33 np0005481065 python3.9[251433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:34 np0005481065 python3.9[251586]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:35 np0005481065 python3.9[251739]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 04:31:35 np0005481065 podman[251741]: 2025-10-11 08:31:35.774356336 +0000 UTC m=+0.102767457 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:31:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:37 np0005481065 python3.9[251911]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:38 np0005481065 python3.9[252063]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:39 np0005481065 podman[252215]: 2025-10-11 08:31:39.026982903 +0000 UTC m=+0.130115008 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 04:31:39 np0005481065 python3.9[252216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:40 np0005481065 python3.9[252393]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:40 np0005481065 python3.9[252545]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:41 np0005481065 python3.9[252697]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:42 np0005481065 python3.9[252849]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:43 np0005481065 python3.9[253001]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:44 np0005481065 python3.9[253153]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:45 np0005481065 python3.9[253305]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:45 np0005481065 python3.9[253457]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:46 np0005481065 python3.9[253609]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:51 np0005481065 podman[253634]: 2025-10-11 08:31:51.782186581 +0000 UTC m=+0.078460822 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 11 04:31:52 np0005481065 python3.9[253781]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 11 04:31:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:53 np0005481065 python3.9[253934]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:31:54
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images', 'backups']
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:31:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:31:54 np0005481065 python3.9[254092]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 04:31:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:31:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5622 writes, 23K keys, 5622 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5622 writes, 881 syncs, 6.38 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 11 04:31:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:56 np0005481065 systemd-logind[819]: New session 53 of user zuul.
Oct 11 04:31:56 np0005481065 systemd[1]: Started Session 53 of User zuul.
Oct 11 04:31:56 np0005481065 systemd[1]: session-53.scope: Deactivated successfully.
Oct 11 04:31:56 np0005481065 systemd-logind[819]: Session 53 logged out. Waiting for processes to exit.
Oct 11 04:31:56 np0005481065 systemd-logind[819]: Removed session 53.
Oct 11 04:31:57 np0005481065 python3.9[254278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:31:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:57 np0005481065 python3.9[254399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171516.401401-1555-269118851967355/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:31:58 np0005481065 python3.9[254549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:31:59 np0005481065 python3.9[254625]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:31:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:31:59 np0005481065 python3.9[254823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:32:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:32:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:32:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 6712 writes, 27K keys, 6712 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6712 writes, 1226 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 11 04:32:00 np0005481065 python3.9[255064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171519.2215683-1555-266665536993903/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8c2bda3e-e11f-4c1a-bca9-cdec29912896 does not exist
Oct 11 04:32:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4aa11e04-9816-4b74-8117-1144417eb6f2 does not exist
Oct 11 04:32:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 93d3038a-70b7-49cc-8b7d-7af4886c39a9 does not exist
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:32:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:32:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:01 np0005481065 python3.9[255301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:01 np0005481065 podman[255558]: 2025-10-11 08:32:01.946550324 +0000 UTC m=+0.050340161 container create e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:32:01 np0005481065 systemd[1]: Started libpod-conmon-e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304.scope.
Oct 11 04:32:02 np0005481065 podman[255558]: 2025-10-11 08:32:01.925062143 +0000 UTC m=+0.028851990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:02 np0005481065 podman[255558]: 2025-10-11 08:32:02.06421332 +0000 UTC m=+0.168003137 container init e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:32:02 np0005481065 podman[255558]: 2025-10-11 08:32:02.076619447 +0000 UTC m=+0.180409294 container start e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:32:02 np0005481065 compassionate_spence[255575]: 167 167
Oct 11 04:32:02 np0005481065 podman[255558]: 2025-10-11 08:32:02.105528717 +0000 UTC m=+0.209318534 container attach e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:02 np0005481065 systemd[1]: libpod-e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304.scope: Deactivated successfully.
Oct 11 04:32:02 np0005481065 python3.9[255557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171520.784946-1555-121773290648046/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:02 np0005481065 podman[255572]: 2025-10-11 08:32:02.151558337 +0000 UTC m=+0.148049468 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 11 04:32:02 np0005481065 podman[255594]: 2025-10-11 08:32:02.160089855 +0000 UTC m=+0.029336692 container died e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:32:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:32:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:32:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1cd7a2baf67e75eafbc03850439205fd34a8c845f8c5ade34e5f0acef7b65008-merged.mount: Deactivated successfully.
Oct 11 04:32:02 np0005481065 podman[255594]: 2025-10-11 08:32:02.215356573 +0000 UTC m=+0.084603430 container remove e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:32:02 np0005481065 systemd[1]: libpod-conmon-e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304.scope: Deactivated successfully.
Oct 11 04:32:02 np0005481065 podman[255671]: 2025-10-11 08:32:02.424954474 +0000 UTC m=+0.046919785 container create f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:32:02 np0005481065 systemd[1]: Started libpod-conmon-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope.
Oct 11 04:32:02 np0005481065 podman[255671]: 2025-10-11 08:32:02.405681424 +0000 UTC m=+0.027646735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:02 np0005481065 podman[255671]: 2025-10-11 08:32:02.533447783 +0000 UTC m=+0.155413144 container init f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:32:02 np0005481065 podman[255671]: 2025-10-11 08:32:02.547319101 +0000 UTC m=+0.169284392 container start f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:32:02 np0005481065 podman[255671]: 2025-10-11 08:32:02.550869971 +0000 UTC m=+0.172835252 container attach f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:32:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:02 np0005481065 python3.9[255791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:03 np0005481065 python3.9[255920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171522.3421571-1555-13752243027683/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:03 np0005481065 angry_payne[255722]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:32:03 np0005481065 angry_payne[255722]: --> relative data size: 1.0
Oct 11 04:32:03 np0005481065 angry_payne[255722]: --> All data devices are unavailable
Oct 11 04:32:03 np0005481065 systemd[1]: libpod-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope: Deactivated successfully.
Oct 11 04:32:03 np0005481065 podman[255671]: 2025-10-11 08:32:03.725393047 +0000 UTC m=+1.347358358 container died f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:32:03 np0005481065 systemd[1]: libpod-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope: Consumed 1.111s CPU time.
Oct 11 04:32:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e-merged.mount: Deactivated successfully.
Oct 11 04:32:03 np0005481065 podman[255671]: 2025-10-11 08:32:03.800079859 +0000 UTC m=+1.422045140 container remove f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:32:03 np0005481065 systemd[1]: libpod-conmon-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope: Deactivated successfully.
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:32:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:32:04 np0005481065 python3.9[256200]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.676191758 +0000 UTC m=+0.048142500 container create 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:32:04 np0005481065 systemd[1]: Started libpod-conmon-526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258.scope.
Oct 11 04:32:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.65556631 +0000 UTC m=+0.027517052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.770638323 +0000 UTC m=+0.142589115 container init 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.785996643 +0000 UTC m=+0.157947375 container start 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.790337185 +0000 UTC m=+0.162287937 container attach 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:32:04 np0005481065 kind_cerf[256283]: 167 167
Oct 11 04:32:04 np0005481065 systemd[1]: libpod-526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258.scope: Deactivated successfully.
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.796155768 +0000 UTC m=+0.168106500 container died 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:32:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8fe40077af7f6e854f30e998075e301caf5c936696c0cafc469f5bbee189c44c-merged.mount: Deactivated successfully.
Oct 11 04:32:04 np0005481065 podman[256249]: 2025-10-11 08:32:04.85014179 +0000 UTC m=+0.222092522 container remove 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:32:04 np0005481065 systemd[1]: libpod-conmon-526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258.scope: Deactivated successfully.
Oct 11 04:32:05 np0005481065 podman[256383]: 2025-10-11 08:32:05.092675983 +0000 UTC m=+0.073495170 container create cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:05 np0005481065 systemd[1]: Started libpod-conmon-cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852.scope.
Oct 11 04:32:05 np0005481065 podman[256383]: 2025-10-11 08:32:05.063937568 +0000 UTC m=+0.044756775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:05 np0005481065 podman[256383]: 2025-10-11 08:32:05.20966644 +0000 UTC m=+0.190485647 container init cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:32:05 np0005481065 podman[256383]: 2025-10-11 08:32:05.226757458 +0000 UTC m=+0.207576655 container start cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:32:05 np0005481065 podman[256383]: 2025-10-11 08:32:05.231318076 +0000 UTC m=+0.212137273 container attach cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:32:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:05 np0005481065 python3.9[256458]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]: {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:    "0": [
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:        {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "devices": [
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "/dev/loop3"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            ],
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_name": "ceph_lv0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_size": "21470642176",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "name": "ceph_lv0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "tags": {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cluster_name": "ceph",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.crush_device_class": "",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.encrypted": "0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osd_id": "0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.type": "block",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.vdo": "0"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            },
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "type": "block",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "vg_name": "ceph_vg0"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:        }
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:    ],
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:    "1": [
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:        {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "devices": [
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "/dev/loop4"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            ],
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_name": "ceph_lv1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_size": "21470642176",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "name": "ceph_lv1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "tags": {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cluster_name": "ceph",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.crush_device_class": "",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.encrypted": "0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osd_id": "1",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.type": "block",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.vdo": "0"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            },
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "type": "block",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "vg_name": "ceph_vg1"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:        }
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:    ],
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:    "2": [
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:        {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "devices": [
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "/dev/loop5"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            ],
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_name": "ceph_lv2",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_size": "21470642176",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "name": "ceph_lv2",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "tags": {
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.cluster_name": "ceph",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.crush_device_class": "",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.encrypted": "0",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osd_id": "2",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.type": "block",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:                "ceph.vdo": "0"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            },
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "type": "block",
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:            "vg_name": "ceph_vg2"
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:        }
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]:    ]
Oct 11 04:32:05 np0005481065 elegant_mahavira[256425]: }
Oct 11 04:32:05 np0005481065 systemd[1]: libpod-cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852.scope: Deactivated successfully.
Oct 11 04:32:06 np0005481065 podman[256383]: 2025-10-11 08:32:05.99941715 +0000 UTC m=+0.980236337 container died cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:32:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23-merged.mount: Deactivated successfully.
Oct 11 04:32:06 np0005481065 podman[256383]: 2025-10-11 08:32:06.086502369 +0000 UTC m=+1.067321536 container remove cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:32:06 np0005481065 systemd[1]: libpod-conmon-cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852.scope: Deactivated successfully.
Oct 11 04:32:06 np0005481065 podman[256568]: 2025-10-11 08:32:06.151467898 +0000 UTC m=+0.116518404 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:32:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:32:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 5676 writes, 23K keys, 5676 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5676 writes, 878 syncs, 6.46 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct 11 04:32:06 np0005481065 python3.9[256658]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:06 np0005481065 podman[256887]: 2025-10-11 08:32:06.89772927 +0000 UTC m=+0.053307074 container create deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:06 np0005481065 systemd[1]: Started libpod-conmon-deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823.scope.
Oct 11 04:32:06 np0005481065 podman[256887]: 2025-10-11 08:32:06.871103965 +0000 UTC m=+0.026681829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:07 np0005481065 podman[256887]: 2025-10-11 08:32:07.001453646 +0000 UTC m=+0.157031510 container init deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:32:07 np0005481065 podman[256887]: 2025-10-11 08:32:07.014326516 +0000 UTC m=+0.169904300 container start deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:32:07 np0005481065 podman[256887]: 2025-10-11 08:32:07.018448221 +0000 UTC m=+0.174026035 container attach deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:32:07 np0005481065 vibrant_heisenberg[256928]: 167 167
Oct 11 04:32:07 np0005481065 systemd[1]: libpod-deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823.scope: Deactivated successfully.
Oct 11 04:32:07 np0005481065 podman[256887]: 2025-10-11 08:32:07.024137231 +0000 UTC m=+0.179715045 container died deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:32:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-543d316cd198d718a285aad7485e62e63c0f3a3eba392f45a7eef1c509d354d2-merged.mount: Deactivated successfully.
Oct 11 04:32:07 np0005481065 podman[256887]: 2025-10-11 08:32:07.073297947 +0000 UTC m=+0.228875721 container remove deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:32:07 np0005481065 systemd[1]: libpod-conmon-deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823.scope: Deactivated successfully.
Oct 11 04:32:07 np0005481065 python3.9[256974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:07 np0005481065 podman[256982]: 2025-10-11 08:32:07.324084581 +0000 UTC m=+0.063782537 container create 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:32:07 np0005481065 systemd[1]: Started libpod-conmon-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope.
Oct 11 04:32:07 np0005481065 podman[256982]: 2025-10-11 08:32:07.299386859 +0000 UTC m=+0.039084905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:32:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:07 np0005481065 podman[256982]: 2025-10-11 08:32:07.431285954 +0000 UTC m=+0.170983950 container init 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:32:07 np0005481065 podman[256982]: 2025-10-11 08:32:07.445929784 +0000 UTC m=+0.185627790 container start 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:32:07 np0005481065 podman[256982]: 2025-10-11 08:32:07.449699509 +0000 UTC m=+0.189397505 container attach 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:32:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 04:32:07 np0005481065 python3.9[257126]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760171526.623717-1648-8221585267862/.source _original_basename=.46essqj7 follow=False checksum=c9017603f1bc5177c2246b299a8eb428f37cce9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]: {
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "osd_id": 2,
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "type": "bluestore"
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:    },
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "osd_id": 0,
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "type": "bluestore"
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:    },
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "osd_id": 1,
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:        "type": "bluestore"
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]:    }
Oct 11 04:32:08 np0005481065 gifted_merkle[257022]: }
Oct 11 04:32:08 np0005481065 systemd[1]: libpod-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope: Deactivated successfully.
Oct 11 04:32:08 np0005481065 systemd[1]: libpod-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope: Consumed 1.148s CPU time.
Oct 11 04:32:08 np0005481065 podman[257256]: 2025-10-11 08:32:08.660174083 +0000 UTC m=+0.044478366 container died 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:32:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7-merged.mount: Deactivated successfully.
Oct 11 04:32:08 np0005481065 podman[257256]: 2025-10-11 08:32:08.735690869 +0000 UTC m=+0.119995062 container remove 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:32:08 np0005481065 systemd[1]: libpod-conmon-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope: Deactivated successfully.
Oct 11 04:32:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:32:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:32:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5218a6f6-82d2-4439-b7d4-8cebc544d1f7 does not exist
Oct 11 04:32:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 95afc3f7-c5b4-43a5-8b3e-a68376bed9ce does not exist
Oct 11 04:32:09 np0005481065 python3.9[257322]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:32:09 np0005481065 podman[257497]: 2025-10-11 08:32:09.889395993 +0000 UTC m=+0.179742646 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 04:32:10 np0005481065 python3.9[257536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:10 np0005481065 python3.9[257670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171529.3303435-1674-151816430564190/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:11 np0005481065 python3.9[257820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 04:32:12 np0005481065 python3.9[257941]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171530.897792-1689-232689671924507/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 04:32:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:13 np0005481065 python3.9[258093]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 11 04:32:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:14 np0005481065 python3.9[258245]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:32:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:32:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:32:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:32:15.162 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:32:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:32:15.162 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:32:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:15 np0005481065 python3[258397]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:32:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:24 np0005481065 podman[258469]: 2025-10-11 08:32:24.710173021 +0000 UTC m=+2.002081716 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:32:24 np0005481065 podman[258410]: 2025-10-11 08:32:24.732636831 +0000 UTC m=+9.347437240 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 11 04:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:24 np0005481065 podman[258512]: 2025-10-11 08:32:24.938348242 +0000 UTC m=+0.070741332 container create a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.build-date=20251001, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:32:24 np0005481065 podman[258512]: 2025-10-11 08:32:24.899075472 +0000 UTC m=+0.031468612 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 11 04:32:24 np0005481065 python3[258397]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 11 04:32:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:26 np0005481065 python3.9[258702]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:27 np0005481065 python3.9[258856]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 11 04:32:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:28 np0005481065 python3.9[259008]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 04:32:29 np0005481065 python3[259160]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 04:32:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:29 np0005481065 podman[259198]: 2025-10-11 08:32:29.411114738 +0000 UTC m=+0.068629554 container create 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2)
Oct 11 04:32:29 np0005481065 podman[259198]: 2025-10-11 08:32:29.372230158 +0000 UTC m=+0.029745034 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 11 04:32:29 np0005481065 python3[259160]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.393672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550393870, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1544, "num_deletes": 251, "total_data_size": 2541071, "memory_usage": 2581312, "flush_reason": "Manual Compaction"}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550414905, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2485526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14793, "largest_seqno": 16336, "table_properties": {"data_size": 2478349, "index_size": 4248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14331, "raw_average_key_size": 19, "raw_value_size": 2464057, "raw_average_value_size": 3366, "num_data_blocks": 194, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171383, "oldest_key_time": 1760171383, "file_creation_time": 1760171550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 21577 microseconds, and 11174 cpu microseconds.
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.415266) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2485526 bytes OK
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.415463) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.417136) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.417160) EVENT_LOG_v1 {"time_micros": 1760171550417152, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.417193) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2534380, prev total WAL file size 2534380, number of live WAL files 2.
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.419582) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2427KB)], [35(6838KB)]
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550420086, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9487791, "oldest_snapshot_seqno": -1}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3991 keys, 7714207 bytes, temperature: kUnknown
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550474214, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7714207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7685324, "index_size": 17829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97485, "raw_average_key_size": 24, "raw_value_size": 7610851, "raw_average_value_size": 1907, "num_data_blocks": 755, "num_entries": 3991, "num_filter_entries": 3991, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.474952) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7714207 bytes
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.476803) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.3 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 4505, records dropped: 514 output_compression: NoCompression
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.476879) EVENT_LOG_v1 {"time_micros": 1760171550476860, "job": 16, "event": "compaction_finished", "compaction_time_micros": 54422, "compaction_time_cpu_micros": 34225, "output_level": 6, "num_output_files": 1, "total_output_size": 7714207, "num_input_records": 4505, "num_output_records": 3991, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550478795, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550481718, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.419508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:30 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:32:30 np0005481065 python3.9[259387]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:31 np0005481065 python3.9[259541]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:32 np0005481065 podman[259664]: 2025-10-11 08:32:32.304099816 +0000 UTC m=+0.088750637 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:32:32 np0005481065 python3.9[259713]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760171551.6557946-1781-128799318449788/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 04:32:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:33 np0005481065 python3.9[259789]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 04:32:33 np0005481065 systemd[1]: Reloading.
Oct 11 04:32:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:33 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:32:33 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:32:34 np0005481065 python3.9[259901]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 04:32:34 np0005481065 systemd[1]: Reloading.
Oct 11 04:32:34 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 04:32:34 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 04:32:34 np0005481065 systemd[1]: Starting nova_compute container...
Oct 11 04:32:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:34 np0005481065 podman[259941]: 2025-10-11 08:32:34.914632404 +0000 UTC m=+0.157560034 container init 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:32:34 np0005481065 podman[259941]: 2025-10-11 08:32:34.927895865 +0000 UTC m=+0.170823445 container start 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:32:34 np0005481065 podman[259941]: nova_compute
Oct 11 04:32:34 np0005481065 nova_compute[259955]: + sudo -E kolla_set_configs
Oct 11 04:32:34 np0005481065 systemd[1]: Started nova_compute container.
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Validating config file
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying service configuration files
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Deleting /etc/ceph
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Creating directory /etc/ceph
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Writing out command to execute
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:35 np0005481065 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:32:35 np0005481065 nova_compute[259955]: ++ cat /run_command
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + CMD=nova-compute
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + ARGS=
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + sudo kolla_copy_cacerts
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + [[ ! -n '' ]]
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + . kolla_extend_start
Oct 11 04:32:35 np0005481065 nova_compute[259955]: Running command: 'nova-compute'
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + umask 0022
Oct 11 04:32:35 np0005481065 nova_compute[259955]: + exec nova-compute
Oct 11 04:32:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:36 np0005481065 python3.9[260117]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:36 np0005481065 podman[260194]: 2025-10-11 08:32:36.806344607 +0000 UTC m=+0.102954464 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 04:32:37 np0005481065 python3.9[260284]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:37 np0005481065 nova_compute[259955]: 2025-10-11 08:32:37.287 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 11 04:32:37 np0005481065 nova_compute[259955]: 2025-10-11 08:32:37.287 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 11 04:32:37 np0005481065 nova_compute[259955]: 2025-10-11 08:32:37.287 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 11 04:32:37 np0005481065 nova_compute[259955]: 2025-10-11 08:32:37.287 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 11 04:32:37 np0005481065 nova_compute[259955]: 2025-10-11 08:32:37.427 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:32:37 np0005481065 nova_compute[259955]: 2025-10-11 08:32:37.460 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:32:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.105 2 INFO nova.virt.driver [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 11 04:32:38 np0005481065 python3.9[260438]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.296 2 INFO nova.compute.provider_config [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.332 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.332 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.332 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.417 2 WARNING oslo_config.cfg [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 04:32:38 np0005481065 nova_compute[259955]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 04:32:38 np0005481065 nova_compute[259955]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 04:32:38 np0005481065 nova_compute[259955]: and ``live_migration_inbound_addr`` respectively.
Oct 11 04:32:38 np0005481065 nova_compute[259955]: ).  Its value may be silently ignored in the future.#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_secret_uuid        = 33219f8b-dc38-5a8f-a577-8ccc4b37190a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.483 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.484 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.504 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.505 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.505 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.505 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 11 04:32:38 np0005481065 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 04:32:38 np0005481065 systemd[1]: Started libvirt QEMU daemon.
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.591 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0338b40040> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.594 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0338b40040> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.596 2 INFO nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.618 2 WARNING nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 11 04:32:38 np0005481065 nova_compute[259955]: 2025-10-11 08:32:38.619 2 DEBUG nova.virt.libvirt.volume.mount [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 11 04:32:39 np0005481065 python3.9[260642]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 04:32:39 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:32:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.558 2 INFO nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <host>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <uuid>441009ae-b831-47c3-ab1e-dc607f9feb6d</uuid>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <arch>x86_64</arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model>EPYC-Rome-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <vendor>AMD</vendor>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <microcode version='16777317'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <signature family='23' model='49' stepping='0'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='x2apic'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='tsc-deadline'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='osxsave'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='hypervisor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='tsc_adjust'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='spec-ctrl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='stibp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='arch-capabilities'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='cmp_legacy'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='topoext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='virt-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='lbrv'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='tsc-scale'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='vmcb-clean'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='pause-filter'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='pfthreshold'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='svme-addr-chk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='rdctl-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='skip-l1dfl-vmentry'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='mds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature name='pschange-mc-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <pages unit='KiB' size='4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <pages unit='KiB' size='2048'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <pages unit='KiB' size='1048576'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <power_management>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <suspend_mem/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </power_management>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <iommu support='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <migration_features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <live/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <uri_transports>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <uri_transport>tcp</uri_transport>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <uri_transport>rdma</uri_transport>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </uri_transports>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </migration_features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <topology>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <cells num='1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <cell id='0'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          <memory unit='KiB'>7864360</memory>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          <pages unit='KiB' size='4'>1966090</pages>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          <pages unit='KiB' size='2048'>0</pages>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          <distances>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <sibling id='0' value='10'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          </distances>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          <cpus num='8'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:          </cpus>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        </cell>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </cells>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </topology>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <cache>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </cache>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <secmodel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model>selinux</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <doi>0</doi>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </secmodel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <secmodel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model>dac</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <doi>0</doi>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </secmodel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </host>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <guest>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <os_type>hvm</os_type>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <arch name='i686'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <wordsize>32</wordsize>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <domain type='qemu'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <domain type='kvm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <pae/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <nonpae/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <acpi default='on' toggle='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <apic default='on' toggle='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <cpuselection/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <deviceboot/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <disksnapshot default='on' toggle='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <externalSnapshot/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </guest>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <guest>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <os_type>hvm</os_type>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <arch name='x86_64'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <wordsize>64</wordsize>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <domain type='qemu'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <domain type='kvm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <acpi default='on' toggle='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <apic default='on' toggle='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <cpuselection/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <deviceboot/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <disksnapshot default='on' toggle='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <externalSnapshot/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </guest>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 
Oct 11 04:32:39 np0005481065 nova_compute[259955]: </capabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: #033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.567 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.602 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 04:32:39 np0005481065 nova_compute[259955]: <domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <domain>kvm</domain>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <arch>i686</arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <vcpu max='4096'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <iothreads supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <os supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='firmware'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <loader supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>rom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pflash</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='readonly'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>yes</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='secure'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </loader>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </os>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='maximumMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <vendor>AMD</vendor>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='succor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='custom' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-128'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-256'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-512'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <memoryBacking supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='sourceType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>file</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>anonymous</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>memfd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </memoryBacking>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <disk supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='diskDevice'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>disk</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cdrom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>floppy</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>lun</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>fdc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>sata</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </disk>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <graphics supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vnc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egl-headless</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>dbus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </graphics>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <video supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='modelType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vga</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cirrus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>none</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>bochs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ramfb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </video>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hostdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='mode'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>subsystem</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='startupPolicy'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>mandatory</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>requisite</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>optional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='subsysType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pci</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='capsType'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='pciBackend'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hostdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <rng supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>random</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </rng>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <filesystem supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='driverType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>path</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>handle</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtiofs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </filesystem>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <tpm supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-tis</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-crb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emulator</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>external</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendVersion'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>2.0</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </tpm>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <redirdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </redirdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <channel supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pty</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>unix</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </channel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <crypto supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>qemu</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </crypto>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <interface supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>passt</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </interface>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <panic supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>isa</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>hyperv</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </panic>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <gic supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <genid supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backup supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <async-teardown supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <ps2 supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sev supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sgx supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hyperv supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='features'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>relaxed</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vapic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>spinlocks</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vpindex</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>runtime</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>synic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>stimer</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reset</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vendor_id</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>frequencies</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reenlightenment</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tlbflush</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ipi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>avic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emsr_bitmap</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>xmm_input</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hyperv>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <launchSecurity supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: </domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.612 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 04:32:39 np0005481065 nova_compute[259955]: <domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <domain>kvm</domain>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <arch>i686</arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <vcpu max='240'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <iothreads supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <os supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='firmware'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <loader supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>rom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pflash</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='readonly'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>yes</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='secure'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </loader>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </os>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='maximumMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <vendor>AMD</vendor>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='succor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='custom' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-128'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-256'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-512'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <memoryBacking supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='sourceType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>file</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>anonymous</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>memfd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </memoryBacking>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <disk supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='diskDevice'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>disk</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cdrom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>floppy</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>lun</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ide</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>fdc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>sata</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </disk>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <graphics supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vnc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egl-headless</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>dbus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </graphics>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <video supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='modelType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vga</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cirrus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>none</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>bochs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ramfb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </video>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hostdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='mode'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>subsystem</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='startupPolicy'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>mandatory</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>requisite</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>optional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='subsysType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pci</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='capsType'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='pciBackend'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hostdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <rng supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>random</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </rng>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <filesystem supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='driverType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>path</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>handle</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtiofs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </filesystem>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <tpm supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-tis</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-crb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emulator</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>external</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendVersion'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>2.0</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </tpm>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <redirdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </redirdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <channel supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pty</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>unix</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </channel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <crypto supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>qemu</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </crypto>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <interface supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>passt</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </interface>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <panic supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>isa</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>hyperv</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </panic>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <gic supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <genid supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backup supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <async-teardown supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <ps2 supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sev supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sgx supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hyperv supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='features'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>relaxed</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vapic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>spinlocks</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vpindex</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>runtime</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>synic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>stimer</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reset</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vendor_id</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>frequencies</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reenlightenment</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tlbflush</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ipi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>avic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emsr_bitmap</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>xmm_input</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hyperv>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <launchSecurity supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: </domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.662 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.668 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 11 04:32:39 np0005481065 nova_compute[259955]: <domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <domain>kvm</domain>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <arch>x86_64</arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <vcpu max='4096'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <iothreads supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <os supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='firmware'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>efi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <loader supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>rom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pflash</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='readonly'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>yes</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='secure'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>yes</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </loader>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </os>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='maximumMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <vendor>AMD</vendor>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='succor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='custom' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-128'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-256'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-512'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <memoryBacking supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='sourceType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>file</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>anonymous</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>memfd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </memoryBacking>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <disk supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='diskDevice'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>disk</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cdrom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>floppy</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>lun</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>fdc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>sata</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </disk>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <graphics supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vnc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egl-headless</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>dbus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </graphics>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <video supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='modelType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vga</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cirrus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>none</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>bochs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ramfb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </video>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hostdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='mode'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>subsystem</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='startupPolicy'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>mandatory</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>requisite</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>optional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='subsysType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pci</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='capsType'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='pciBackend'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hostdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <rng supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>random</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </rng>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <filesystem supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='driverType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>path</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>handle</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtiofs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </filesystem>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <tpm supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-tis</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-crb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emulator</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>external</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendVersion'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>2.0</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </tpm>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <redirdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </redirdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <channel supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pty</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>unix</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </channel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <crypto supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>qemu</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </crypto>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <interface supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>passt</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </interface>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <panic supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>isa</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>hyperv</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </panic>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <gic supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <genid supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backup supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <async-teardown supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <ps2 supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sev supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sgx supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hyperv supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='features'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>relaxed</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vapic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>spinlocks</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vpindex</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>runtime</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>synic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>stimer</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reset</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vendor_id</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>frequencies</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reenlightenment</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tlbflush</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ipi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>avic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emsr_bitmap</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>xmm_input</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hyperv>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <launchSecurity supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: </domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.729 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 04:32:39 np0005481065 nova_compute[259955]: <domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <domain>kvm</domain>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <arch>x86_64</arch>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <vcpu max='240'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <iothreads supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <os supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='firmware'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <loader supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>rom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pflash</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='readonly'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>yes</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='secure'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>no</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </loader>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </os>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='maximumMigratable'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>on</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>off</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <vendor>AMD</vendor>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='succor'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <mode name='custom' supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Denverton-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='auto-ibrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amd-psfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='stibp-always-on'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='EPYC-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-128'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-256'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx10-512'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='prefetchiti'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Haswell-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512er'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512pf'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fma4'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tbm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xop'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='amx-tile'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-bf16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-fp16'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bitalg'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrc'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fzrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='la57'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='taa-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xfd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ifma'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cmpccxadd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fbsdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='fsrs'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ibrs-all'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mcdt-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pbrsb-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='psdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='serialize'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vaes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='hle'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='rtm'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512bw'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512cd'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512dq'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512f'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='avx512vl'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='invpcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pcid'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='pku'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='mpx'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='core-capability'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='split-lock-detect'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='cldemote'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='erms'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='gfni'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdir64b'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='movdiri'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='xsaves'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='athlon-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='core2duo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='coreduo-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='n270-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='ss'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <blockers model='phenom-v1'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnow'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <feature name='3dnowext'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </blockers>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </mode>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </cpu>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <memoryBacking supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <enum name='sourceType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>file</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>anonymous</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <value>memfd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </memoryBacking>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <disk supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='diskDevice'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>disk</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cdrom</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>floppy</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>lun</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ide</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>fdc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>sata</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </disk>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <graphics supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vnc</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egl-headless</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>dbus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </graphics>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <video supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='modelType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vga</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>cirrus</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>none</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>bochs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ramfb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </video>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hostdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='mode'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>subsystem</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='startupPolicy'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>mandatory</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>requisite</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>optional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='subsysType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pci</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>scsi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='capsType'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='pciBackend'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hostdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <rng supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtio-non-transitional</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>random</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>egd</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </rng>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <filesystem supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='driverType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>path</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>handle</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>virtiofs</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </filesystem>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <tpm supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-tis</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tpm-crb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emulator</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>external</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendVersion'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>2.0</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </tpm>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <redirdev supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='bus'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>usb</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </redirdev>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <channel supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>pty</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>unix</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </channel>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <crypto supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='type'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>qemu</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendModel'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>builtin</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </crypto>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <interface supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='backendType'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>default</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>passt</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </interface>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <panic supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='model'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>isa</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>hyperv</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </panic>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </devices>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  <features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <gic supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <genid supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <backup supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <async-teardown supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <ps2 supported='yes'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sev supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <sgx supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <hyperv supported='yes'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      <enum name='features'>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>relaxed</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vapic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>spinlocks</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vpindex</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>runtime</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>synic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>stimer</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reset</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>vendor_id</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>frequencies</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>reenlightenment</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>tlbflush</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>ipi</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>avic</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>emsr_bitmap</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:        <value>xmm_input</value>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:      </enum>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    </hyperv>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:    <launchSecurity supported='no'/>
Oct 11 04:32:39 np0005481065 nova_compute[259955]:  </features>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: </domainCapabilities>
Oct 11 04:32:39 np0005481065 nova_compute[259955]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.783 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.783 2 INFO nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Secure Boot support detected#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.786 2 INFO nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.786 2 INFO nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.801 2 DEBUG nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.844 2 INFO nova.virt.node [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Determined node identity ead2f521-4d5d-46d9-864c-1aac19134114 from /var/lib/nova/compute_id#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.885 2 WARNING nova.compute.manager [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Compute nodes ['ead2f521-4d5d-46d9-864c-1aac19134114'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.937 2 INFO nova.compute.manager [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.988 2 WARNING nova.compute.manager [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.988 2 DEBUG oslo_concurrency.lockutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.989 2 DEBUG oslo_concurrency.lockutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.989 2 DEBUG oslo_concurrency.lockutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.989 2 DEBUG nova.compute.resource_tracker [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:32:39 np0005481065 nova_compute[259955]: 2025-10-11 08:32:39.990 2 DEBUG oslo_concurrency.processutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:32:40 np0005481065 podman[260830]: 2025-10-11 08:32:40.165726579 +0000 UTC m=+0.149094637 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:32:40 np0005481065 python3.9[260832]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 04:32:40 np0005481065 systemd[1]: Stopping nova_compute container...
Oct 11 04:32:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:32:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926021458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:32:40 np0005481065 nova_compute[259955]: 2025-10-11 08:32:40.426 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:32:40 np0005481065 nova_compute[259955]: 2025-10-11 08:32:40.427 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:32:40 np0005481065 nova_compute[259955]: 2025-10-11 08:32:40.427 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:32:40 np0005481065 systemd[1]: libpod-73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d.scope: Deactivated successfully.
Oct 11 04:32:40 np0005481065 virtqemud[260524]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 04:32:40 np0005481065 virtqemud[260524]: hostname: compute-0
Oct 11 04:32:40 np0005481065 virtqemud[260524]: End of file while reading data: Input/output error
Oct 11 04:32:40 np0005481065 systemd[1]: libpod-73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d.scope: Consumed 3.616s CPU time.
Oct 11 04:32:40 np0005481065 podman[260879]: 2025-10-11 08:32:40.931929389 +0000 UTC m=+0.563698799 container died 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:32:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d-userdata-shm.mount: Deactivated successfully.
Oct 11 04:32:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949-merged.mount: Deactivated successfully.
Oct 11 04:32:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:41 np0005481065 podman[260879]: 2025-10-11 08:32:41.430208275 +0000 UTC m=+1.061977645 container cleanup 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:32:41 np0005481065 podman[260879]: nova_compute
Oct 11 04:32:41 np0005481065 podman[260907]: nova_compute
Oct 11 04:32:41 np0005481065 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 11 04:32:41 np0005481065 systemd[1]: Stopped nova_compute container.
Oct 11 04:32:41 np0005481065 systemd[1]: Starting nova_compute container...
Oct 11 04:32:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:41 np0005481065 podman[260920]: 2025-10-11 08:32:41.675410383 +0000 UTC m=+0.121397261 container init 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:32:41 np0005481065 podman[260920]: 2025-10-11 08:32:41.687198123 +0000 UTC m=+0.133184981 container start 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute)
Oct 11 04:32:41 np0005481065 podman[260920]: nova_compute
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + sudo -E kolla_set_configs
Oct 11 04:32:41 np0005481065 systemd[1]: Started nova_compute container.
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Validating config file
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying service configuration files
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /etc/ceph
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Creating directory /etc/ceph
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Writing out command to execute
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:41 np0005481065 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 04:32:41 np0005481065 nova_compute[260935]: ++ cat /run_command
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + CMD=nova-compute
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + ARGS=
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + sudo kolla_copy_cacerts
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + [[ ! -n '' ]]
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + . kolla_extend_start
Oct 11 04:32:41 np0005481065 nova_compute[260935]: Running command: 'nova-compute'
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + umask 0022
Oct 11 04:32:41 np0005481065 nova_compute[260935]: + exec nova-compute
Oct 11 04:32:42 np0005481065 python3.9[261098]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 04:32:42 np0005481065 systemd[1]: Started libpod-conmon-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6.scope.
Oct 11 04:32:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:32:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 11 04:32:42 np0005481065 podman[261124]: 2025-10-11 08:32:42.891284667 +0000 UTC m=+0.143960822 container init a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 11 04:32:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:42 np0005481065 podman[261124]: 2025-10-11 08:32:42.905273179 +0000 UTC m=+0.157949314 container start a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:32:42 np0005481065 python3.9[261098]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Applying nova statedir ownership
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 11 04:32:42 np0005481065 nova_compute_init[261146]: INFO:nova_statedir:Nova statedir ownership complete
Oct 11 04:32:42 np0005481065 systemd[1]: libpod-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6.scope: Deactivated successfully.
Oct 11 04:32:42 np0005481065 podman[261147]: 2025-10-11 08:32:42.981112763 +0000 UTC m=+0.039878978 container died a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:32:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6-userdata-shm.mount: Deactivated successfully.
Oct 11 04:32:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe-merged.mount: Deactivated successfully.
Oct 11 04:32:43 np0005481065 podman[261160]: 2025-10-11 08:32:43.059106748 +0000 UTC m=+0.072036059 container cleanup a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:32:43 np0005481065 systemd[1]: libpod-conmon-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6.scope: Deactivated successfully.
Oct 11 04:32:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:43 np0005481065 nova_compute[260935]: 2025-10-11 08:32:43.553 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 11 04:32:43 np0005481065 nova_compute[260935]: 2025-10-11 08:32:43.554 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 11 04:32:43 np0005481065 nova_compute[260935]: 2025-10-11 08:32:43.554 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 11 04:32:43 np0005481065 nova_compute[260935]: 2025-10-11 08:32:43.554 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 11 04:32:43 np0005481065 nova_compute[260935]: 2025-10-11 08:32:43.671 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:32:43 np0005481065 systemd[1]: session-51.scope: Deactivated successfully.
Oct 11 04:32:43 np0005481065 systemd[1]: session-51.scope: Consumed 3min 21.197s CPU time.
Oct 11 04:32:43 np0005481065 systemd-logind[819]: Session 51 logged out. Waiting for processes to exit.
Oct 11 04:32:43 np0005481065 systemd-logind[819]: Removed session 51.
Oct 11 04:32:43 np0005481065 nova_compute[260935]: 2025-10-11 08:32:43.693 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.232 2 INFO nova.virt.driver [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.359 2 INFO nova.compute.provider_config [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.375 2 DEBUG oslo_concurrency.lockutils [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.375 2 DEBUG oslo_concurrency.lockutils [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.375 2 DEBUG oslo_concurrency.lockutils [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.454 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.454 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.454 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.455 2 WARNING oslo_config.cfg [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 04:32:44 np0005481065 nova_compute[260935]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 04:32:44 np0005481065 nova_compute[260935]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 04:32:44 np0005481065 nova_compute[260935]: and ``live_migration_inbound_addr`` respectively.
Oct 11 04:32:44 np0005481065 nova_compute[260935]: ).  Its value may be silently ignored in the future.#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_secret_uuid        = 33219f8b-dc38-5a8f-a577-8ccc4b37190a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.460 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.460 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.461 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.461 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.461 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.462 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.462 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.462 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.463 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.463 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.465 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.465 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.465 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.466 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.466 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.466 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.467 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.467 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.467 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.468 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.468 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.468 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.469 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.469 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.469 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.470 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.470 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.470 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.472 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.472 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.472 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.473 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.473 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.473 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.475 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.475 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.475 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.476 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.476 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.476 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.478 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.478 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.478 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.479 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.479 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.479 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.480 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.480 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.480 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.481 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.481 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.481 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.482 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.482 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.482 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.483 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.483 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.483 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.484 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.484 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.485 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.485 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.486 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.486 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.487 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.487 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.488 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.488 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.489 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.489 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.490 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.490 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.491 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.492 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.492 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.493 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.493 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.494 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.494 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.495 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.495 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.496 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.496 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.497 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.497 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.498 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.498 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.499 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.500 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.500 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.501 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.501 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.502 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.502 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.503 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.504 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.504 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.505 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.505 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.506 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.506 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.507 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.508 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.508 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.509 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.509 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.510 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.510 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.511 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.512 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.512 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.512 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.513 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.514 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.514 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.514 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.515 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.515 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.515 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.516 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.516 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.516 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.518 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.518 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.518 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.521 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.521 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.521 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.523 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.523 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.523 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.526 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.526 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.526 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.527 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.527 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.527 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.529 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.529 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.529 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.532 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.532 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.532 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.535 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.535 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.535 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.538 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.538 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.538 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.539 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.539 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.539 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.540 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.540 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.540 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.541 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.541 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.541 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.542 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.542 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.543 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.543 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.543 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.544 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.544 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.544 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.545 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.545 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.545 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.546 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.546 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.546 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.572 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.572 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.572 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.629 2 INFO nova.virt.node [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Determined node identity ead2f521-4d5d-46d9-864c-1aac19134114 from /var/lib/nova/compute_id#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.630 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.631 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.631 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.632 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.649 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1e55b5ecd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.653 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1e55b5ecd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.654 2 INFO nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.665 2 INFO nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <host>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <uuid>441009ae-b831-47c3-ab1e-dc607f9feb6d</uuid>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <arch>x86_64</arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model>EPYC-Rome-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <vendor>AMD</vendor>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <microcode version='16777317'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <signature family='23' model='49' stepping='0'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='x2apic'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='tsc-deadline'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='osxsave'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='hypervisor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='tsc_adjust'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='spec-ctrl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='stibp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='arch-capabilities'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='cmp_legacy'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='topoext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='virt-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='lbrv'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='tsc-scale'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='vmcb-clean'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='pause-filter'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='pfthreshold'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='svme-addr-chk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='rdctl-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='skip-l1dfl-vmentry'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='mds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature name='pschange-mc-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <pages unit='KiB' size='4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <pages unit='KiB' size='2048'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <pages unit='KiB' size='1048576'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <power_management>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <suspend_mem/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </power_management>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <iommu support='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <migration_features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <live/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <uri_transports>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <uri_transport>tcp</uri_transport>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <uri_transport>rdma</uri_transport>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </uri_transports>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </migration_features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <topology>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <cells num='1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <cell id='0'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          <memory unit='KiB'>7864360</memory>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          <pages unit='KiB' size='4'>1966090</pages>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          <pages unit='KiB' size='2048'>0</pages>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          <distances>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <sibling id='0' value='10'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          </distances>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          <cpus num='8'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:          </cpus>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        </cell>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </cells>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </topology>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <cache>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </cache>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <secmodel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model>selinux</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <doi>0</doi>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </secmodel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <secmodel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model>dac</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <doi>0</doi>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </secmodel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </host>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <guest>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <os_type>hvm</os_type>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <arch name='i686'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <wordsize>32</wordsize>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <domain type='qemu'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <domain type='kvm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <pae/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <nonpae/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <acpi default='on' toggle='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <apic default='on' toggle='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <cpuselection/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <deviceboot/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <disksnapshot default='on' toggle='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <externalSnapshot/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </guest>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <guest>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <os_type>hvm</os_type>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <arch name='x86_64'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <wordsize>64</wordsize>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <domain type='qemu'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <domain type='kvm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <acpi default='on' toggle='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <apic default='on' toggle='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <cpuselection/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <deviceboot/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <disksnapshot default='on' toggle='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <externalSnapshot/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </guest>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 
Oct 11 04:32:44 np0005481065 nova_compute[260935]: </capabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: #033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.676 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.682 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 04:32:44 np0005481065 nova_compute[260935]: <domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <domain>kvm</domain>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <arch>i686</arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <vcpu max='240'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <iothreads supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <os supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='firmware'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <loader supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>rom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pflash</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='readonly'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>yes</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='secure'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </loader>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='maximumMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <vendor>AMD</vendor>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='succor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='custom' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-128'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-256'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-512'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <memoryBacking supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='sourceType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>file</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>anonymous</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>memfd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </memoryBacking>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <disk supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='diskDevice'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>disk</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cdrom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>floppy</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>lun</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ide</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>fdc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>sata</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <graphics supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vnc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egl-headless</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>dbus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <video supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='modelType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vga</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cirrus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>none</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>bochs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ramfb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hostdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='mode'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>subsystem</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='startupPolicy'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>mandatory</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>requisite</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>optional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='subsysType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pci</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='capsType'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='pciBackend'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hostdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <rng supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>random</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <filesystem supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='driverType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>path</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>handle</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtiofs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </filesystem>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <tpm supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-tis</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-crb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emulator</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>external</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendVersion'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>2.0</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </tpm>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <redirdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </redirdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <channel supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pty</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>unix</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </channel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <crypto supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>qemu</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </crypto>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <interface supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>passt</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <panic supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>isa</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>hyperv</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </panic>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <gic supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <genid supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backup supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <async-teardown supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <ps2 supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sev supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sgx supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hyperv supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='features'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>relaxed</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vapic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>spinlocks</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vpindex</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>runtime</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>synic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>stimer</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reset</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vendor_id</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>frequencies</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reenlightenment</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tlbflush</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ipi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>avic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emsr_bitmap</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>xmm_input</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hyperv>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <launchSecurity supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: </domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.688 2 DEBUG nova.virt.libvirt.volume.mount [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.693 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 04:32:44 np0005481065 nova_compute[260935]: <domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <domain>kvm</domain>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <arch>i686</arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <vcpu max='4096'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <iothreads supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <os supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='firmware'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <loader supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>rom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pflash</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='readonly'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>yes</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='secure'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </loader>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='maximumMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <vendor>AMD</vendor>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='succor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='custom' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-128'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-256'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-512'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <memoryBacking supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='sourceType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>file</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>anonymous</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>memfd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </memoryBacking>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <disk supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='diskDevice'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>disk</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cdrom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>floppy</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>lun</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>fdc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>sata</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <graphics supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vnc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egl-headless</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>dbus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <video supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='modelType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vga</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cirrus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>none</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>bochs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ramfb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hostdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='mode'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>subsystem</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='startupPolicy'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>mandatory</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>requisite</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>optional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='subsysType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pci</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='capsType'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='pciBackend'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hostdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <rng supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>random</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <filesystem supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='driverType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>path</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>handle</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtiofs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </filesystem>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <tpm supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-tis</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-crb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emulator</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>external</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendVersion'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>2.0</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </tpm>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <redirdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </redirdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <channel supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pty</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>unix</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </channel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <crypto supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>qemu</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </crypto>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <interface supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>passt</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <panic supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>isa</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>hyperv</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </panic>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <gic supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <genid supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backup supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <async-teardown supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <ps2 supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sev supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sgx supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hyperv supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='features'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>relaxed</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vapic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>spinlocks</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vpindex</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>runtime</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>synic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>stimer</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reset</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vendor_id</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>frequencies</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reenlightenment</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tlbflush</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ipi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>avic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emsr_bitmap</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>xmm_input</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hyperv>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <launchSecurity supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: </domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.721 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.727 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 04:32:44 np0005481065 nova_compute[260935]: <domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <domain>kvm</domain>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <arch>x86_64</arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <vcpu max='240'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <iothreads supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <os supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='firmware'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <loader supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>rom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pflash</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='readonly'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>yes</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='secure'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </loader>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='maximumMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <vendor>AMD</vendor>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='succor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='custom' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-128'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-256'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-512'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <memoryBacking supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='sourceType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>file</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>anonymous</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>memfd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </memoryBacking>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <disk supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='diskDevice'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>disk</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cdrom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>floppy</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>lun</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ide</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>fdc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>sata</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <graphics supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vnc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egl-headless</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>dbus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <video supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='modelType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vga</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cirrus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>none</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>bochs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ramfb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hostdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='mode'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>subsystem</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='startupPolicy'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>mandatory</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>requisite</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>optional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='subsysType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pci</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='capsType'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='pciBackend'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hostdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <rng supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>random</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <filesystem supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='driverType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>path</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>handle</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtiofs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </filesystem>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <tpm supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-tis</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-crb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emulator</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>external</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendVersion'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>2.0</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </tpm>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <redirdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </redirdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <channel supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pty</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>unix</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </channel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <crypto supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>qemu</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </crypto>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <interface supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>passt</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <panic supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>isa</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>hyperv</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </panic>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <gic supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <genid supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backup supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <async-teardown supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <ps2 supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sev supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sgx supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hyperv supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='features'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>relaxed</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vapic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>spinlocks</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vpindex</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>runtime</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>synic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>stimer</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reset</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vendor_id</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>frequencies</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reenlightenment</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tlbflush</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ipi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>avic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emsr_bitmap</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>xmm_input</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hyperv>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <launchSecurity supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: </domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.797 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 11 04:32:44 np0005481065 nova_compute[260935]: <domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <path>/usr/libexec/qemu-kvm</path>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <domain>kvm</domain>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <arch>x86_64</arch>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <vcpu max='4096'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <iothreads supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <os supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='firmware'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>efi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <loader supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>rom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pflash</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='readonly'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>yes</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='secure'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>yes</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>no</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </loader>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-passthrough' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='hostPassthroughMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='maximum' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='maximumMigratable'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>on</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>off</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='host-model' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <vendor>AMD</vendor>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='x2apic'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-deadline'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='hypervisor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc_adjust'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='spec-ctrl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='stibp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='arch-capabilities'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='cmp_legacy'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='overflow-recov'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='succor'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='amd-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='virt-ssbd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lbrv'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='tsc-scale'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='vmcb-clean'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='flushbyasid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pause-filter'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pfthreshold'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='svme-addr-chk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rdctl-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='mds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='gds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='require' name='rfds-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <feature policy='disable' name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <mode name='custom' supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Broadwell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cascadelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Cooperlake-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Denverton-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Dhyana-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Genoa-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='auto-ibrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Milan-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amd-psfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='no-nested-data-bp'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='null-sel-clr-base'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='stibp-always-on'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-Rome-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='EPYC-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='GraniteRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-128'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-256'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx10-512'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='prefetchiti'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Haswell-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-noTSX'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v6'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Icelake-Server-v7'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='IvyBridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='KnightsMill-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4fmaps'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-4vnniw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512er'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512pf'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G4-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Opteron_G5-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fma4'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tbm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xop'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SapphireRapids-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='amx-tile'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-bf16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-fp16'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512-vpopcntdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bitalg'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vbmi2'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrc'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fzrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='la57'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='taa-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='tsx-ldtrk'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xfd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='SierraForest-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ifma'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-ne-convert'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx-vnni-int8'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='bus-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cmpccxadd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fbsdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='fsrs'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ibrs-all'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mcdt-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pbrsb-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='psdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='sbdr-ssdp-no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='serialize'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vaes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='vpclmulqdq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Client-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='hle'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='rtm'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Skylake-Server-v5'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512bw'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512cd'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512dq'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512f'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='avx512vl'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='invpcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pcid'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='pku'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='mpx'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v2'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v3'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='core-capability'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='split-lock-detect'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='Snowridge-v4'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='cldemote'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='erms'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='gfni'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdir64b'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='movdiri'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='xsaves'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='athlon-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='core2duo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='coreduo-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='n270-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='ss'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <blockers model='phenom-v1'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnow'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <feature name='3dnowext'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </blockers>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </mode>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <memoryBacking supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <enum name='sourceType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>file</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>anonymous</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <value>memfd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </memoryBacking>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <disk supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='diskDevice'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>disk</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cdrom</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>floppy</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>lun</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>fdc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>sata</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <graphics supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vnc</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egl-headless</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>dbus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <video supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='modelType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vga</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>cirrus</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>none</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>bochs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ramfb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hostdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='mode'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>subsystem</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='startupPolicy'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>mandatory</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>requisite</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>optional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='subsysType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pci</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>scsi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='capsType'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='pciBackend'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hostdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <rng supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtio-non-transitional</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>random</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>egd</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <filesystem supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='driverType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>path</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>handle</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>virtiofs</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </filesystem>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <tpm supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-tis</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tpm-crb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emulator</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>external</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendVersion'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>2.0</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </tpm>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <redirdev supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='bus'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>usb</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </redirdev>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <channel supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>pty</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>unix</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </channel>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <crypto supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='type'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>qemu</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendModel'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>builtin</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </crypto>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <interface supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='backendType'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>default</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>passt</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <panic supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='model'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>isa</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>hyperv</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </panic>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <gic supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <vmcoreinfo supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <genid supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backingStoreInput supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <backup supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <async-teardown supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <ps2 supported='yes'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sev supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <sgx supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <hyperv supported='yes'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      <enum name='features'>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>relaxed</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vapic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>spinlocks</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vpindex</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>runtime</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>synic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>stimer</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reset</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>vendor_id</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>frequencies</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>reenlightenment</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>tlbflush</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>ipi</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>avic</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>emsr_bitmap</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:        <value>xmm_input</value>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:      </enum>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    </hyperv>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:    <launchSecurity supported='no'/>
Oct 11 04:32:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: </domainCapabilities>
Oct 11 04:32:44 np0005481065 nova_compute[260935]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.892 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.892 2 INFO nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Secure Boot support detected#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.895 2 INFO nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.903 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.926 2 INFO nova.virt.node [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Determined node identity ead2f521-4d5d-46d9-864c-1aac19134114 from /var/lib/nova/compute_id#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.945 2 WARNING nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Compute nodes ['ead2f521-4d5d-46d9-864c-1aac19134114'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.973 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.996 2 WARNING nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.996 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.997 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.997 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.998 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:32:44 np0005481065 nova_compute[260935]: 2025-10-11 08:32:44.998 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:32:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:32:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1004446298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:32:45 np0005481065 nova_compute[260935]: 2025-10-11 08:32:45.503 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:32:45 np0005481065 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 04:32:45 np0005481065 systemd[1]: Started libvirt nodedev daemon.
Oct 11 04:32:45 np0005481065 nova_compute[260935]: 2025-10-11 08:32:45.953 2 WARNING nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:32:45 np0005481065 nova_compute[260935]: 2025-10-11 08:32:45.955 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5182MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:32:45 np0005481065 nova_compute[260935]: 2025-10-11 08:32:45.956 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:32:45 np0005481065 nova_compute[260935]: 2025-10-11 08:32:45.956 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:32:46 np0005481065 nova_compute[260935]: 2025-10-11 08:32:46.008 2 WARNING nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] No compute node record for compute-0.ctlplane.example.com:ead2f521-4d5d-46d9-864c-1aac19134114: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host ead2f521-4d5d-46d9-864c-1aac19134114 could not be found.#033[00m
Oct 11 04:32:46 np0005481065 nova_compute[260935]: 2025-10-11 08:32:46.045 2 INFO nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: ead2f521-4d5d-46d9-864c-1aac19134114#033[00m
Oct 11 04:32:46 np0005481065 nova_compute[260935]: 2025-10-11 08:32:46.184 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:32:46 np0005481065 nova_compute[260935]: 2025-10-11 08:32:46.185 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:32:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:47 np0005481065 nova_compute[260935]: 2025-10-11 08:32:47.394 2 INFO nova.scheduler.client.report [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [req-7739ac00-a31a-4d51-9974-b274c4dd3293] Created resource provider record via placement API for resource provider with UUID ead2f521-4d5d-46d9-864c-1aac19134114 and name compute-0.ctlplane.example.com.#033[00m
Oct 11 04:32:47 np0005481065 nova_compute[260935]: 2025-10-11 08:32:47.782 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:32:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:32:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251336718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.231 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.238 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 11 04:32:48 np0005481065 nova_compute[260935]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.239 2 INFO nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.241 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.241 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.325 2 DEBUG nova.scheduler.client.report [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updated inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.326 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.326 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.512 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.546 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.547 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.547 2 DEBUG nova.service [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.656 2 DEBUG nova.service [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 11 04:32:48 np0005481065 nova_compute[260935]: 2025-10-11 08:32:48.657 2 DEBUG nova.servicegroup.drivers.db [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 11 04:32:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:32:54
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root']
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:32:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:32:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:55 np0005481065 podman[261302]: 2025-10-11 08:32:55.803791429 +0000 UTC m=+0.095521746 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:32:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:32:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:32:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:02 np0005481065 podman[261322]: 2025-10-11 08:33:02.806277398 +0000 UTC m=+0.105491186 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 11 04:33:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:33:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:33:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:07 np0005481065 podman[261342]: 2025-10-11 08:33:07.813945056 +0000 UTC m=+0.107510042 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:33:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:33:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463444221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:33:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:33:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463444221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2586404284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2586404284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:33:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292399546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:33:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292399546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:33:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev efea182e-33ee-4d25-bea7-d5cba7d465f3 does not exist
Oct 11 04:33:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 568535d3-bca7-493d-b1be-4764fd61c8ad does not exist
Oct 11 04:33:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 12e510ff-3d79-460d-b21e-069c7432ce3b does not exist
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:33:10 np0005481065 podman[261546]: 2025-10-11 08:33:10.510474342 +0000 UTC m=+0.212120383 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:33:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:33:10 np0005481065 podman[261661]: 2025-10-11 08:33:10.971388621 +0000 UTC m=+0.069226460 container create cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:33:11 np0005481065 systemd[1]: Started libpod-conmon-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope.
Oct 11 04:33:11 np0005481065 podman[261661]: 2025-10-11 08:33:10.942615915 +0000 UTC m=+0.040453824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:33:11 np0005481065 podman[261661]: 2025-10-11 08:33:11.082342869 +0000 UTC m=+0.180180768 container init cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:33:11 np0005481065 podman[261661]: 2025-10-11 08:33:11.100842387 +0000 UTC m=+0.198680236 container start cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:33:11 np0005481065 practical_lamarr[261678]: 167 167
Oct 11 04:33:11 np0005481065 podman[261661]: 2025-10-11 08:33:11.105372024 +0000 UTC m=+0.203209883 container attach cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:33:11 np0005481065 systemd[1]: libpod-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope: Deactivated successfully.
Oct 11 04:33:11 np0005481065 conmon[261678]: conmon cf44567fb7cecbf2507e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope/container/memory.events
Oct 11 04:33:11 np0005481065 podman[261661]: 2025-10-11 08:33:11.108060909 +0000 UTC m=+0.205898758 container died cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:33:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7a43ab7038c6f441572aafe432d8320816b96b696fb94b6fae5628be1ec44cf7-merged.mount: Deactivated successfully.
Oct 11 04:33:11 np0005481065 podman[261661]: 2025-10-11 08:33:11.153904963 +0000 UTC m=+0.251742772 container remove cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:11 np0005481065 systemd[1]: libpod-conmon-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope: Deactivated successfully.
Oct 11 04:33:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:11 np0005481065 podman[261702]: 2025-10-11 08:33:11.375313395 +0000 UTC m=+0.062010298 container create 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:33:11 np0005481065 systemd[1]: Started libpod-conmon-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope.
Oct 11 04:33:11 np0005481065 podman[261702]: 2025-10-11 08:33:11.346649312 +0000 UTC m=+0.033346285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:33:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:11 np0005481065 podman[261702]: 2025-10-11 08:33:11.4894472 +0000 UTC m=+0.176144183 container init 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:11 np0005481065 podman[261702]: 2025-10-11 08:33:11.502454615 +0000 UTC m=+0.189151548 container start 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:33:11 np0005481065 podman[261702]: 2025-10-11 08:33:11.506749395 +0000 UTC m=+0.193446388 container attach 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:33:12 np0005481065 funny_heyrovsky[261718]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:33:12 np0005481065 funny_heyrovsky[261718]: --> relative data size: 1.0
Oct 11 04:33:12 np0005481065 funny_heyrovsky[261718]: --> All data devices are unavailable
Oct 11 04:33:12 np0005481065 systemd[1]: libpod-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope: Deactivated successfully.
Oct 11 04:33:12 np0005481065 systemd[1]: libpod-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope: Consumed 1.220s CPU time.
Oct 11 04:33:12 np0005481065 podman[261702]: 2025-10-11 08:33:12.777135617 +0000 UTC m=+1.463832570 container died 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:33:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3-merged.mount: Deactivated successfully.
Oct 11 04:33:12 np0005481065 podman[261702]: 2025-10-11 08:33:12.865130952 +0000 UTC m=+1.551827885 container remove 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:12 np0005481065 systemd[1]: libpod-conmon-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope: Deactivated successfully.
Oct 11 04:33:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.743502444 +0000 UTC m=+0.058223352 container create 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:13 np0005481065 systemd[1]: Started libpod-conmon-857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92.scope.
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.715410127 +0000 UTC m=+0.030131085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.842786545 +0000 UTC m=+0.157507473 container init 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.855929633 +0000 UTC m=+0.170650551 container start 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.860413749 +0000 UTC m=+0.175134657 container attach 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:33:13 np0005481065 eloquent_ishizaka[261916]: 167 167
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.863098164 +0000 UTC m=+0.177819092 container died 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:33:13 np0005481065 systemd[1]: libpod-857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92.scope: Deactivated successfully.
Oct 11 04:33:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-762e39850d97e8c992d041f3e633f9520a5dca465fe71c916518f90c993998ca-merged.mount: Deactivated successfully.
Oct 11 04:33:13 np0005481065 podman[261900]: 2025-10-11 08:33:13.910486151 +0000 UTC m=+0.225207059 container remove 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:33:13 np0005481065 systemd[1]: libpod-conmon-857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92.scope: Deactivated successfully.
Oct 11 04:33:14 np0005481065 podman[261939]: 2025-10-11 08:33:14.16571183 +0000 UTC m=+0.068850350 container create 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:33:14 np0005481065 systemd[1]: Started libpod-conmon-72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6.scope.
Oct 11 04:33:14 np0005481065 podman[261939]: 2025-10-11 08:33:14.139766163 +0000 UTC m=+0.042904723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:33:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:14 np0005481065 podman[261939]: 2025-10-11 08:33:14.275171475 +0000 UTC m=+0.178310035 container init 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:33:14 np0005481065 podman[261939]: 2025-10-11 08:33:14.287085259 +0000 UTC m=+0.190223779 container start 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:33:14 np0005481065 podman[261939]: 2025-10-11 08:33:14.291261786 +0000 UTC m=+0.194400356 container attach 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:33:15 np0005481065 tender_raman[261956]: {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:    "0": [
Oct 11 04:33:15 np0005481065 tender_raman[261956]:        {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "devices": [
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "/dev/loop3"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            ],
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_name": "ceph_lv0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_size": "21470642176",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "name": "ceph_lv0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "tags": {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cluster_name": "ceph",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.crush_device_class": "",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.encrypted": "0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osd_id": "0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.type": "block",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.vdo": "0"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            },
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "type": "block",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "vg_name": "ceph_vg0"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:        }
Oct 11 04:33:15 np0005481065 tender_raman[261956]:    ],
Oct 11 04:33:15 np0005481065 tender_raman[261956]:    "1": [
Oct 11 04:33:15 np0005481065 tender_raman[261956]:        {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "devices": [
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "/dev/loop4"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            ],
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_name": "ceph_lv1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_size": "21470642176",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "name": "ceph_lv1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "tags": {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cluster_name": "ceph",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.crush_device_class": "",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.encrypted": "0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osd_id": "1",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.type": "block",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.vdo": "0"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            },
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "type": "block",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "vg_name": "ceph_vg1"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:        }
Oct 11 04:33:15 np0005481065 tender_raman[261956]:    ],
Oct 11 04:33:15 np0005481065 tender_raman[261956]:    "2": [
Oct 11 04:33:15 np0005481065 tender_raman[261956]:        {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "devices": [
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "/dev/loop5"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            ],
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_name": "ceph_lv2",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_size": "21470642176",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "name": "ceph_lv2",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "tags": {
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.cluster_name": "ceph",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.crush_device_class": "",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.encrypted": "0",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osd_id": "2",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.type": "block",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:                "ceph.vdo": "0"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            },
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "type": "block",
Oct 11 04:33:15 np0005481065 tender_raman[261956]:            "vg_name": "ceph_vg2"
Oct 11 04:33:15 np0005481065 tender_raman[261956]:        }
Oct 11 04:33:15 np0005481065 tender_raman[261956]:    ]
Oct 11 04:33:15 np0005481065 tender_raman[261956]: }
Oct 11 04:33:15 np0005481065 systemd[1]: libpod-72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6.scope: Deactivated successfully.
Oct 11 04:33:15 np0005481065 podman[261939]: 2025-10-11 08:33:15.062421485 +0000 UTC m=+0.965560005 container died 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:33:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477-merged.mount: Deactivated successfully.
Oct 11 04:33:15 np0005481065 podman[261939]: 2025-10-11 08:33:15.148113815 +0000 UTC m=+1.051252335 container remove 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:33:15 np0005481065 systemd[1]: libpod-conmon-72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6.scope: Deactivated successfully.
Oct 11 04:33:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:33:15.162 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:33:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:33:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:33:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:33:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:33:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.059746248 +0000 UTC m=+0.060109604 container create ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:33:16 np0005481065 systemd[1]: Started libpod-conmon-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope.
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.032985099 +0000 UTC m=+0.033348505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.161495028 +0000 UTC m=+0.161858414 container init ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.172470426 +0000 UTC m=+0.172833782 container start ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.176632062 +0000 UTC m=+0.176995458 container attach ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:33:16 np0005481065 practical_newton[262136]: 167 167
Oct 11 04:33:16 np0005481065 systemd[1]: libpod-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope: Deactivated successfully.
Oct 11 04:33:16 np0005481065 conmon[262136]: conmon ba38a7193cb0b3327b2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope/container/memory.events
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.181708704 +0000 UTC m=+0.182072100 container died ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:33:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-21658351ee08594ad6d15308f312db4161d5a65e6524eca84b1ab1bc5b39fe35-merged.mount: Deactivated successfully.
Oct 11 04:33:16 np0005481065 podman[262120]: 2025-10-11 08:33:16.233470074 +0000 UTC m=+0.233833420 container remove ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:33:16 np0005481065 systemd[1]: libpod-conmon-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope: Deactivated successfully.
Oct 11 04:33:16 np0005481065 podman[262160]: 2025-10-11 08:33:16.468837617 +0000 UTC m=+0.057865122 container create 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:33:16 np0005481065 systemd[1]: Started libpod-conmon-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope.
Oct 11 04:33:16 np0005481065 podman[262160]: 2025-10-11 08:33:16.439918087 +0000 UTC m=+0.028945612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:33:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:33:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:33:16 np0005481065 podman[262160]: 2025-10-11 08:33:16.575098993 +0000 UTC m=+0.164126518 container init 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:33:16 np0005481065 podman[262160]: 2025-10-11 08:33:16.593502038 +0000 UTC m=+0.182529543 container start 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:33:16 np0005481065 podman[262160]: 2025-10-11 08:33:16.597875811 +0000 UTC m=+0.186903336 container attach 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:33:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]: {
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "osd_id": 2,
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "type": "bluestore"
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:    },
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "osd_id": 0,
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "type": "bluestore"
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:    },
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "osd_id": 1,
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:        "type": "bluestore"
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]:    }
Oct 11 04:33:17 np0005481065 blissful_lederberg[262177]: }
Oct 11 04:33:17 np0005481065 systemd[1]: libpod-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope: Deactivated successfully.
Oct 11 04:33:17 np0005481065 systemd[1]: libpod-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope: Consumed 1.059s CPU time.
Oct 11 04:33:17 np0005481065 podman[262210]: 2025-10-11 08:33:17.718249791 +0000 UTC m=+0.043540340 container died 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:33:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f-merged.mount: Deactivated successfully.
Oct 11 04:33:17 np0005481065 podman[262210]: 2025-10-11 08:33:17.792418438 +0000 UTC m=+0.117708897 container remove 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:33:17 np0005481065 systemd[1]: libpod-conmon-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope: Deactivated successfully.
Oct 11 04:33:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:33:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:33:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:33:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:33:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 24eae480-c4d4-4477-8df7-12c1dbc1ed3f does not exist
Oct 11 04:33:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bd0b60f5-3b8a-4fbb-97c9-bec1f534c497 does not exist
Oct 11 04:33:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:33:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:33:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:26 np0005481065 podman[262275]: 2025-10-11 08:33:26.808858856 +0000 UTC m=+0.102576354 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 04:33:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:33:33 np0005481065 podman[262296]: 2025-10-11 08:33:33.800994356 +0000 UTC m=+0.093630854 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:33:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:33:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:33:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:38 np0005481065 podman[262316]: 2025-10-11 08:33:38.794281151 +0000 UTC m=+0.089379985 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:33:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:33:40 np0005481065 podman[262337]: 2025-10-11 08:33:40.831179061 +0000 UTC m=+0.123878001 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 04:33:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:33:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.659 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.746 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.748 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:33:43 np0005481065 nova_compute[260935]: 2025-10-11 08:33:43.748 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:33:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:33:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961444815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.215 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.451 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.453 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.453 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.454 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.587 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.588 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:33:44 np0005481065 nova_compute[260935]: 2025-10-11 08:33:44.606 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:33:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:33:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/585207780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.084 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.092 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.142 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.145 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.146 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.146 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.188 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.188 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.189 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.189 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.207 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.207 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.208 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.209 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.209 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 nova_compute[260935]: 2025-10-11 08:33:45.210 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:33:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 04:33:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2423047993' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 04:33:50 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 04:33:50 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:33:50 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:33:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:33:54
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'default.rgw.meta']
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:33:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:33:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:33:57 np0005481065 podman[262408]: 2025-10-11 08:33:57.814107249 +0000 UTC m=+0.104840377 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:33:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:33:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:34:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:34:04 np0005481065 podman[262427]: 2025-10-11 08:34:04.797883555 +0000 UTC m=+0.093418377 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:34:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:09 np0005481065 podman[262448]: 2025-10-11 08:34:09.786846431 +0000 UTC m=+0.088457902 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 04:34:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:11 np0005481065 podman[262468]: 2025-10-11 08:34:11.831702599 +0000 UTC m=+0.126339847 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:34:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 04:34:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1413615351' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 04:34:14 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 04:34:14 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:34:14 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 04:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:34:15.163 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:34:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:34:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:34:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:19 np0005481065 podman[262667]: 2025-10-11 08:34:19.131027557 +0000 UTC m=+0.093816714 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:34:19 np0005481065 podman[262667]: 2025-10-11 08:34:19.264355241 +0000 UTC m=+0.227144348 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:34:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:34:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 330636ef-75cf-4c46-87fa-f77ff9d27976 does not exist
Oct 11 04:34:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 18f31804-1006-4b15-b159-53fa97c74f00 does not exist
Oct 11 04:34:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ac56f209-67d3-43b7-a347-8b63ed516cfc does not exist
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:34:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:34:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.092770026 +0000 UTC m=+0.069818243 container create 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:22 np0005481065 systemd[1]: Started libpod-conmon-7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385.scope.
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.064111122 +0000 UTC m=+0.041159389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:34:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.215402616 +0000 UTC m=+0.192450833 container init 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.226127021 +0000 UTC m=+0.203175208 container start 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.229844506 +0000 UTC m=+0.206892703 container attach 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:22 np0005481065 dazzling_sutherland[263116]: 167 167
Oct 11 04:34:22 np0005481065 systemd[1]: libpod-7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385.scope: Deactivated successfully.
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.236177186 +0000 UTC m=+0.213225393 container died 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:34:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d43111cdd22f79d8f92b1408d6efce52bdb8422c3d45991be4c947f1fce3a6fb-merged.mount: Deactivated successfully.
Oct 11 04:34:22 np0005481065 podman[263100]: 2025-10-11 08:34:22.283833608 +0000 UTC m=+0.260881785 container remove 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:22 np0005481065 systemd[1]: libpod-conmon-7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385.scope: Deactivated successfully.
Oct 11 04:34:22 np0005481065 podman[263139]: 2025-10-11 08:34:22.484931606 +0000 UTC m=+0.054288432 container create 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:22 np0005481065 systemd[1]: Started libpod-conmon-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope.
Oct 11 04:34:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:34:22 np0005481065 podman[263139]: 2025-10-11 08:34:22.464432794 +0000 UTC m=+0.033789650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:22 np0005481065 podman[263139]: 2025-10-11 08:34:22.577316988 +0000 UTC m=+0.146673894 container init 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:22 np0005481065 podman[263139]: 2025-10-11 08:34:22.593125947 +0000 UTC m=+0.162482803 container start 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:22 np0005481065 podman[263139]: 2025-10-11 08:34:22.597007017 +0000 UTC m=+0.166363913 container attach 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:34:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:23 np0005481065 laughing_zhukovsky[263155]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:34:23 np0005481065 laughing_zhukovsky[263155]: --> relative data size: 1.0
Oct 11 04:34:23 np0005481065 laughing_zhukovsky[263155]: --> All data devices are unavailable
Oct 11 04:34:23 np0005481065 systemd[1]: libpod-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope: Deactivated successfully.
Oct 11 04:34:23 np0005481065 systemd[1]: libpod-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope: Consumed 1.058s CPU time.
Oct 11 04:34:23 np0005481065 podman[263184]: 2025-10-11 08:34:23.781601867 +0000 UTC m=+0.044345570 container died 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:34:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7-merged.mount: Deactivated successfully.
Oct 11 04:34:23 np0005481065 podman[263184]: 2025-10-11 08:34:23.856779521 +0000 UTC m=+0.119523174 container remove 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:34:23 np0005481065 systemd[1]: libpod-conmon-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope: Deactivated successfully.
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.762308291 +0000 UTC m=+0.065417707 container create e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:24 np0005481065 systemd[1]: Started libpod-conmon-e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea.scope.
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.736363215 +0000 UTC m=+0.039472671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.861484686 +0000 UTC m=+0.164594122 container init e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.872741886 +0000 UTC m=+0.175851302 container start e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 04:34:24 np0005481065 happy_heisenberg[263355]: 167 167
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.877442049 +0000 UTC m=+0.180551495 container attach e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:24 np0005481065 systemd[1]: libpod-e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea.scope: Deactivated successfully.
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.878209891 +0000 UTC m=+0.181319297 container died e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 04:34:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b43a95d47aadf706783ce939f857176acc3166697dc62b8d550f5db2f056530b-merged.mount: Deactivated successfully.
Oct 11 04:34:24 np0005481065 podman[263339]: 2025-10-11 08:34:24.928612601 +0000 UTC m=+0.231722007 container remove e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:34:24 np0005481065 systemd[1]: libpod-conmon-e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea.scope: Deactivated successfully.
Oct 11 04:34:25 np0005481065 podman[263380]: 2025-10-11 08:34:25.151059605 +0000 UTC m=+0.069443112 container create 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:34:25 np0005481065 systemd[1]: Started libpod-conmon-6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601.scope.
Oct 11 04:34:25 np0005481065 podman[263380]: 2025-10-11 08:34:25.125274853 +0000 UTC m=+0.043658410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:34:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:25 np0005481065 podman[263380]: 2025-10-11 08:34:25.256618471 +0000 UTC m=+0.175002018 container init 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:34:25 np0005481065 podman[263380]: 2025-10-11 08:34:25.275367283 +0000 UTC m=+0.193750790 container start 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:25 np0005481065 podman[263380]: 2025-10-11 08:34:25.281001533 +0000 UTC m=+0.199385110 container attach 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]: {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:    "0": [
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:        {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "devices": [
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "/dev/loop3"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            ],
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_name": "ceph_lv0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_size": "21470642176",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "name": "ceph_lv0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "tags": {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cluster_name": "ceph",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.crush_device_class": "",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.encrypted": "0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osd_id": "0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.type": "block",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.vdo": "0"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            },
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "type": "block",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "vg_name": "ceph_vg0"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:        }
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:    ],
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:    "1": [
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:        {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "devices": [
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "/dev/loop4"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            ],
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_name": "ceph_lv1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_size": "21470642176",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "name": "ceph_lv1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "tags": {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cluster_name": "ceph",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.crush_device_class": "",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.encrypted": "0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osd_id": "1",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.type": "block",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.vdo": "0"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            },
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "type": "block",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "vg_name": "ceph_vg1"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:        }
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:    ],
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:    "2": [
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:        {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "devices": [
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "/dev/loop5"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            ],
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_name": "ceph_lv2",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_size": "21470642176",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "name": "ceph_lv2",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "tags": {
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.cluster_name": "ceph",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.crush_device_class": "",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.encrypted": "0",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osd_id": "2",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.type": "block",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:                "ceph.vdo": "0"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            },
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "type": "block",
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:            "vg_name": "ceph_vg2"
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:        }
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]:    ]
Oct 11 04:34:26 np0005481065 awesome_brahmagupta[263397]: }
Oct 11 04:34:26 np0005481065 systemd[1]: libpod-6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601.scope: Deactivated successfully.
Oct 11 04:34:26 np0005481065 podman[263380]: 2025-10-11 08:34:26.046555321 +0000 UTC m=+0.964938828 container died 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:34:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771-merged.mount: Deactivated successfully.
Oct 11 04:34:26 np0005481065 podman[263380]: 2025-10-11 08:34:26.117803993 +0000 UTC m=+1.036187480 container remove 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:34:26 np0005481065 systemd[1]: libpod-conmon-6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601.scope: Deactivated successfully.
Oct 11 04:34:26 np0005481065 podman[263561]: 2025-10-11 08:34:26.941521021 +0000 UTC m=+0.063735490 container create 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 04:34:26 np0005481065 systemd[1]: Started libpod-conmon-1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f.scope.
Oct 11 04:34:27 np0005481065 podman[263561]: 2025-10-11 08:34:26.916390147 +0000 UTC m=+0.038604656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:34:27 np0005481065 podman[263561]: 2025-10-11 08:34:27.046338675 +0000 UTC m=+0.168553174 container init 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:34:27 np0005481065 podman[263561]: 2025-10-11 08:34:27.057326697 +0000 UTC m=+0.179541156 container start 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:34:27 np0005481065 pedantic_williamson[263578]: 167 167
Oct 11 04:34:27 np0005481065 podman[263561]: 2025-10-11 08:34:27.061857536 +0000 UTC m=+0.184072055 container attach 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:34:27 np0005481065 systemd[1]: libpod-1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f.scope: Deactivated successfully.
Oct 11 04:34:27 np0005481065 podman[263561]: 2025-10-11 08:34:27.063317057 +0000 UTC m=+0.185531526 container died 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:34:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-33467f67ae425702c454a28bbdc4ace11d198d3f350c865f0c8d758cdec93162-merged.mount: Deactivated successfully.
Oct 11 04:34:27 np0005481065 podman[263561]: 2025-10-11 08:34:27.115271192 +0000 UTC m=+0.237485661 container remove 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:34:27 np0005481065 systemd[1]: libpod-conmon-1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f.scope: Deactivated successfully.
Oct 11 04:34:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:27 np0005481065 podman[263602]: 2025-10-11 08:34:27.356250561 +0000 UTC m=+0.062697500 container create 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:34:27 np0005481065 systemd[1]: Started libpod-conmon-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope.
Oct 11 04:34:27 np0005481065 podman[263602]: 2025-10-11 08:34:27.32978298 +0000 UTC m=+0.036229959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:34:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:34:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:34:27 np0005481065 podman[263602]: 2025-10-11 08:34:27.454259663 +0000 UTC m=+0.160706652 container init 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:34:27 np0005481065 podman[263602]: 2025-10-11 08:34:27.465749589 +0000 UTC m=+0.172196538 container start 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:27 np0005481065 podman[263602]: 2025-10-11 08:34:27.470382131 +0000 UTC m=+0.176829060 container attach 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 04:34:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]: {
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "osd_id": 2,
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "type": "bluestore"
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:    },
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "osd_id": 0,
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "type": "bluestore"
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:    },
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "osd_id": 1,
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:        "type": "bluestore"
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]:    }
Oct 11 04:34:28 np0005481065 thirsty_turing[263618]: }
Oct 11 04:34:28 np0005481065 systemd[1]: libpod-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope: Deactivated successfully.
Oct 11 04:34:28 np0005481065 systemd[1]: libpod-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope: Consumed 1.097s CPU time.
Oct 11 04:34:28 np0005481065 podman[263602]: 2025-10-11 08:34:28.553378078 +0000 UTC m=+1.259825057 container died 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:34:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196-merged.mount: Deactivated successfully.
Oct 11 04:34:28 np0005481065 podman[263602]: 2025-10-11 08:34:28.62706753 +0000 UTC m=+1.333514439 container remove 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:34:28 np0005481065 systemd[1]: libpod-conmon-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope: Deactivated successfully.
Oct 11 04:34:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:34:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:34:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 83d83d10-777c-47c3-bfdc-cf34beff0b3f does not exist
Oct 11 04:34:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3809be50-8880-4bf2-a159-b2f07562729f does not exist
Oct 11 04:34:28 np0005481065 podman[263652]: 2025-10-11 08:34:28.708922323 +0000 UTC m=+0.114023377 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 11 04:34:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:34:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:35 np0005481065 podman[263730]: 2025-10-11 08:34:35.791189849 +0000 UTC m=+0.089792029 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:34:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:34:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935225412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:34:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:34:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935225412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:34:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:40 np0005481065 podman[263750]: 2025-10-11 08:34:40.801979494 +0000 UTC m=+0.106027661 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:34:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:42 np0005481065 podman[263771]: 2025-10-11 08:34:42.893666169 +0000 UTC m=+0.191103955 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 04:34:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:43 np0005481065 nova_compute[260935]: 2025-10-11 08:34:43.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:44 np0005481065 nova_compute[260935]: 2025-10-11 08:34:44.198 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:44 np0005481065 nova_compute[260935]: 2025-10-11 08:34:44.198 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:44 np0005481065 nova_compute[260935]: 2025-10-11 08:34:44.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:44 np0005481065 nova_compute[260935]: 2025-10-11 08:34:44.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:34:44 np0005481065 nova_compute[260935]: 2025-10-11 08:34:44.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.113 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.114 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:34:45 np0005481065 nova_compute[260935]: 2025-10-11 08:34:45.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.041 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.042 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.042 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.043 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.043 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:34:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:34:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611365352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.497 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.791 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.793 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:34:46 np0005481065 nova_compute[260935]: 2025-10-11 08:34:46.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:34:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:47 np0005481065 nova_compute[260935]: 2025-10-11 08:34:47.711 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:34:47 np0005481065 nova_compute[260935]: 2025-10-11 08:34:47.712 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:34:47 np0005481065 nova_compute[260935]: 2025-10-11 08:34:47.744 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:34:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:34:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300747106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:34:48 np0005481065 nova_compute[260935]: 2025-10-11 08:34:48.259 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:34:48 np0005481065 nova_compute[260935]: 2025-10-11 08:34:48.264 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:34:48 np0005481065 nova_compute[260935]: 2025-10-11 08:34:48.302 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:34:48 np0005481065 nova_compute[260935]: 2025-10-11 08:34:48.304 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:34:48 np0005481065 nova_compute[260935]: 2025-10-11 08:34:48.304 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:34:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:34:53.467 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:34:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:34:53.469 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:34:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:34:53.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:34:54
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'backups', 'images', 'vms', 'cephfs.cephfs.meta']
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:34:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:34:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:34:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:34:59 np0005481065 podman[263841]: 2025-10-11 08:34:59.786628731 +0000 UTC m=+0.094177734 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:35:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:35:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:35:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:06 np0005481065 podman[263860]: 2025-10-11 08:35:06.800672452 +0000 UTC m=+0.092229039 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 04:35:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:11 np0005481065 podman[263880]: 2025-10-11 08:35:11.836881708 +0000 UTC m=+0.129171908 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:35:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:13 np0005481065 podman[263900]: 2025-10-11 08:35:13.786660985 +0000 UTC m=+0.093882015 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:35:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:35:15.165 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:35:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:35:15.165 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:35:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:35:15.165 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:35:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:35:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fffc74b7-0452-4a04-862d-1debf0509b7b does not exist
Oct 11 04:35:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e35dca71-c121-4274-9461-2dd1c6dea592 does not exist
Oct 11 04:35:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ea556447-c788-428f-ae22-cb6dcb54cb7d does not exist
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:35:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:35:30 np0005481065 podman[264081]: 2025-10-11 08:35:30.247929274 +0000 UTC m=+0.088428581 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 04:35:30 np0005481065 podman[264218]: 2025-10-11 08:35:30.959583062 +0000 UTC m=+0.069879835 container create 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:35:31 np0005481065 systemd[1]: Started libpod-conmon-67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed.scope.
Oct 11 04:35:31 np0005481065 podman[264218]: 2025-10-11 08:35:30.930748273 +0000 UTC m=+0.041045106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:35:31 np0005481065 podman[264218]: 2025-10-11 08:35:31.061938647 +0000 UTC m=+0.172235490 container init 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:35:31 np0005481065 podman[264218]: 2025-10-11 08:35:31.074792962 +0000 UTC m=+0.185089715 container start 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:35:31 np0005481065 podman[264218]: 2025-10-11 08:35:31.078624619 +0000 UTC m=+0.188921412 container attach 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:31 np0005481065 fervent_wozniak[264234]: 167 167
Oct 11 04:35:31 np0005481065 systemd[1]: libpod-67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed.scope: Deactivated successfully.
Oct 11 04:35:31 np0005481065 podman[264218]: 2025-10-11 08:35:31.085160605 +0000 UTC m=+0.195457388 container died 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:35:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0ea48686b1174e2fa93db28a0ac99c09b82874fdcad7b9b7dc62331b61320d80-merged.mount: Deactivated successfully.
Oct 11 04:35:31 np0005481065 podman[264218]: 2025-10-11 08:35:31.13397507 +0000 UTC m=+0.244271823 container remove 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:35:31 np0005481065 systemd[1]: libpod-conmon-67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed.scope: Deactivated successfully.
Oct 11 04:35:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:31 np0005481065 podman[264257]: 2025-10-11 08:35:31.405006333 +0000 UTC m=+0.074722532 container create f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 11 04:35:31 np0005481065 systemd[1]: Started libpod-conmon-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope.
Oct 11 04:35:31 np0005481065 podman[264257]: 2025-10-11 08:35:31.377895503 +0000 UTC m=+0.047611782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:35:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:31 np0005481065 podman[264257]: 2025-10-11 08:35:31.518243907 +0000 UTC m=+0.187960146 container init f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:35:31 np0005481065 podman[264257]: 2025-10-11 08:35:31.53105219 +0000 UTC m=+0.200768409 container start f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:31 np0005481065 podman[264257]: 2025-10-11 08:35:31.535924398 +0000 UTC m=+0.205640677 container attach f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:35:32 np0005481065 vigilant_feynman[264273]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:35:32 np0005481065 vigilant_feynman[264273]: --> relative data size: 1.0
Oct 11 04:35:32 np0005481065 vigilant_feynman[264273]: --> All data devices are unavailable
Oct 11 04:35:32 np0005481065 systemd[1]: libpod-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope: Deactivated successfully.
Oct 11 04:35:32 np0005481065 systemd[1]: libpod-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope: Consumed 1.157s CPU time.
Oct 11 04:35:32 np0005481065 podman[264257]: 2025-10-11 08:35:32.729990688 +0000 UTC m=+1.399706917 container died f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:35:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292-merged.mount: Deactivated successfully.
Oct 11 04:35:32 np0005481065 podman[264257]: 2025-10-11 08:35:32.815264729 +0000 UTC m=+1.484980968 container remove f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:35:32 np0005481065 systemd[1]: libpod-conmon-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope: Deactivated successfully.
Oct 11 04:35:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.78352797 +0000 UTC m=+0.070721928 container create 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:35:33 np0005481065 systemd[1]: Started libpod-conmon-79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2.scope.
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.753105026 +0000 UTC m=+0.040298994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.884937828 +0000 UTC m=+0.172131836 container init 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.896255219 +0000 UTC m=+0.183449167 container start 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.899973755 +0000 UTC m=+0.187167763 container attach 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:35:33 np0005481065 modest_germain[264473]: 167 167
Oct 11 04:35:33 np0005481065 systemd[1]: libpod-79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2.scope: Deactivated successfully.
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.901514858 +0000 UTC m=+0.188708816 container died 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:35:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7c132b017127d026a474b19a60c6e08842f175cff5623304d3b0a0a4246a4008-merged.mount: Deactivated successfully.
Oct 11 04:35:33 np0005481065 podman[264457]: 2025-10-11 08:35:33.941424351 +0000 UTC m=+0.228618269 container remove 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:33 np0005481065 systemd[1]: libpod-conmon-79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2.scope: Deactivated successfully.
Oct 11 04:35:34 np0005481065 podman[264497]: 2025-10-11 08:35:34.180591839 +0000 UTC m=+0.073168367 container create 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:35:34 np0005481065 systemd[1]: Started libpod-conmon-2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309.scope.
Oct 11 04:35:34 np0005481065 podman[264497]: 2025-10-11 08:35:34.151281297 +0000 UTC m=+0.043857885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:35:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:34 np0005481065 podman[264497]: 2025-10-11 08:35:34.283805629 +0000 UTC m=+0.176382157 container init 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 11 04:35:34 np0005481065 podman[264497]: 2025-10-11 08:35:34.296620902 +0000 UTC m=+0.189197430 container start 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:34 np0005481065 podman[264497]: 2025-10-11 08:35:34.301504011 +0000 UTC m=+0.194080539 container attach 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]: {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:    "0": [
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:        {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "devices": [
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "/dev/loop3"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            ],
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_name": "ceph_lv0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_size": "21470642176",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "name": "ceph_lv0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "tags": {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cluster_name": "ceph",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.crush_device_class": "",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.encrypted": "0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osd_id": "0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.type": "block",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.vdo": "0"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            },
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "type": "block",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "vg_name": "ceph_vg0"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:        }
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:    ],
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:    "1": [
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:        {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "devices": [
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "/dev/loop4"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            ],
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_name": "ceph_lv1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_size": "21470642176",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "name": "ceph_lv1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "tags": {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cluster_name": "ceph",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.crush_device_class": "",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.encrypted": "0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osd_id": "1",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.type": "block",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.vdo": "0"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            },
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "type": "block",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "vg_name": "ceph_vg1"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:        }
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:    ],
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:    "2": [
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:        {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "devices": [
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "/dev/loop5"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            ],
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_name": "ceph_lv2",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_size": "21470642176",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "name": "ceph_lv2",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "tags": {
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.cluster_name": "ceph",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.crush_device_class": "",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.encrypted": "0",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osd_id": "2",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.type": "block",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:                "ceph.vdo": "0"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            },
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "type": "block",
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:            "vg_name": "ceph_vg2"
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:        }
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]:    ]
Oct 11 04:35:35 np0005481065 pensive_swirles[264513]: }
Oct 11 04:35:35 np0005481065 systemd[1]: libpod-2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309.scope: Deactivated successfully.
Oct 11 04:35:35 np0005481065 podman[264497]: 2025-10-11 08:35:35.048067859 +0000 UTC m=+0.940644367 container died 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:35:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a-merged.mount: Deactivated successfully.
Oct 11 04:35:35 np0005481065 podman[264497]: 2025-10-11 08:35:35.131086315 +0000 UTC m=+1.023662823 container remove 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:35:35 np0005481065 systemd[1]: libpod-conmon-2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309.scope: Deactivated successfully.
Oct 11 04:35:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.049964175 +0000 UTC m=+0.067402534 container create 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:35:36 np0005481065 systemd[1]: Started libpod-conmon-89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66.scope.
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.022983569 +0000 UTC m=+0.040421918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.151106835 +0000 UTC m=+0.168545204 container init 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.158016671 +0000 UTC m=+0.175454990 container start 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.16147475 +0000 UTC m=+0.178913089 container attach 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:35:36 np0005481065 stupefied_neumann[264696]: 167 167
Oct 11 04:35:36 np0005481065 systemd[1]: libpod-89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66.scope: Deactivated successfully.
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.164201057 +0000 UTC m=+0.181639376 container died 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:35:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e7587e281152edd9dfbdc177084392c555c11516e770b68be07e32ce887cbf99-merged.mount: Deactivated successfully.
Oct 11 04:35:36 np0005481065 podman[264680]: 2025-10-11 08:35:36.195693481 +0000 UTC m=+0.213131800 container remove 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:35:36 np0005481065 systemd[1]: libpod-conmon-89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66.scope: Deactivated successfully.
Oct 11 04:35:36 np0005481065 podman[264718]: 2025-10-11 08:35:36.443181975 +0000 UTC m=+0.078876100 container create 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:35:36 np0005481065 systemd[1]: Started libpod-conmon-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope.
Oct 11 04:35:36 np0005481065 podman[264718]: 2025-10-11 08:35:36.412616087 +0000 UTC m=+0.048310252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:35:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:35:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:35:36 np0005481065 podman[264718]: 2025-10-11 08:35:36.571719473 +0000 UTC m=+0.207413608 container init 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:35:36 np0005481065 podman[264718]: 2025-10-11 08:35:36.585963557 +0000 UTC m=+0.221657682 container start 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:35:36 np0005481065 podman[264718]: 2025-10-11 08:35:36.589956201 +0000 UTC m=+0.225650326 container attach 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:35:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514659896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514659896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]: {
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "osd_id": 2,
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "type": "bluestore"
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:    },
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "osd_id": 0,
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "type": "bluestore"
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:    },
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "osd_id": 1,
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:        "type": "bluestore"
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]:    }
Oct 11 04:35:37 np0005481065 strange_visvesvaraya[264735]: }
Oct 11 04:35:37 np0005481065 systemd[1]: libpod-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope: Deactivated successfully.
Oct 11 04:35:37 np0005481065 systemd[1]: libpod-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope: Consumed 1.119s CPU time.
Oct 11 04:35:37 np0005481065 podman[264774]: 2025-10-11 08:35:37.769796947 +0000 UTC m=+0.044572386 container died 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:35:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4-merged.mount: Deactivated successfully.
Oct 11 04:35:37 np0005481065 podman[264768]: 2025-10-11 08:35:37.826966269 +0000 UTC m=+0.121396396 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:35:37 np0005481065 podman[264774]: 2025-10-11 08:35:37.851935308 +0000 UTC m=+0.126710687 container remove 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:35:37 np0005481065 systemd[1]: libpod-conmon-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope: Deactivated successfully.
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:35:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 89252cea-c438-4022-9bb3-52fbb141fd1b does not exist
Oct 11 04:35:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c0802e6f-7844-4614-b044-0a501a87ade3 does not exist
Oct 11 04:35:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:35:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:35:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:42 np0005481065 podman[264850]: 2025-10-11 08:35:42.821393509 +0000 UTC m=+0.112977207 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 04:35:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:44 np0005481065 podman[264870]: 2025-10-11 08:35:44.803466095 +0000 UTC m=+0.100256117 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:35:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.304 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.305 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.305 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.336 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.337 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.337 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.338 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.338 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.338 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.339 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.372 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.373 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.374 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.374 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.375 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:35:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:35:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289925496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:35:47 np0005481065 nova_compute[260935]: 2025-10-11 08:35:47.833 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:35:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.133 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.135 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.135 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.136 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.392 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.392 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.409 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:35:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:35:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034930190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.890 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.899 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.921 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.924 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:35:48 np0005481065 nova_compute[260935]: 2025-10-11 08:35:48.925 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:35:49 np0005481065 nova_compute[260935]: 2025-10-11 08:35:49.290 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:49 np0005481065 nova_compute[260935]: 2025-10-11 08:35:49.290 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:49 np0005481065 nova_compute[260935]: 2025-10-11 08:35:49.291 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:35:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:35:54
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta']
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:35:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:35:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:35:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:35:59 np0005481065 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 04:35:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:00 np0005481065 podman[264940]: 2025-10-11 08:36:00.783479634 +0000 UTC m=+0.078725696 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 04:36:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:36:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:36:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:08 np0005481065 podman[264959]: 2025-10-11 08:36:08.799510672 +0000 UTC m=+0.093199986 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct 11 04:36:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:13 np0005481065 podman[264980]: 2025-10-11 08:36:13.79696378 +0000 UTC m=+0.096203851 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:36:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:36:15.166 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:36:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:36:15.166 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:36:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:36:15.166 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:36:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:15 np0005481065 podman[265001]: 2025-10-11 08:36:15.822661162 +0000 UTC m=+0.123722162 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:36:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.543689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780543733, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3436538, "memory_usage": 3487368, "flush_reason": "Manual Compaction"}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780576181, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3371628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16337, "largest_seqno": 18387, "table_properties": {"data_size": 3362309, "index_size": 5877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18477, "raw_average_key_size": 19, "raw_value_size": 3343780, "raw_average_value_size": 3591, "num_data_blocks": 266, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171551, "oldest_key_time": 1760171551, "file_creation_time": 1760171780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 32562 microseconds, and 14273 cpu microseconds.
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.576250) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3371628 bytes OK
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.576278) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.578439) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.578465) EVENT_LOG_v1 {"time_micros": 1760171780578457, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.578491) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3427954, prev total WAL file size 3427954, number of live WAL files 2.
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.580034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3292KB)], [38(7533KB)]
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780580078, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11085835, "oldest_snapshot_seqno": -1}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4408 keys, 9331196 bytes, temperature: kUnknown
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780643860, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9331196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9298053, "index_size": 21001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106535, "raw_average_key_size": 24, "raw_value_size": 9214667, "raw_average_value_size": 2090, "num_data_blocks": 891, "num_entries": 4408, "num_filter_entries": 4408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.644226) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9331196 bytes
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.645770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.4 rd, 146.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4922, records dropped: 514 output_compression: NoCompression
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.645796) EVENT_LOG_v1 {"time_micros": 1760171780645785, "job": 18, "event": "compaction_finished", "compaction_time_micros": 63922, "compaction_time_cpu_micros": 35770, "output_level": 6, "num_output_files": 1, "total_output_size": 9331196, "num_input_records": 4922, "num_output_records": 4408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780647113, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780649861, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.579978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.976434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171787976475, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 304, "num_deletes": 250, "total_data_size": 123412, "memory_usage": 130304, "flush_reason": "Manual Compaction"}
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171787979460, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 122438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18388, "largest_seqno": 18691, "table_properties": {"data_size": 120426, "index_size": 240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5290, "raw_average_key_size": 19, "raw_value_size": 116523, "raw_average_value_size": 423, "num_data_blocks": 11, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171781, "oldest_key_time": 1760171781, "file_creation_time": 1760171787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3076 microseconds, and 1166 cpu microseconds.
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.979511) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 122438 bytes OK
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.979533) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.980887) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.980910) EVENT_LOG_v1 {"time_micros": 1760171787980903, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.980928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 121226, prev total WAL file size 121226, number of live WAL files 2.
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.981369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(119KB)], [41(9112KB)]
Oct 11 04:36:27 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171787981415, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9453634, "oldest_snapshot_seqno": -1}
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4176 keys, 6165904 bytes, temperature: kUnknown
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171788038053, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6165904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6138897, "index_size": 15442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 102140, "raw_average_key_size": 24, "raw_value_size": 6064097, "raw_average_value_size": 1452, "num_data_blocks": 650, "num_entries": 4176, "num_filter_entries": 4176, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.038507) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6165904 bytes
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.040166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 108.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(127.6) write-amplify(50.4) OK, records in: 4683, records dropped: 507 output_compression: NoCompression
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.040202) EVENT_LOG_v1 {"time_micros": 1760171788040186, "job": 20, "event": "compaction_finished", "compaction_time_micros": 56776, "compaction_time_cpu_micros": 35306, "output_level": 6, "num_output_files": 1, "total_output_size": 6165904, "num_input_records": 4683, "num_output_records": 4176, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171788040580, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171788044090, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.981289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:36:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:31 np0005481065 podman[265028]: 2025-10-11 08:36:31.782620562 +0000 UTC m=+0.081517666 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:36:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:36:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902930428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:36:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:36:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902930428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:36:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:36:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c7ad0462-d154-4ca4-8e7e-4742c7c341c1 does not exist
Oct 11 04:36:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bf1410b3-87d4-48f9-841d-91034fa9b574 does not exist
Oct 11 04:36:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b345118d-93a1-4839-9a10-7986eae63879 does not exist
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:36:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:36:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:39 np0005481065 podman[265203]: 2025-10-11 08:36:39.414109492 +0000 UTC m=+0.085667165 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:40.026022306 +0000 UTC m=+0.073784616 container create 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:36:40 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:36:40 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:36:40 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:39.99251011 +0000 UTC m=+0.040272500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:40 np0005481065 systemd[1]: Started libpod-conmon-4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad.scope.
Oct 11 04:36:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:40.173030349 +0000 UTC m=+0.220792739 container init 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:40.186317298 +0000 UTC m=+0.234079628 container start 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:36:40 np0005481065 focused_thompson[265356]: 167 167
Oct 11 04:36:40 np0005481065 systemd[1]: libpod-4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad.scope: Deactivated successfully.
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:40.196970792 +0000 UTC m=+0.244733132 container attach 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:40.197385384 +0000 UTC m=+0.245147724 container died 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:36:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9e25a2b6a4b085b8cc78cd3c158309d57abb8f4157a50af499525eaf3fc61290-merged.mount: Deactivated successfully.
Oct 11 04:36:40 np0005481065 podman[265340]: 2025-10-11 08:36:40.401429774 +0000 UTC m=+0.449192114 container remove 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:40 np0005481065 systemd[1]: libpod-conmon-4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad.scope: Deactivated successfully.
Oct 11 04:36:40 np0005481065 podman[265380]: 2025-10-11 08:36:40.595352706 +0000 UTC m=+0.055027781 container create 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:36:40 np0005481065 systemd[1]: Started libpod-conmon-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope.
Oct 11 04:36:40 np0005481065 podman[265380]: 2025-10-11 08:36:40.570708923 +0000 UTC m=+0.030384068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:36:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:40 np0005481065 podman[265380]: 2025-10-11 08:36:40.710277794 +0000 UTC m=+0.169952919 container init 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:36:40 np0005481065 podman[265380]: 2025-10-11 08:36:40.72450756 +0000 UTC m=+0.184182665 container start 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:36:40 np0005481065 podman[265380]: 2025-10-11 08:36:40.729743049 +0000 UTC m=+0.189418214 container attach 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:36:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:41 np0005481065 friendly_shaw[265397]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:36:41 np0005481065 friendly_shaw[265397]: --> relative data size: 1.0
Oct 11 04:36:41 np0005481065 friendly_shaw[265397]: --> All data devices are unavailable
Oct 11 04:36:41 np0005481065 systemd[1]: libpod-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope: Deactivated successfully.
Oct 11 04:36:41 np0005481065 podman[265380]: 2025-10-11 08:36:41.91754285 +0000 UTC m=+1.377217925 container died 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:41 np0005481065 systemd[1]: libpod-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope: Consumed 1.151s CPU time.
Oct 11 04:36:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879-merged.mount: Deactivated successfully.
Oct 11 04:36:41 np0005481065 podman[265380]: 2025-10-11 08:36:41.988229746 +0000 UTC m=+1.447904851 container remove 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:36:41 np0005481065 systemd[1]: libpod-conmon-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope: Deactivated successfully.
Oct 11 04:36:42 np0005481065 podman[265581]: 2025-10-11 08:36:42.869522493 +0000 UTC m=+0.062947056 container create 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:36:42 np0005481065 systemd[1]: Started libpod-conmon-3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f.scope.
Oct 11 04:36:42 np0005481065 podman[265581]: 2025-10-11 08:36:42.844091558 +0000 UTC m=+0.037516171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:36:42 np0005481065 podman[265581]: 2025-10-11 08:36:42.962909167 +0000 UTC m=+0.156333760 container init 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:36:42 np0005481065 podman[265581]: 2025-10-11 08:36:42.973369695 +0000 UTC m=+0.166794258 container start 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:36:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:42 np0005481065 podman[265581]: 2025-10-11 08:36:42.978133271 +0000 UTC m=+0.171557824 container attach 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:36:42 np0005481065 goofy_lamarr[265598]: 167 167
Oct 11 04:36:42 np0005481065 systemd[1]: libpod-3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f.scope: Deactivated successfully.
Oct 11 04:36:42 np0005481065 podman[265581]: 2025-10-11 08:36:42.981130117 +0000 UTC m=+0.174554670 container died 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 04:36:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3ac6697690962e6586e570d32edbbedff0fc9e79413bbdbbeffb609e0613b91f-merged.mount: Deactivated successfully.
Oct 11 04:36:43 np0005481065 podman[265581]: 2025-10-11 08:36:43.035209429 +0000 UTC m=+0.228633962 container remove 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:36:43 np0005481065 systemd[1]: libpod-conmon-3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f.scope: Deactivated successfully.
Oct 11 04:36:43 np0005481065 podman[265623]: 2025-10-11 08:36:43.280181347 +0000 UTC m=+0.068003331 container create 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:36:43 np0005481065 systemd[1]: Started libpod-conmon-33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7.scope.
Oct 11 04:36:43 np0005481065 podman[265623]: 2025-10-11 08:36:43.251947342 +0000 UTC m=+0.039769376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:36:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:43 np0005481065 podman[265623]: 2025-10-11 08:36:43.387938911 +0000 UTC m=+0.175760945 container init 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:36:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:43 np0005481065 podman[265623]: 2025-10-11 08:36:43.402805115 +0000 UTC m=+0.190627069 container start 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:36:43 np0005481065 podman[265623]: 2025-10-11 08:36:43.406789698 +0000 UTC m=+0.194611722 container attach 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]: {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:    "0": [
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:        {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "devices": [
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "/dev/loop3"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            ],
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_name": "ceph_lv0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_size": "21470642176",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "name": "ceph_lv0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "tags": {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cluster_name": "ceph",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.crush_device_class": "",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.encrypted": "0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osd_id": "0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.type": "block",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.vdo": "0"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            },
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "type": "block",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "vg_name": "ceph_vg0"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:        }
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:    ],
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:    "1": [
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:        {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "devices": [
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "/dev/loop4"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            ],
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_name": "ceph_lv1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_size": "21470642176",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "name": "ceph_lv1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "tags": {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cluster_name": "ceph",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.crush_device_class": "",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.encrypted": "0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osd_id": "1",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.type": "block",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.vdo": "0"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            },
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "type": "block",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "vg_name": "ceph_vg1"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:        }
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:    ],
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:    "2": [
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:        {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "devices": [
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "/dev/loop5"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            ],
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_name": "ceph_lv2",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_size": "21470642176",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "name": "ceph_lv2",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "tags": {
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.cluster_name": "ceph",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.crush_device_class": "",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.encrypted": "0",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osd_id": "2",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.type": "block",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:                "ceph.vdo": "0"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            },
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "type": "block",
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:            "vg_name": "ceph_vg2"
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:        }
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]:    ]
Oct 11 04:36:44 np0005481065 romantic_chandrasekhar[265639]: }
Oct 11 04:36:44 np0005481065 systemd[1]: libpod-33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7.scope: Deactivated successfully.
Oct 11 04:36:44 np0005481065 podman[265623]: 2025-10-11 08:36:44.216787993 +0000 UTC m=+1.004609977 container died 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:36:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623-merged.mount: Deactivated successfully.
Oct 11 04:36:44 np0005481065 podman[265623]: 2025-10-11 08:36:44.29415856 +0000 UTC m=+1.081980504 container remove 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:36:44 np0005481065 systemd[1]: libpod-conmon-33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7.scope: Deactivated successfully.
Oct 11 04:36:44 np0005481065 podman[265649]: 2025-10-11 08:36:44.354763449 +0000 UTC m=+0.090806052 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.095661062 +0000 UTC m=+0.050192083 container create 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:36:45 np0005481065 systemd[1]: Started libpod-conmon-9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc.scope.
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.076643329 +0000 UTC m=+0.031174330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.211306451 +0000 UTC m=+0.165837482 container init 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.22284572 +0000 UTC m=+0.177376751 container start 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.226933366 +0000 UTC m=+0.181464437 container attach 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:36:45 np0005481065 jovial_almeida[265836]: 167 167
Oct 11 04:36:45 np0005481065 systemd[1]: libpod-9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc.scope: Deactivated successfully.
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.230687073 +0000 UTC m=+0.185218094 container died 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:36:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f9a037c77becfd965b59936e1d069ef5d944b854875a0c27daaa1aa73da336af-merged.mount: Deactivated successfully.
Oct 11 04:36:45 np0005481065 podman[265820]: 2025-10-11 08:36:45.283455519 +0000 UTC m=+0.237986540 container remove 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:36:45 np0005481065 systemd[1]: libpod-conmon-9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc.scope: Deactivated successfully.
Oct 11 04:36:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:45 np0005481065 podman[265860]: 2025-10-11 08:36:45.521602292 +0000 UTC m=+0.063562984 container create b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:36:45 np0005481065 systemd[1]: Started libpod-conmon-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope.
Oct 11 04:36:45 np0005481065 podman[265860]: 2025-10-11 08:36:45.498697668 +0000 UTC m=+0.040658430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:36:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:36:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:36:45 np0005481065 podman[265860]: 2025-10-11 08:36:45.629147449 +0000 UTC m=+0.171108171 container init b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:36:45 np0005481065 podman[265860]: 2025-10-11 08:36:45.63512055 +0000 UTC m=+0.177081252 container start b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:36:45 np0005481065 podman[265860]: 2025-10-11 08:36:45.638310131 +0000 UTC m=+0.180270913 container attach b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:36:45 np0005481065 nova_compute[260935]: 2025-10-11 08:36:45.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:46 np0005481065 nova_compute[260935]: 2025-10-11 08:36:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:46 np0005481065 nova_compute[260935]: 2025-10-11 08:36:46.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:46 np0005481065 nova_compute[260935]: 2025-10-11 08:36:46.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]: {
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "osd_id": 2,
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "type": "bluestore"
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:    },
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "osd_id": 0,
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "type": "bluestore"
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:    },
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "osd_id": 1,
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:        "type": "bluestore"
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]:    }
Oct 11 04:36:46 np0005481065 fervent_margulis[265876]: }
Oct 11 04:36:46 np0005481065 systemd[1]: libpod-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope: Deactivated successfully.
Oct 11 04:36:46 np0005481065 systemd[1]: libpod-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope: Consumed 1.120s CPU time.
Oct 11 04:36:46 np0005481065 podman[265860]: 2025-10-11 08:36:46.750437092 +0000 UTC m=+1.292397814 container died b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:36:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339-merged.mount: Deactivated successfully.
Oct 11 04:36:46 np0005481065 podman[265860]: 2025-10-11 08:36:46.883437346 +0000 UTC m=+1.425398038 container remove b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:36:46 np0005481065 systemd[1]: libpod-conmon-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope: Deactivated successfully.
Oct 11 04:36:46 np0005481065 podman[265908]: 2025-10-11 08:36:46.902799768 +0000 UTC m=+0.195646122 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 04:36:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:36:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:36:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:36:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:36:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9cea45c5-d653-4634-bf7e-e840e4b80f00 does not exist
Oct 11 04:36:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ba796e0c-7ee4-41c8-a373-8eaab63b8d1f does not exist
Oct 11 04:36:47 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:36:47 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:36:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:47 np0005481065 nova_compute[260935]: 2025-10-11 08:36:47.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:47 np0005481065 nova_compute[260935]: 2025-10-11 08:36:47.724 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:47 np0005481065 nova_compute[260935]: 2025-10-11 08:36:47.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.737 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.774 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.774 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.775 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.775 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:36:48 np0005481065 nova_compute[260935]: 2025-10-11 08:36:48.776 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:36:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:36:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935852823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.251 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:36:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.523 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.525 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.526 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.527 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.707 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.707 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:36:49 np0005481065 nova_compute[260935]: 2025-10-11 08:36:49.723 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:36:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:36:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733932756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:36:50 np0005481065 nova_compute[260935]: 2025-10-11 08:36:50.199 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:36:50 np0005481065 nova_compute[260935]: 2025-10-11 08:36:50.208 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:36:50 np0005481065 nova_compute[260935]: 2025-10-11 08:36:50.238 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:36:50 np0005481065 nova_compute[260935]: 2025-10-11 08:36:50.241 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:36:50 np0005481065 nova_compute[260935]: 2025-10-11 08:36:50.241 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:36:51 np0005481065 nova_compute[260935]: 2025-10-11 08:36:51.207 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:36:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:36:54
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.log']
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:36:54 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:36:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:36:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:36:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:02 np0005481065 podman[266040]: 2025-10-11 08:37:02.795289653 +0000 UTC m=+0.093121877 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:37:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:37:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:37:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:09 np0005481065 podman[266061]: 2025-10-11 08:37:09.795694003 +0000 UTC m=+0.063067340 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:37:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:14 np0005481065 podman[266081]: 2025-10-11 08:37:14.824606587 +0000 UTC m=+0.115811154 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:37:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:37:15.167 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:37:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:37:15.168 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:37:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:37:15.168 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:37:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:17 np0005481065 podman[266101]: 2025-10-11 08:37:17.838329419 +0000 UTC m=+0.138696377 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 04:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
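The recurring `pgmap vNNN` lines follow a fixed shape (version, PG count and states, data/used/avail figures), which makes them easy to extract when scanning a journal like this for capacity trends. A small parsing sketch, assuming the line format shown above stays stable (the regex and group names are ours):

```python
import re

# Matches e.g. "pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, ..."
PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
)

line = ("log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: "
        "321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
m = PGMAP_RE.search(line)
print(m.group("ver"), m.group("pgs"), m.group("used"))
```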
Oct 11 04:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:32 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 11 04:37:32 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:32.991489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:37:32 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 11 04:37:32 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171852991535, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 768, "num_deletes": 255, "total_data_size": 960681, "memory_usage": 974720, "flush_reason": "Manual Compaction"}
Oct 11 04:37:32 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853002313, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 951926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18692, "largest_seqno": 19459, "table_properties": {"data_size": 948028, "index_size": 1678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8323, "raw_average_key_size": 18, "raw_value_size": 940141, "raw_average_value_size": 2061, "num_data_blocks": 76, "num_entries": 456, "num_filter_entries": 456, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171788, "oldest_key_time": 1760171788, "file_creation_time": 1760171852, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10877 microseconds, and 3580 cpu microseconds.
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.002364) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 951926 bytes OK
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.002388) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004137) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004158) EVENT_LOG_v1 {"time_micros": 1760171853004152, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004178) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 956778, prev total WAL file size 956778, number of live WAL files 2.
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(929KB)], [44(6021KB)]
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853004949, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7117830, "oldest_snapshot_seqno": -1}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4110 keys, 6986821 bytes, temperature: kUnknown
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853054269, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 6986821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6958843, "index_size": 16604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 101863, "raw_average_key_size": 24, "raw_value_size": 6883857, "raw_average_value_size": 1674, "num_data_blocks": 696, "num_entries": 4110, "num_filter_entries": 4110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.054592) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 6986821 bytes
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.056374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.0 rd, 141.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 5.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(14.8) write-amplify(7.3) OK, records in: 4632, records dropped: 522 output_compression: NoCompression
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.056403) EVENT_LOG_v1 {"time_micros": 1760171853056390, "job": 22, "event": "compaction_finished", "compaction_time_micros": 49414, "compaction_time_cpu_micros": 32730, "output_level": 6, "num_output_files": 1, "total_output_size": 6986821, "num_input_records": 4632, "num_output_records": 4110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853056877, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853059089, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:37:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
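The rocksdb flush/compaction sequence above emits machine-readable `EVENT_LOG_v1` records as JSON appended to the log line, which is more reliable to consume than the free-text messages around them. A sketch of pulling those payloads out of journal lines — `parse_event_log` is our name, and this assumes the payload is always a single JSON object terminating the line, as in the entries above:

```python
import json

def parse_event_log(line: str):
    """Extract the JSON payload from a RocksDB EVENT_LOG_v1 line, or None."""
    marker = "EVENT_LOG_v1 "
    idx = line.find(marker)
    if idx == -1:
        return None
    return json.loads(line[idx + len(marker):])

line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853056390, "job": 22, '
        '"event": "compaction_finished", "output_level": 6}')
evt = parse_event_log(line)
print(evt["event"], evt["job"])
```

Filtering on `evt["event"] in ("flush_started", "compaction_finished")` is enough to reconstruct the JOB 21/22 timeline seen in this section.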
Oct 11 04:37:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:33 np0005481065 podman[266127]: 2025-10-11 08:37:33.789085259 +0000 UTC m=+0.079234871 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 04:37:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:37:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190014263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:37:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:37:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190014263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:37:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:40 np0005481065 podman[266148]: 2025-10-11 08:37:40.800848261 +0000 UTC m=+0.097885573 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:37:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:43 np0005481065 nova_compute[260935]: 2025-10-11 08:37:43.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:43 np0005481065 nova_compute[260935]: 2025-10-11 08:37:43.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 04:37:43 np0005481065 nova_compute[260935]: 2025-10-11 08:37:43.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 04:37:43 np0005481065 nova_compute[260935]: 2025-10-11 08:37:43.738 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:43 np0005481065 nova_compute[260935]: 2025-10-11 08:37:43.738 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 04:37:43 np0005481065 nova_compute[260935]: 2025-10-11 08:37:43.755 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:45 np0005481065 podman[266169]: 2025-10-11 08:37:45.792266567 +0000 UTC m=+0.089171374 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd)
Oct 11 04:37:46 np0005481065 nova_compute[260935]: 2025-10-11 08:37:46.768 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:46 np0005481065 nova_compute[260935]: 2025-10-11 08:37:46.768 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:46 np0005481065 nova_compute[260935]: 2025-10-11 08:37:46.769 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:46 np0005481065 nova_compute[260935]: 2025-10-11 08:37:46.769 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:37:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:37:48 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 611af44b-8405-4923-9649-83e72f55ae4d does not exist
Oct 11 04:37:48 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c5415471-9297-4e27-aa32-aafd2789e4d1 does not exist
Oct 11 04:37:48 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 047cf064-a2c3-4e32-80ef-8fbbcbb523b6 does not exist
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:37:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:37:48 np0005481065 podman[266345]: 2025-10-11 08:37:48.435991526 +0000 UTC m=+0.138498871 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:37:48 np0005481065 nova_compute[260935]: 2025-10-11 08:37:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:48 np0005481065 nova_compute[260935]: 2025-10-11 08:37:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:48 np0005481065 podman[266487]: 2025-10-11 08:37:48.888339109 +0000 UTC m=+0.067985470 container create cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:37:48 np0005481065 systemd[1]: Started libpod-conmon-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope.
Oct 11 04:37:48 np0005481065 podman[266487]: 2025-10-11 08:37:48.85962162 +0000 UTC m=+0.039268041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:37:49 np0005481065 podman[266487]: 2025-10-11 08:37:49.004940065 +0000 UTC m=+0.184586466 container init cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:37:49 np0005481065 podman[266487]: 2025-10-11 08:37:49.016769042 +0000 UTC m=+0.196415413 container start cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:37:49 np0005481065 podman[266487]: 2025-10-11 08:37:49.021895219 +0000 UTC m=+0.201541590 container attach cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:37:49 np0005481065 nifty_nightingale[266503]: 167 167
Oct 11 04:37:49 np0005481065 systemd[1]: libpod-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope: Deactivated successfully.
Oct 11 04:37:49 np0005481065 conmon[266503]: conmon cc0a0a35dcb1a82e7421 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope/container/memory.events
Oct 11 04:37:49 np0005481065 podman[266487]: 2025-10-11 08:37:49.027667023 +0000 UTC m=+0.207313384 container died cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:37:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-73c38e87bb0f9feaf2d8cda1712a358afafc1cdedc1b0cbf830f9917a50391cf-merged.mount: Deactivated successfully.
Oct 11 04:37:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:37:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:37:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:37:49 np0005481065 podman[266487]: 2025-10-11 08:37:49.083405443 +0000 UTC m=+0.263051814 container remove cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:37:49 np0005481065 systemd[1]: libpod-conmon-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope: Deactivated successfully.
Oct 11 04:37:49 np0005481065 podman[266528]: 2025-10-11 08:37:49.281429322 +0000 UTC m=+0.054685051 container create f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:37:49 np0005481065 systemd[1]: Started libpod-conmon-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope.
Oct 11 04:37:49 np0005481065 podman[266528]: 2025-10-11 08:37:49.255670427 +0000 UTC m=+0.028926216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:37:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:49 np0005481065 podman[266528]: 2025-10-11 08:37:49.384007687 +0000 UTC m=+0.157263466 container init f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:37:49 np0005481065 podman[266528]: 2025-10-11 08:37:49.399658334 +0000 UTC m=+0.172914073 container start f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:37:49 np0005481065 podman[266528]: 2025-10-11 08:37:49.404044909 +0000 UTC m=+0.177300648 container attach f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:37:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:37:49 np0005481065 nova_compute[260935]: 2025-10-11 08:37:49.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:37:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:37:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291088395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.210 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.416 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.418 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5116MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.418 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.419 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:37:50 np0005481065 thirsty_shtern[266545]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:37:50 np0005481065 thirsty_shtern[266545]: --> relative data size: 1.0
Oct 11 04:37:50 np0005481065 thirsty_shtern[266545]: --> All data devices are unavailable
Oct 11 04:37:50 np0005481065 systemd[1]: libpod-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope: Deactivated successfully.
Oct 11 04:37:50 np0005481065 podman[266528]: 2025-10-11 08:37:50.534779492 +0000 UTC m=+1.308035191 container died f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:37:50 np0005481065 systemd[1]: libpod-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope: Consumed 1.070s CPU time.
Oct 11 04:37:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86-merged.mount: Deactivated successfully.
Oct 11 04:37:50 np0005481065 podman[266528]: 2025-10-11 08:37:50.606970721 +0000 UTC m=+1.380226440 container remove f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:37:50 np0005481065 systemd[1]: libpod-conmon-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope: Deactivated successfully.
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.782 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:37:50 np0005481065 nova_compute[260935]: 2025-10-11 08:37:50.896 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.011 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.011 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.041 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.066 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.091 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:37:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.525077319 +0000 UTC m=+0.063847863 container create ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:37:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:37:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280830858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.568 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:37:51 np0005481065 systemd[1]: Started libpod-conmon-ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89.scope.
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.579 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.50127807 +0000 UTC m=+0.040048624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.62960991 +0000 UTC m=+0.168380454 container init ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.643351022 +0000 UTC m=+0.182121556 container start ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.647852171 +0000 UTC m=+0.186622785 container attach ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:37:51 np0005481065 mystifying_wescoff[266788]: 167 167
Oct 11 04:37:51 np0005481065 systemd[1]: libpod-ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89.scope: Deactivated successfully.
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.654243993 +0000 UTC m=+0.193014547 container died ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.688 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:37:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-163a8781212963e2ca5f87b8d7b1048ba5afd23c944307695e18708a37282a80-merged.mount: Deactivated successfully.
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.691 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:37:51 np0005481065 nova_compute[260935]: 2025-10-11 08:37:51.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:37:51 np0005481065 podman[266770]: 2025-10-11 08:37:51.712230087 +0000 UTC m=+0.251000631 container remove ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:37:51 np0005481065 systemd[1]: libpod-conmon-ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89.scope: Deactivated successfully.
Oct 11 04:37:51 np0005481065 podman[266812]: 2025-10-11 08:37:51.957431961 +0000 UTC m=+0.066206229 container create 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:37:52 np0005481065 systemd[1]: Started libpod-conmon-4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da.scope.
Oct 11 04:37:52 np0005481065 podman[266812]: 2025-10-11 08:37:51.93460668 +0000 UTC m=+0.043380968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:37:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:52 np0005481065 podman[266812]: 2025-10-11 08:37:52.07169576 +0000 UTC m=+0.180470088 container init 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:37:52 np0005481065 podman[266812]: 2025-10-11 08:37:52.082395896 +0000 UTC m=+0.191170204 container start 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:37:52 np0005481065 podman[266812]: 2025-10-11 08:37:52.086510613 +0000 UTC m=+0.195285001 container attach 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:37:52 np0005481065 nova_compute[260935]: 2025-10-11 08:37:52.688 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:37:52 np0005481065 nova_compute[260935]: 2025-10-11 08:37:52.689 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:37:52 np0005481065 nova_compute[260935]: 2025-10-11 08:37:52.690 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:37:52 np0005481065 nova_compute[260935]: 2025-10-11 08:37:52.690 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:37:52 np0005481065 nova_compute[260935]: 2025-10-11 08:37:52.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:37:52 np0005481065 keen_lewin[266829]: {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:    "0": [
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:        {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "devices": [
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "/dev/loop3"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            ],
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_name": "ceph_lv0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_size": "21470642176",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "name": "ceph_lv0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "tags": {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cluster_name": "ceph",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.crush_device_class": "",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.encrypted": "0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osd_id": "0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.type": "block",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.vdo": "0"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            },
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "type": "block",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "vg_name": "ceph_vg0"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:        }
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:    ],
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:    "1": [
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:        {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "devices": [
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "/dev/loop4"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            ],
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_name": "ceph_lv1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_size": "21470642176",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "name": "ceph_lv1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "tags": {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cluster_name": "ceph",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.crush_device_class": "",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.encrypted": "0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osd_id": "1",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.type": "block",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.vdo": "0"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            },
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "type": "block",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "vg_name": "ceph_vg1"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:        }
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:    ],
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:    "2": [
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:        {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "devices": [
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "/dev/loop5"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            ],
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_name": "ceph_lv2",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_size": "21470642176",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "name": "ceph_lv2",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "tags": {
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.cluster_name": "ceph",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.crush_device_class": "",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.encrypted": "0",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osd_id": "2",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.type": "block",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:                "ceph.vdo": "0"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            },
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "type": "block",
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:            "vg_name": "ceph_vg2"
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:        }
Oct 11 04:37:52 np0005481065 keen_lewin[266829]:    ]
Oct 11 04:37:52 np0005481065 keen_lewin[266829]: }
Oct 11 04:37:52 np0005481065 systemd[1]: libpod-4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da.scope: Deactivated successfully.
Oct 11 04:37:52 np0005481065 podman[266838]: 2025-10-11 08:37:52.968022417 +0000 UTC m=+0.032976871 container died 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:37:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76-merged.mount: Deactivated successfully.
Oct 11 04:37:53 np0005481065 podman[266838]: 2025-10-11 08:37:53.04347582 +0000 UTC m=+0.108430254 container remove 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:37:53 np0005481065 systemd[1]: libpod-conmon-4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da.scope: Deactivated successfully.
Oct 11 04:37:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:54.026127129 +0000 UTC m=+0.111925484 container create 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:53.954231318 +0000 UTC m=+0.040029693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:54 np0005481065 systemd[1]: Started libpod-conmon-4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b.scope.
Oct 11 04:37:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:54.144286529 +0000 UTC m=+0.230084894 container init 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:54.157071694 +0000 UTC m=+0.242870059 container start 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:54.161867481 +0000 UTC m=+0.247665896 container attach 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:37:54 np0005481065 dazzling_chatelet[267011]: 167 167
Oct 11 04:37:54 np0005481065 systemd[1]: libpod-4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b.scope: Deactivated successfully.
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:54.167181502 +0000 UTC m=+0.252979867 container died 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:37:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5bd638472678d4392cb4f2c669c318331e4baa597b9b9fe223da981af2de770e-merged.mount: Deactivated successfully.
Oct 11 04:37:54 np0005481065 podman[266995]: 2025-10-11 08:37:54.218300191 +0000 UTC m=+0.304098556 container remove 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:37:54 np0005481065 systemd[1]: libpod-conmon-4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b.scope: Deactivated successfully.
Oct 11 04:37:54 np0005481065 podman[267036]: 2025-10-11 08:37:54.475707342 +0000 UTC m=+0.074248879 container create 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:37:54 np0005481065 systemd[1]: Started libpod-conmon-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope.
Oct 11 04:37:54 np0005481065 podman[267036]: 2025-10-11 08:37:54.445940733 +0000 UTC m=+0.044482320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:37:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:37:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:37:54 np0005481065 podman[267036]: 2025-10-11 08:37:54.579111201 +0000 UTC m=+0.177652758 container init 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:37:54 np0005481065 podman[267036]: 2025-10-11 08:37:54.596611611 +0000 UTC m=+0.195153148 container start 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:37:54 np0005481065 podman[267036]: 2025-10-11 08:37:54.601304134 +0000 UTC m=+0.199845731 container attach 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:37:54
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'vms', 'backups']
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]: {
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "osd_id": 2,
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "type": "bluestore"
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:    },
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "osd_id": 0,
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "type": "bluestore"
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:    },
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "osd_id": 1,
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:        "type": "bluestore"
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]:    }
Oct 11 04:37:55 np0005481065 gracious_margulis[267053]: }
Oct 11 04:37:55 np0005481065 systemd[1]: libpod-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope: Deactivated successfully.
Oct 11 04:37:55 np0005481065 podman[267036]: 2025-10-11 08:37:55.721845957 +0000 UTC m=+1.320387504 container died 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:37:55 np0005481065 systemd[1]: libpod-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope: Consumed 1.129s CPU time.
Oct 11 04:37:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e-merged.mount: Deactivated successfully.
Oct 11 04:37:55 np0005481065 podman[267036]: 2025-10-11 08:37:55.807523481 +0000 UTC m=+1.406065018 container remove 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:37:55 np0005481065 systemd[1]: libpod-conmon-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope: Deactivated successfully.
Oct 11 04:37:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:37:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:37:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:37:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7d623a57-1adc-4775-aa55-d65085f6855d does not exist
Oct 11 04:37:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 22ef0d61-3740-4eb5-8e4a-4018b923e7f4 does not exist
Oct 11 04:37:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:37:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:37:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:37:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:37:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:38:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:38:04 np0005481065 podman[267150]: 2025-10-11 08:38:04.816134102 +0000 UTC m=+0.107046944 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 11 04:38:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:11 np0005481065 podman[267170]: 2025-10-11 08:38:11.769834458 +0000 UTC m=+0.071583863 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 04:38:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:38:15.168 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:38:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:38:15.169 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:38:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:38:15.169 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:38:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:16 np0005481065 podman[267192]: 2025-10-11 08:38:16.787106352 +0000 UTC m=+0.085163531 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:38:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:18 np0005481065 podman[267213]: 2025-10-11 08:38:18.840619817 +0000 UTC m=+0.131529163 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:38:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:35 np0005481065 podman[267242]: 2025-10-11 08:38:35.792380768 +0000 UTC m=+0.090646336 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:38:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:38:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518290371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:38:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:38:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518290371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:38:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:42 np0005481065 podman[267262]: 2025-10-11 08:38:42.789231823 +0000 UTC m=+0.084082079 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 04:38:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:46 np0005481065 nova_compute[260935]: 2025-10-11 08:38:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:46 np0005481065 nova_compute[260935]: 2025-10-11 08:38:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:46 np0005481065 nova_compute[260935]: 2025-10-11 08:38:46.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:38:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:47 np0005481065 podman[267283]: 2025-10-11 08:38:47.785429044 +0000 UTC m=+0.076769301 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 04:38:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:48 np0005481065 nova_compute[260935]: 2025-10-11 08:38:48.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:48 np0005481065 nova_compute[260935]: 2025-10-11 08:38:48.773 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:49 np0005481065 nova_compute[260935]: 2025-10-11 08:38:49.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:49 np0005481065 podman[267303]: 2025-10-11 08:38:49.837177227 +0000 UTC m=+0.135075704 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:38:50 np0005481065 nova_compute[260935]: 2025-10-11 08:38:50.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:51 np0005481065 nova_compute[260935]: 2025-10-11 08:38:51.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:51 np0005481065 nova_compute[260935]: 2025-10-11 08:38:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:51 np0005481065 nova_compute[260935]: 2025-10-11 08:38:51.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:53 np0005481065 nova_compute[260935]: 2025-10-11 08:38:53.534 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:38:53 np0005481065 nova_compute[260935]: 2025-10-11 08:38:53.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:38:53 np0005481065 nova_compute[260935]: 2025-10-11 08:38:53.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:38:53 np0005481065 nova_compute[260935]: 2025-10-11 08:38:53.535 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:38:53 np0005481065 nova_compute[260935]: 2025-10-11 08:38:53.536 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:38:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:38:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179881512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:38:54 np0005481065 nova_compute[260935]: 2025-10-11 08:38:54.016 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:38:54 np0005481065 nova_compute[260935]: 2025-10-11 08:38:54.202 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:38:54 np0005481065 nova_compute[260935]: 2025-10-11 08:38:54.203 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:38:54 np0005481065 nova_compute[260935]: 2025-10-11 08:38:54.204 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:38:54 np0005481065 nova_compute[260935]: 2025-10-11 08:38:54.204 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:38:54
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', '.rgw.root', 'default.rgw.control', 'backups', '.mgr']
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.147 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.173 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:38:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:38:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180341362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.604 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.609 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.792 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.793 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:38:55 np0005481065 nova_compute[260935]: 2025-10-11 08:38:55.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:38:56 np0005481065 nova_compute[260935]: 2025-10-11 08:38:56.794 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:38:56 np0005481065 nova_compute[260935]: 2025-10-11 08:38:56.795 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:38:56 np0005481065 nova_compute[260935]: 2025-10-11 08:38:56.795 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:38:56 np0005481065 nova_compute[260935]: 2025-10-11 08:38:56.824 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:38:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8afa6748-4d8a-49dd-a7b7-31f015af244e does not exist
Oct 11 04:38:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3002e3f2-b4df-4e24-ac15-977cbf8f3593 does not exist
Oct 11 04:38:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4b0b8523-7efe-4cdf-9450-ab5f7ac6324f does not exist
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:38:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:38:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:38:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:38:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:38:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.696245618 +0000 UTC m=+0.050904963 container create e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:38:57 np0005481065 systemd[1]: Started libpod-conmon-e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3.scope.
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.669645809 +0000 UTC m=+0.024305244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.786513263 +0000 UTC m=+0.141172628 container init e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.799533584 +0000 UTC m=+0.154192969 container start e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.804503116 +0000 UTC m=+0.159162461 container attach e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:38:57 np0005481065 hardcore_archimedes[267659]: 167 167
Oct 11 04:38:57 np0005481065 systemd[1]: libpod-e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3.scope: Deactivated successfully.
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.807212543 +0000 UTC m=+0.161871888 container died e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:38:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6692efc6ee7b4c340588fe59815a8bab78c225adba108fe2f645e9075512b486-merged.mount: Deactivated successfully.
Oct 11 04:38:57 np0005481065 podman[267643]: 2025-10-11 08:38:57.883447798 +0000 UTC m=+0.238107183 container remove e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:38:57 np0005481065 systemd[1]: libpod-conmon-e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3.scope: Deactivated successfully.
Oct 11 04:38:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:38:58 np0005481065 podman[267682]: 2025-10-11 08:38:58.05811687 +0000 UTC m=+0.045589451 container create ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:38:58 np0005481065 systemd[1]: Started libpod-conmon-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope.
Oct 11 04:38:58 np0005481065 podman[267682]: 2025-10-11 08:38:58.036560315 +0000 UTC m=+0.024032916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:38:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:38:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:38:58 np0005481065 podman[267682]: 2025-10-11 08:38:58.158976347 +0000 UTC m=+0.146448958 container init ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:38:58 np0005481065 podman[267682]: 2025-10-11 08:38:58.1723998 +0000 UTC m=+0.159872381 container start ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:38:58 np0005481065 podman[267682]: 2025-10-11 08:38:58.177069323 +0000 UTC m=+0.164541954 container attach ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:38:59 np0005481065 distracted_ardinghelli[267699]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:38:59 np0005481065 distracted_ardinghelli[267699]: --> relative data size: 1.0
Oct 11 04:38:59 np0005481065 distracted_ardinghelli[267699]: --> All data devices are unavailable
Oct 11 04:38:59 np0005481065 systemd[1]: libpod-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope: Deactivated successfully.
Oct 11 04:38:59 np0005481065 systemd[1]: libpod-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope: Consumed 1.063s CPU time.
Oct 11 04:38:59 np0005481065 podman[267682]: 2025-10-11 08:38:59.295013111 +0000 UTC m=+1.282485692 container died ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:38:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b-merged.mount: Deactivated successfully.
Oct 11 04:38:59 np0005481065 podman[267682]: 2025-10-11 08:38:59.367672993 +0000 UTC m=+1.355145564 container remove ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:38:59 np0005481065 systemd[1]: libpod-conmon-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope: Deactivated successfully.
Oct 11 04:38:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.275340464 +0000 UTC m=+0.078599163 container create 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.244967297 +0000 UTC m=+0.048226036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:00 np0005481065 systemd[1]: Started libpod-conmon-649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5.scope.
Oct 11 04:39:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.414665008 +0000 UTC m=+0.217923757 container init 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.426886317 +0000 UTC m=+0.230145006 container start 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.43084789 +0000 UTC m=+0.234106589 container attach 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:39:00 np0005481065 youthful_banach[267903]: 167 167
Oct 11 04:39:00 np0005481065 systemd[1]: libpod-649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5.scope: Deactivated successfully.
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.435053029 +0000 UTC m=+0.238311718 container died 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:39:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5d5bc87112bd85bb2f6438288cfb3876ace0eb59d2e8a885347a345ec635749a-merged.mount: Deactivated successfully.
Oct 11 04:39:00 np0005481065 podman[267886]: 2025-10-11 08:39:00.485683154 +0000 UTC m=+0.288941853 container remove 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 04:39:00 np0005481065 systemd[1]: libpod-conmon-649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5.scope: Deactivated successfully.
Oct 11 04:39:00 np0005481065 podman[267926]: 2025-10-11 08:39:00.750918009 +0000 UTC m=+0.073826427 container create 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:39:00 np0005481065 systemd[1]: Started libpod-conmon-1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f.scope.
Oct 11 04:39:00 np0005481065 podman[267926]: 2025-10-11 08:39:00.71832662 +0000 UTC m=+0.041235078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:39:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:00 np0005481065 podman[267926]: 2025-10-11 08:39:00.854469583 +0000 UTC m=+0.177378011 container init 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:39:00 np0005481065 podman[267926]: 2025-10-11 08:39:00.870942703 +0000 UTC m=+0.193851091 container start 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:39:00 np0005481065 podman[267926]: 2025-10-11 08:39:00.874881325 +0000 UTC m=+0.197789753 container attach 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:39:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]: {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:    "0": [
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:        {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "devices": [
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "/dev/loop3"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            ],
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_name": "ceph_lv0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_size": "21470642176",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "name": "ceph_lv0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "tags": {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cluster_name": "ceph",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.crush_device_class": "",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.encrypted": "0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osd_id": "0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.type": "block",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.vdo": "0"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            },
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "type": "block",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "vg_name": "ceph_vg0"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:        }
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:    ],
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:    "1": [
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:        {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "devices": [
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "/dev/loop4"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            ],
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_name": "ceph_lv1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_size": "21470642176",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "name": "ceph_lv1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "tags": {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cluster_name": "ceph",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.crush_device_class": "",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.encrypted": "0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osd_id": "1",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.type": "block",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.vdo": "0"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            },
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "type": "block",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "vg_name": "ceph_vg1"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:        }
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:    ],
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:    "2": [
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:        {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "devices": [
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "/dev/loop5"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            ],
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_name": "ceph_lv2",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_size": "21470642176",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "name": "ceph_lv2",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "tags": {
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.cluster_name": "ceph",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.crush_device_class": "",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.encrypted": "0",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osd_id": "2",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.type": "block",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:                "ceph.vdo": "0"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            },
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "type": "block",
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:            "vg_name": "ceph_vg2"
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:        }
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]:    ]
Oct 11 04:39:01 np0005481065 flamboyant_haibt[267943]: }
Oct 11 04:39:01 np0005481065 systemd[1]: libpod-1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f.scope: Deactivated successfully.
Oct 11 04:39:01 np0005481065 podman[267926]: 2025-10-11 08:39:01.696552833 +0000 UTC m=+1.019461251 container died 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:39:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1-merged.mount: Deactivated successfully.
Oct 11 04:39:01 np0005481065 podman[267926]: 2025-10-11 08:39:01.782404512 +0000 UTC m=+1.105312920 container remove 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:39:01 np0005481065 systemd[1]: libpod-conmon-1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f.scope: Deactivated successfully.
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.707075326 +0000 UTC m=+0.065931972 container create 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:39:02 np0005481065 systemd[1]: Started libpod-conmon-898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5.scope.
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.681487886 +0000 UTC m=+0.040344582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.81132767 +0000 UTC m=+0.170184356 container init 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.822774216 +0000 UTC m=+0.181630872 container start 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.827381338 +0000 UTC m=+0.186238034 container attach 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:39:02 np0005481065 suspicious_hoover[268122]: 167 167
Oct 11 04:39:02 np0005481065 systemd[1]: libpod-898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5.scope: Deactivated successfully.
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.831450964 +0000 UTC m=+0.190307610 container died 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:39:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-217d92d393ffbd968d07661fb62a1659ef920f4fd7dd2f5ad3f9ab493a3ceaaa-merged.mount: Deactivated successfully.
Oct 11 04:39:02 np0005481065 podman[268106]: 2025-10-11 08:39:02.914872763 +0000 UTC m=+0.273729379 container remove 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:39:02 np0005481065 systemd[1]: libpod-conmon-898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5.scope: Deactivated successfully.
Oct 11 04:39:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:03 np0005481065 podman[268145]: 2025-10-11 08:39:03.147375915 +0000 UTC m=+0.066503138 container create c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:39:03 np0005481065 systemd[1]: Started libpod-conmon-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope.
Oct 11 04:39:03 np0005481065 podman[268145]: 2025-10-11 08:39:03.119784338 +0000 UTC m=+0.038911611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:39:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:39:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:39:03 np0005481065 podman[268145]: 2025-10-11 08:39:03.250665041 +0000 UTC m=+0.169792304 container init c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:39:03 np0005481065 podman[268145]: 2025-10-11 08:39:03.264867196 +0000 UTC m=+0.183994419 container start c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:39:03 np0005481065 podman[268145]: 2025-10-11 08:39:03.271238568 +0000 UTC m=+0.190365831 container attach c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:39:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]: {
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "osd_id": 2,
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "type": "bluestore"
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:    },
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "osd_id": 0,
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "type": "bluestore"
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:    },
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "osd_id": 1,
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:        "type": "bluestore"
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]:    }
Oct 11 04:39:04 np0005481065 infallible_galileo[268162]: }
Oct 11 04:39:04 np0005481065 systemd[1]: libpod-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope: Deactivated successfully.
Oct 11 04:39:04 np0005481065 systemd[1]: libpod-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope: Consumed 1.121s CPU time.
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:39:04 np0005481065 podman[268195]: 2025-10-11 08:39:04.446767219 +0000 UTC m=+0.045096877 container died c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:39:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a-merged.mount: Deactivated successfully.
Oct 11 04:39:04 np0005481065 podman[268195]: 2025-10-11 08:39:04.526306978 +0000 UTC m=+0.124636596 container remove c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:39:04 np0005481065 systemd[1]: libpod-conmon-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope: Deactivated successfully.
Oct 11 04:39:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:39:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:39:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:39:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ef3172c8-de60-405c-bdb2-78267df11b5f does not exist
Oct 11 04:39:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 61a5e16b-24d2-43ac-a536-a7d022ea8ae9 does not exist
Oct 11 04:39:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:39:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:39:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:06 np0005481065 podman[268260]: 2025-10-11 08:39:06.821136685 +0000 UTC m=+0.108587788 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:39:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:13 np0005481065 podman[268281]: 2025-10-11 08:39:13.78524304 +0000 UTC m=+0.085681305 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:39:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:39:15.169 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:39:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:39:15.170 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:39:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:39:15.170 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:39:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:18 np0005481065 podman[268301]: 2025-10-11 08:39:18.80283509 +0000 UTC m=+0.098295485 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 11 04:39:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:20 np0005481065 podman[268322]: 2025-10-11 08:39:20.847716997 +0000 UTC m=+0.144793831 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 04:39:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:39:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2935519272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:39:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:39:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2935519272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:39:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:37 np0005481065 podman[268350]: 2025-10-11 08:39:37.801419183 +0000 UTC m=+0.097213154 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 04:39:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:44 np0005481065 podman[268369]: 2025-10-11 08:39:44.777690303 +0000 UTC m=+0.078469180 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:39:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:47 np0005481065 nova_compute[260935]: 2025-10-11 08:39:47.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:48 np0005481065 nova_compute[260935]: 2025-10-11 08:39:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:48 np0005481065 nova_compute[260935]: 2025-10-11 08:39:48.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:39:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:49 np0005481065 nova_compute[260935]: 2025-10-11 08:39:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:49 np0005481065 nova_compute[260935]: 2025-10-11 08:39:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:49 np0005481065 podman[268389]: 2025-10-11 08:39:49.79410814 +0000 UTC m=+0.093978041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 11 04:39:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:39:51 np0005481065 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:39:51 np0005481065 podman[268410]: 2025-10-11 08:39:51.899928357 +0000 UTC m=+0.188388405 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:39:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:39:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279787727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.195 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.378 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.380 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.381 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.381 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.466 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.467 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.493 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:39:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:39:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496415161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.951 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.959 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.979 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.982 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:39:52 np0005481065 nova_compute[260935]: 2025-10-11 08:39:52.982 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:39:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:39:54
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta']
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:39:54 np0005481065 nova_compute[260935]: 2025-10-11 08:39:54.983 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:39:54 np0005481065 nova_compute[260935]: 2025-10-11 08:39:54.984 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:39:54 np0005481065 nova_compute[260935]: 2025-10-11 08:39:54.984 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:39:55 np0005481065 nova_compute[260935]: 2025-10-11 08:39:55.004 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:39:55 np0005481065 nova_compute[260935]: 2025-10-11 08:39:55.004 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:39:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:39:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:39:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:40:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:40:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:40:05 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e33c656d-9512-4357-b752-217993c6cc20 does not exist
Oct 11 04:40:05 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 090414a1-eac5-415d-aa19-8d8956f2c471 does not exist
Oct 11 04:40:05 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 71c6af04-8ce0-4d70-a50e-ad8fe30ec914 does not exist
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:40:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.694808543 +0000 UTC m=+0.066375263 container create 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:40:06 np0005481065 systemd[1]: Started libpod-conmon-753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0.scope.
Oct 11 04:40:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:40:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:40:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.66591886 +0000 UTC m=+0.037485630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.801230519 +0000 UTC m=+0.172797269 container init 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.811728298 +0000 UTC m=+0.183295018 container start 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.815595239 +0000 UTC m=+0.187161959 container attach 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:40:06 np0005481065 flamboyant_thompson[268772]: 167 167
Oct 11 04:40:06 np0005481065 systemd[1]: libpod-753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0.scope: Deactivated successfully.
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.822345311 +0000 UTC m=+0.193912021 container died 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:40:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2b177b526649d7ede552e12222b260cfa8eacd142d71dc68a99c4e03f679cc18-merged.mount: Deactivated successfully.
Oct 11 04:40:06 np0005481065 podman[268755]: 2025-10-11 08:40:06.878111912 +0000 UTC m=+0.249678592 container remove 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:40:06 np0005481065 systemd[1]: libpod-conmon-753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0.scope: Deactivated successfully.
Oct 11 04:40:07 np0005481065 podman[268796]: 2025-10-11 08:40:07.118522049 +0000 UTC m=+0.061042272 container create fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:40:07 np0005481065 systemd[1]: Started libpod-conmon-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope.
Oct 11 04:40:07 np0005481065 podman[268796]: 2025-10-11 08:40:07.089289576 +0000 UTC m=+0.031809889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:40:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:07 np0005481065 podman[268796]: 2025-10-11 08:40:07.224283736 +0000 UTC m=+0.166803979 container init fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:40:07 np0005481065 podman[268796]: 2025-10-11 08:40:07.23318891 +0000 UTC m=+0.175709173 container start fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:40:07 np0005481065 podman[268796]: 2025-10-11 08:40:07.236780653 +0000 UTC m=+0.179300876 container attach fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:40:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:40:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4619 writes, 20K keys, 4619 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4619 writes, 4619 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1308 writes, 5683 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 8.49 MB, 0.01 MB/s#012Interval WAL: 1308 writes, 1308 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.1      0.31              0.09        11    0.028       0      0       0.0       0.0#012  L6      1/0    6.66 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    162.4    133.2      0.52              0.30        10    0.052     43K   5259       0.0       0.0#012 Sum      1/0    6.66 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    102.1    110.1      0.83              0.39        21    0.039     43K   5259       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4    121.1    121.0      0.29              0.17         8    0.037     18K   2057       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    162.4    133.2      0.52              0.30        10    0.052     43K   5259       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     71.9      0.30              0.09        10    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.8 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 308.00 MB usage: 6.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000142 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(407,5.95 MB,1.9319%) FilterBlock(22,128.17 KB,0.0406389%) IndexBlock(22,237.91 KB,0.0754319%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:08 np0005481065 tender_solomon[268813]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:40:08 np0005481065 tender_solomon[268813]: --> relative data size: 1.0
Oct 11 04:40:08 np0005481065 tender_solomon[268813]: --> All data devices are unavailable
Oct 11 04:40:08 np0005481065 podman[268796]: 2025-10-11 08:40:08.396119232 +0000 UTC m=+1.338639465 container died fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 11 04:40:08 np0005481065 systemd[1]: libpod-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope: Deactivated successfully.
Oct 11 04:40:08 np0005481065 systemd[1]: libpod-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope: Consumed 1.107s CPU time.
Oct 11 04:40:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8-merged.mount: Deactivated successfully.
Oct 11 04:40:08 np0005481065 podman[268796]: 2025-10-11 08:40:08.479023277 +0000 UTC m=+1.421543540 container remove fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 04:40:08 np0005481065 systemd[1]: libpod-conmon-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope: Deactivated successfully.
Oct 11 04:40:08 np0005481065 podman[268843]: 2025-10-11 08:40:08.533169051 +0000 UTC m=+0.096020250 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.783124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008783163, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1477, "num_deletes": 251, "total_data_size": 2307185, "memory_usage": 2349344, "flush_reason": "Manual Compaction"}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008802965, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2274062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19460, "largest_seqno": 20936, "table_properties": {"data_size": 2267158, "index_size": 3975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14236, "raw_average_key_size": 19, "raw_value_size": 2253376, "raw_average_value_size": 3138, "num_data_blocks": 181, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171853, "oldest_key_time": 1760171853, "file_creation_time": 1760172008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 20188 microseconds, and 9651 cpu microseconds.
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.803281) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2274062 bytes OK
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.803409) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.805523) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.805606) EVENT_LOG_v1 {"time_micros": 1760172008805598, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.805633) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2300699, prev total WAL file size 2300699, number of live WAL files 2.
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.806938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2220KB)], [47(6823KB)]
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008806977, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9260883, "oldest_snapshot_seqno": -1}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4314 keys, 7493713 bytes, temperature: kUnknown
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008869327, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7493713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7463986, "index_size": 17841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106672, "raw_average_key_size": 24, "raw_value_size": 7384974, "raw_average_value_size": 1711, "num_data_blocks": 748, "num_entries": 4314, "num_filter_entries": 4314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.869620) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7493713 bytes
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.871187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 120.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.7 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4828, records dropped: 514 output_compression: NoCompression
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.871214) EVENT_LOG_v1 {"time_micros": 1760172008871202, "job": 24, "event": "compaction_finished", "compaction_time_micros": 62440, "compaction_time_cpu_micros": 31252, "output_level": 6, "num_output_files": 1, "total_output_size": 7493713, "num_input_records": 4828, "num_output_records": 4314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008872311, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008875465, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.806857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:40:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.38398262 +0000 UTC m=+0.068735782 container create 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:40:09 np0005481065 systemd[1]: Started libpod-conmon-00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e.scope.
Oct 11 04:40:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.358249806 +0000 UTC m=+0.043003008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.463971821 +0000 UTC m=+0.148724993 container init 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.476096637 +0000 UTC m=+0.160849769 container start 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:40:09 np0005481065 priceless_noether[269035]: 167 167
Oct 11 04:40:09 np0005481065 systemd[1]: libpod-00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e.scope: Deactivated successfully.
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.484330442 +0000 UTC m=+0.169083584 container attach 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.484653271 +0000 UTC m=+0.169406403 container died 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:40:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b45cb8eb5a1e3f85b75be311710a5a41c0353bfccdb0207e1ca22ec836707d5e-merged.mount: Deactivated successfully.
Oct 11 04:40:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:09 np0005481065 podman[269019]: 2025-10-11 08:40:09.523202371 +0000 UTC m=+0.207955493 container remove 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:40:09 np0005481065 systemd[1]: libpod-conmon-00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e.scope: Deactivated successfully.
Oct 11 04:40:09 np0005481065 podman[269059]: 2025-10-11 08:40:09.768011234 +0000 UTC m=+0.057545163 container create ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:40:09 np0005481065 systemd[1]: Started libpod-conmon-ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee.scope.
Oct 11 04:40:09 np0005481065 podman[269059]: 2025-10-11 08:40:09.748488027 +0000 UTC m=+0.038021996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:40:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:40:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:40:09 np0005481065 podman[269059]: 2025-10-11 08:40:09.891793275 +0000 UTC m=+0.181327204 container init ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:40:09 np0005481065 podman[269059]: 2025-10-11 08:40:09.899159495 +0000 UTC m=+0.188693424 container start ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:40:09 np0005481065 podman[269059]: 2025-10-11 08:40:09.902977564 +0000 UTC m=+0.192511493 container attach ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]: {
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:    "0": [
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:        {
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "devices": [
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "/dev/loop3"
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            ],
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "lv_name": "ceph_lv0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "lv_size": "21470642176",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "name": "ceph_lv0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "tags": {
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.cluster_name": "ceph",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.crush_device_class": "",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.encrypted": "0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.osd_id": "0",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.type": "block",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:                "ceph.vdo": "0"
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            },
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "type": "block",
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "vg_name": "ceph_vg0"
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:        }
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:    ],
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:    "1": [
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:        {
Oct 11 04:40:10 np0005481065 jovial_jemison[269076]:            "devices": [
Oct 11 04:40:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:43 np0005481065 rsyslogd[1003]: imjournal: 202 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 04:40:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:46 np0005481065 podman[269482]: 2025-10-11 08:40:46.812165595 +0000 UTC m=+0.107554643 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 11 04:40:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:48 np0005481065 nova_compute[260935]: 2025-10-11 08:40:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:48 np0005481065 nova_compute[260935]: 2025-10-11 08:40:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:40:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:49 np0005481065 nova_compute[260935]: 2025-10-11 08:40:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:50 np0005481065 nova_compute[260935]: 2025-10-11 08:40:50.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:50 np0005481065 nova_compute[260935]: 2025-10-11 08:40:50.730 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:50 np0005481065 nova_compute[260935]: 2025-10-11 08:40:50.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:51 np0005481065 podman[269502]: 2025-10-11 08:40:51.80012781 +0000 UTC m=+0.096837041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:40:52 np0005481065 nova_compute[260935]: 2025-10-11 08:40:52.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:52 np0005481065 nova_compute[260935]: 2025-10-11 08:40:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:53 np0005481065 nova_compute[260935]: 2025-10-11 08:40:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:53 np0005481065 nova_compute[260935]: 2025-10-11 08:40:53.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:40:53 np0005481065 nova_compute[260935]: 2025-10-11 08:40:53.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:40:53 np0005481065 nova_compute[260935]: 2025-10-11 08:40:53.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:40:53 np0005481065 nova_compute[260935]: 2025-10-11 08:40:53.746 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:40:53 np0005481065 nova_compute[260935]: 2025-10-11 08:40:53.746 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:40:53 np0005481065 podman[269522]: 2025-10-11 08:40:53.818203386 +0000 UTC m=+0.115844837 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:40:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:40:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446437953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.203 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.422 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.425 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.425 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.426 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.510 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.510 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.528 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:40:54
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root']
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:40:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:40:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:40:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144864316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.988 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:40:54 np0005481065 nova_compute[260935]: 2025-10-11 08:40:54.996 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:40:55 np0005481065 nova_compute[260935]: 2025-10-11 08:40:55.027 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:40:55 np0005481065 nova_compute[260935]: 2025-10-11 08:40:55.029 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:40:55 np0005481065 nova_compute[260935]: 2025-10-11 08:40:55.030 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:40:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:56 np0005481065 nova_compute[260935]: 2025-10-11 08:40:56.033 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:56 np0005481065 nova_compute[260935]: 2025-10-11 08:40:56.033 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:40:56 np0005481065 nova_compute[260935]: 2025-10-11 08:40:56.034 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:40:56 np0005481065 nova_compute[260935]: 2025-10-11 08:40:56.050 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:40:56 np0005481065 nova_compute[260935]: 2025-10-11 08:40:56.051 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:40:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:40:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:40:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:41:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:41:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 11 04:41:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 11 04:41:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 11 04:41:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 11 04:41:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 11 04:41:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 11 04:41:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s wr, 0 op/s
Oct 11 04:41:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 11 04:41:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 11 04:41:09 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 11 04:41:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 852 B/s wr, 2 op/s
Oct 11 04:41:09 np0005481065 podman[269595]: 2025-10-11 08:41:09.803197437 +0000 UTC m=+0.093543658 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:41:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 853 B/s wr, 2 op/s
Oct 11 04:41:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.4 MiB/s wr, 40 op/s
Oct 11 04:41:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 11 04:41:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 11 04:41:13 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 11 04:41:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:41:15.172 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:41:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:41:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:41:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:41:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:41:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 38 op/s
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:41:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:16 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 001e3c2a-de09-4d8a-919d-b765d9585dc8 does not exist
Oct 11 04:41:16 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a9227aca-8918-43eb-a9f3-14c58eee229e does not exist
Oct 11 04:41:16 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5c8436ca-bac0-475b-bf7b-c86dceda9969 does not exist
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:41:17 np0005481065 podman[270005]: 2025-10-11 08:41:17.137606624 +0000 UTC m=+0.054502648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:17 np0005481065 podman[270005]: 2025-10-11 08:41:17.359111 +0000 UTC m=+0.276006973 container create ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:41:17 np0005481065 systemd[1]: Started libpod-conmon-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope.
Oct 11 04:41:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:41:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.9 MiB/s wr, 44 op/s
Oct 11 04:41:17 np0005481065 podman[270005]: 2025-10-11 08:41:17.636623854 +0000 UTC m=+0.553519877 container init ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:17 np0005481065 podman[270019]: 2025-10-11 08:41:17.637141638 +0000 UTC m=+0.221627619 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:41:17 np0005481065 podman[270005]: 2025-10-11 08:41:17.648665423 +0000 UTC m=+0.565561396 container start ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:41:17 np0005481065 systemd[1]: libpod-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope: Deactivated successfully.
Oct 11 04:41:17 np0005481065 blissful_yonath[270033]: 167 167
Oct 11 04:41:17 np0005481065 conmon[270033]: conmon ed7ff5053f372c6140d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope/container/memory.events
Oct 11 04:41:17 np0005481065 podman[270005]: 2025-10-11 08:41:17.791872891 +0000 UTC m=+0.708768914 container attach ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:41:17 np0005481065 podman[270005]: 2025-10-11 08:41:17.79326328 +0000 UTC m=+0.710159263 container died ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:41:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 11 04:41:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 11 04:41:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4af2354a333cbcebbc7ce4c77f19e9ea1622bbc65a69c45f7cb3565256e2d297-merged.mount: Deactivated successfully.
Oct 11 04:41:18 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 11 04:41:18 np0005481065 podman[270005]: 2025-10-11 08:41:18.71835603 +0000 UTC m=+1.635252013 container remove ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:41:18 np0005481065 systemd[1]: libpod-conmon-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope: Deactivated successfully.
Oct 11 04:41:18 np0005481065 podman[270067]: 2025-10-11 08:41:18.996460531 +0000 UTC m=+0.088076894 container create 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 04:41:19 np0005481065 podman[270067]: 2025-10-11 08:41:18.95278494 +0000 UTC m=+0.044401303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:19 np0005481065 systemd[1]: Started libpod-conmon-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope.
Oct 11 04:41:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:41:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:19 np0005481065 podman[270067]: 2025-10-11 08:41:19.133605807 +0000 UTC m=+0.225222190 container init 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 04:41:19 np0005481065 podman[270067]: 2025-10-11 08:41:19.148333203 +0000 UTC m=+0.239949556 container start 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:19 np0005481065 podman[270067]: 2025-10-11 08:41:19.161491444 +0000 UTC m=+0.253107807 container attach 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 45 op/s
Oct 11 04:41:20 np0005481065 wonderful_bassi[270084]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:41:20 np0005481065 wonderful_bassi[270084]: --> relative data size: 1.0
Oct 11 04:41:20 np0005481065 wonderful_bassi[270084]: --> All data devices are unavailable
Oct 11 04:41:20 np0005481065 systemd[1]: libpod-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope: Deactivated successfully.
Oct 11 04:41:20 np0005481065 systemd[1]: libpod-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope: Consumed 1.149s CPU time.
Oct 11 04:41:20 np0005481065 podman[270067]: 2025-10-11 08:41:20.34888538 +0000 UTC m=+1.440501743 container died 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:41:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351-merged.mount: Deactivated successfully.
Oct 11 04:41:20 np0005481065 podman[270067]: 2025-10-11 08:41:20.429891004 +0000 UTC m=+1.521507317 container remove 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 04:41:20 np0005481065 systemd[1]: libpod-conmon-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope: Deactivated successfully.
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.294247923 +0000 UTC m=+0.071879808 container create 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:41:21 np0005481065 systemd[1]: Started libpod-conmon-45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7.scope.
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.267764496 +0000 UTC m=+0.045396411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.399107599 +0000 UTC m=+0.176739504 container init 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.409356638 +0000 UTC m=+0.186988513 container start 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.413450883 +0000 UTC m=+0.191082818 container attach 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:41:21 np0005481065 determined_chandrasekhar[270281]: 167 167
Oct 11 04:41:21 np0005481065 systemd[1]: libpod-45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7.scope: Deactivated successfully.
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.417567899 +0000 UTC m=+0.195199774 container died 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d74d173e91f76652c6874ff22f0845afe275dacd65034e3ef5c208bcaf13564e-merged.mount: Deactivated successfully.
Oct 11 04:41:21 np0005481065 podman[270265]: 2025-10-11 08:41:21.469463452 +0000 UTC m=+0.247095327 container remove 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:41:21 np0005481065 systemd[1]: libpod-conmon-45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7.scope: Deactivated successfully.
Oct 11 04:41:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 891 B/s wr, 8 op/s
Oct 11 04:41:21 np0005481065 podman[270304]: 2025-10-11 08:41:21.705368493 +0000 UTC m=+0.066862896 container create 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:41:21 np0005481065 systemd[1]: Started libpod-conmon-169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40.scope.
Oct 11 04:41:21 np0005481065 podman[270304]: 2025-10-11 08:41:21.678172497 +0000 UTC m=+0.039666960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:41:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:21 np0005481065 podman[270304]: 2025-10-11 08:41:21.829372269 +0000 UTC m=+0.190866802 container init 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:41:21 np0005481065 podman[270304]: 2025-10-11 08:41:21.839897306 +0000 UTC m=+0.201391709 container start 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:41:21 np0005481065 podman[270304]: 2025-10-11 08:41:21.843407944 +0000 UTC m=+0.204902347 container attach 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:22 np0005481065 gallant_wing[270321]: {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:    "0": [
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:        {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "devices": [
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "/dev/loop3"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            ],
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_name": "ceph_lv0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_size": "21470642176",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "name": "ceph_lv0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "tags": {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cluster_name": "ceph",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.crush_device_class": "",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.encrypted": "0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osd_id": "0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.type": "block",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.vdo": "0"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            },
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "type": "block",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "vg_name": "ceph_vg0"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:        }
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:    ],
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:    "1": [
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:        {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "devices": [
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "/dev/loop4"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            ],
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_name": "ceph_lv1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_size": "21470642176",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "name": "ceph_lv1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "tags": {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cluster_name": "ceph",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.crush_device_class": "",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.encrypted": "0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osd_id": "1",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.type": "block",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.vdo": "0"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            },
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "type": "block",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "vg_name": "ceph_vg1"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:        }
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:    ],
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:    "2": [
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:        {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "devices": [
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "/dev/loop5"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            ],
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_name": "ceph_lv2",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_size": "21470642176",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "name": "ceph_lv2",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "tags": {
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.cluster_name": "ceph",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.crush_device_class": "",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.encrypted": "0",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osd_id": "2",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.type": "block",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:                "ceph.vdo": "0"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            },
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "type": "block",
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:            "vg_name": "ceph_vg2"
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:        }
Oct 11 04:41:22 np0005481065 gallant_wing[270321]:    ]
Oct 11 04:41:22 np0005481065 gallant_wing[270321]: }
Oct 11 04:41:22 np0005481065 systemd[1]: libpod-169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40.scope: Deactivated successfully.
Oct 11 04:41:22 np0005481065 podman[270304]: 2025-10-11 08:41:22.694521469 +0000 UTC m=+1.056015882 container died 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:41:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504-merged.mount: Deactivated successfully.
Oct 11 04:41:22 np0005481065 podman[270304]: 2025-10-11 08:41:22.772124837 +0000 UTC m=+1.133619230 container remove 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 04:41:22 np0005481065 podman[270330]: 2025-10-11 08:41:22.778790405 +0000 UTC m=+0.081986742 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:41:22 np0005481065 systemd[1]: libpod-conmon-169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40.scope: Deactivated successfully.
Oct 11 04:41:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.532985188 +0000 UTC m=+0.050512055 container create aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:41:23 np0005481065 systemd[1]: Started libpod-conmon-aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8.scope.
Oct 11 04:41:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 723 B/s wr, 7 op/s
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.512547482 +0000 UTC m=+0.030074339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.629591922 +0000 UTC m=+0.147118849 container init aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.636590239 +0000 UTC m=+0.154117096 container start aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.640718856 +0000 UTC m=+0.158245713 container attach aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:23 np0005481065 intelligent_sanderson[270521]: 167 167
Oct 11 04:41:23 np0005481065 systemd[1]: libpod-aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8.scope: Deactivated successfully.
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.648220977 +0000 UTC m=+0.165747844 container died aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-59800077ace1209cff27597bfa56e3730ee513a01a5b441575535d81aa663c40-merged.mount: Deactivated successfully.
Oct 11 04:41:23 np0005481065 podman[270505]: 2025-10-11 08:41:23.69017557 +0000 UTC m=+0.207702407 container remove aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:23 np0005481065 systemd[1]: libpod-conmon-aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8.scope: Deactivated successfully.
Oct 11 04:41:23 np0005481065 podman[270546]: 2025-10-11 08:41:23.897029862 +0000 UTC m=+0.047824970 container create f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:41:23 np0005481065 systemd[1]: Started libpod-conmon-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope.
Oct 11 04:41:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:41:23 np0005481065 podman[270546]: 2025-10-11 08:41:23.877155861 +0000 UTC m=+0.027951019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:41:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:41:23 np0005481065 podman[270546]: 2025-10-11 08:41:23.995760315 +0000 UTC m=+0.146555423 container init f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 04:41:24 np0005481065 podman[270546]: 2025-10-11 08:41:24.009169173 +0000 UTC m=+0.159964291 container start f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:41:24 np0005481065 podman[270546]: 2025-10-11 08:41:24.012720914 +0000 UTC m=+0.163516022 container attach f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:41:24 np0005481065 podman[270560]: 2025-10-11 08:41:24.06970061 +0000 UTC m=+0.127328441 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:41:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 11 04:41:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 11 04:41:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 11 04:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]: {
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "osd_id": 2,
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "type": "bluestore"
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:    },
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "osd_id": 0,
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "type": "bluestore"
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:    },
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "osd_id": 1,
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:        "type": "bluestore"
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]:    }
Oct 11 04:41:25 np0005481065 festive_roentgen[270568]: }
Oct 11 04:41:25 np0005481065 systemd[1]: libpod-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope: Deactivated successfully.
Oct 11 04:41:25 np0005481065 systemd[1]: libpod-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope: Consumed 1.178s CPU time.
Oct 11 04:41:25 np0005481065 podman[270546]: 2025-10-11 08:41:25.191516437 +0000 UTC m=+1.342311585 container died f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:41:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd-merged.mount: Deactivated successfully.
Oct 11 04:41:25 np0005481065 podman[270546]: 2025-10-11 08:41:25.279859688 +0000 UTC m=+1.430654826 container remove f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:41:25 np0005481065 systemd[1]: libpod-conmon-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope: Deactivated successfully.
Oct 11 04:41:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:41:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:41:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2e789098-cf7e-4f0a-86e9-c7c40b0ab2ce does not exist
Oct 11 04:41:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4f78776a-277d-4daa-9640-d2462f34b5c7 does not exist
Oct 11 04:41:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:41:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:41:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 11 04:41:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 11 04:41:27 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 11 04:41:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Oct 11 04:41:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 13 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 04:41:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 13 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 04:41:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 11 04:41:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 11 04:41:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 11 04:41:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 11 04:41:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Oct 11 04:41:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:41:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905036941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:41:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:41:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905036941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:41:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Oct 11 04:41:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Oct 11 04:41:40 np0005481065 podman[270684]: 2025-10-11 08:41:40.786888613 +0000 UTC m=+0.082614441 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:41:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Oct 11 04:41:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 11 04:41:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 11 04:41:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 11 04:41:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 11 04:41:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 11 04:41:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 11 04:41:47 np0005481065 podman[270705]: 2025-10-11 08:41:47.797179373 +0000 UTC m=+0.097711906 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:41:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:48 np0005481065 nova_compute[260935]: 2025-10-11 08:41:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:48 np0005481065 nova_compute[260935]: 2025-10-11 08:41:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:41:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 04:41:49 np0005481065 nova_compute[260935]: 2025-10-11 08:41:49.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 04:41:51 np0005481065 nova_compute[260935]: 2025-10-11 08:41:51.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:52 np0005481065 nova_compute[260935]: 2025-10-11 08:41:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:52 np0005481065 nova_compute[260935]: 2025-10-11 08:41:52.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 584 B/s wr, 6 op/s
Oct 11 04:41:53 np0005481065 podman[270725]: 2025-10-11 08:41:53.796411218 +0000 UTC m=+0.096919223 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:41:54 np0005481065 nova_compute[260935]: 2025-10-11 08:41:54.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:41:54
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Oct 11 04:41:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:41:54 np0005481065 podman[270745]: 2025-10-11 08:41:54.850674211 +0000 UTC m=+0.149583939 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:41:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:41:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6052 writes, 24K keys, 6052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6052 writes, 1073 syncs, 5.64 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 430 writes, 1011 keys, 430 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 430 writes, 192 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:41:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 511 B/s wr, 5 op/s
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:55 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:41:55 np0005481065 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891135953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.227 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.454 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.456 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.457 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.457 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.528 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.529 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:41:56 np0005481065 nova_compute[260935]: 2025-10-11 08:41:56.550 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:41:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:41:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738029934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:41:57 np0005481065 nova_compute[260935]: 2025-10-11 08:41:57.023 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:41:57 np0005481065 nova_compute[260935]: 2025-10-11 08:41:57.031 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:41:57 np0005481065 nova_compute[260935]: 2025-10-11 08:41:57.054 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:41:57 np0005481065 nova_compute[260935]: 2025-10-11 08:41:57.056 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:41:57 np0005481065 nova_compute[260935]: 2025-10-11 08:41:57.056 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:41:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 511 B/s wr, 5 op/s
Oct 11 04:41:58 np0005481065 nova_compute[260935]: 2025-10-11 08:41:58.040 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:41:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:41:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 11 04:42:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:42:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 7156 writes, 28K keys, 7156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7156 writes, 1416 syncs, 5.05 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 444 writes, 1061 keys, 444 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
Interval WAL: 444 writes, 190 syncs, 2.34 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:42:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:42:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:42:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:42:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.3 total, 600.0 interval
Cumulative writes: 6087 writes, 25K keys, 6087 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6087 writes, 1057 syncs, 5.76 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 411 writes, 1113 keys, 411 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
Interval WAL: 411 writes, 179 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:42:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 04:42:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:11 np0005481065 podman[270817]: 2025-10-11 08:42:11.78913775 +0000 UTC m=+0.084441291 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:42:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:42:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:42:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:42:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:42:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:42:15.174 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:42:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:18 np0005481065 podman[270835]: 2025-10-11 08:42:18.798130883 +0000 UTC m=+0.093848327 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:42:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:24 np0005481065 podman[270855]: 2025-10-11 08:42:24.805415934 +0000 UTC m=+0.099282570 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:42:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:25 np0005481065 podman[270901]: 2025-10-11 08:42:25.88317252 +0000 UTC m=+0.170128057 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 04:42:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:42:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:42:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.244118442 +0000 UTC m=+0.064163230 container create 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:42:28 np0005481065 systemd[1]: Started libpod-conmon-338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7.scope.
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.211889173 +0000 UTC m=+0.031934011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.353195097 +0000 UTC m=+0.173239925 container init 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.368104117 +0000 UTC m=+0.188148885 container start 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.372612514 +0000 UTC m=+0.192657292 container attach 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:42:28 np0005481065 lucid_vaughan[271317]: 167 167
Oct 11 04:42:28 np0005481065 systemd[1]: libpod-338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7.scope: Deactivated successfully.
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.37636416 +0000 UTC m=+0.196408928 container died 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:42:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7b5dc300bb48cdaf8cd6984794e305ed43842dfcde5e4b3fea1b4cb4f02b6ecd-merged.mount: Deactivated successfully.
Oct 11 04:42:28 np0005481065 podman[271300]: 2025-10-11 08:42:28.509036631 +0000 UTC m=+0.329081399 container remove 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:42:28 np0005481065 systemd[1]: libpod-conmon-338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7.scope: Deactivated successfully.
Oct 11 04:42:28 np0005481065 podman[271344]: 2025-10-11 08:42:28.731636156 +0000 UTC m=+0.068109201 container create 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 04:42:28 np0005481065 systemd[1]: Started libpod-conmon-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope.
Oct 11 04:42:28 np0005481065 podman[271344]: 2025-10-11 08:42:28.70195697 +0000 UTC m=+0.038430075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:28 np0005481065 podman[271344]: 2025-10-11 08:42:28.837641555 +0000 UTC m=+0.174114600 container init 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:42:28 np0005481065 podman[271344]: 2025-10-11 08:42:28.8619271 +0000 UTC m=+0.198400145 container start 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:42:28 np0005481065 podman[271344]: 2025-10-11 08:42:28.866021695 +0000 UTC m=+0.202494740 container attach 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:42:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]: [
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:    {
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "available": false,
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "ceph_device": false,
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "lsm_data": {},
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "lvs": [],
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "path": "/dev/sr0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "rejected_reasons": [
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "Has a FileSystem",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "Insufficient space (<5GB)"
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        ],
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        "sys_api": {
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "actuators": null,
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "device_nodes": "sr0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "devname": "sr0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "human_readable_size": "482.00 KB",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "id_bus": "ata",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "model": "QEMU DVD-ROM",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "nr_requests": "2",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "parent": "/dev/sr0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "partitions": {},
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "path": "/dev/sr0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "removable": "1",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "rev": "2.5+",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "ro": "0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "rotational": "0",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "sas_address": "",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "sas_device_handle": "",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "scheduler_mode": "mq-deadline",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "sectors": 0,
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "sectorsize": "2048",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "size": 493568.0,
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "support_discard": "2048",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "type": "disk",
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:            "vendor": "QEMU"
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:        }
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]:    }
Oct 11 04:42:30 np0005481065 distracted_swartz[271361]: ]
Oct 11 04:42:30 np0005481065 systemd[1]: libpod-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope: Deactivated successfully.
Oct 11 04:42:30 np0005481065 systemd[1]: libpod-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope: Consumed 1.779s CPU time.
Oct 11 04:42:30 np0005481065 podman[271344]: 2025-10-11 08:42:30.544149466 +0000 UTC m=+1.880622511 container died 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:42:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15-merged.mount: Deactivated successfully.
Oct 11 04:42:30 np0005481065 podman[271344]: 2025-10-11 08:42:30.623222305 +0000 UTC m=+1.959695320 container remove 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 11 04:42:30 np0005481065 systemd[1]: libpod-conmon-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope: Deactivated successfully.
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d3748bc9-0eef-43c3-bfe4-9471c6e3db83 does not exist
Oct 11 04:42:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 85783387-de04-4008-95e0-d0595161b6e7 does not exist
Oct 11 04:42:30 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 634c2799-b7af-404c-b35a-6df6bc50a3bb does not exist
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:42:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.655699594 +0000 UTC m=+0.095828813 container create 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.604882081 +0000 UTC m=+0.045011350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:31 np0005481065 systemd[1]: Started libpod-conmon-28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13.scope.
Oct 11 04:42:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.764438429 +0000 UTC m=+0.204567678 container init 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.775870402 +0000 UTC m=+0.215999621 container start 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.78043393 +0000 UTC m=+0.220563199 container attach 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:42:31 np0005481065 hopeful_lovelace[273740]: 167 167
Oct 11 04:42:31 np0005481065 systemd[1]: libpod-28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13.scope: Deactivated successfully.
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.785139073 +0000 UTC m=+0.225268292 container died 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:42:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c771ab20f310fc4ed9799e9cc8ae1b1ba551249c155486b02b57b8b6a7455edc-merged.mount: Deactivated successfully.
Oct 11 04:42:31 np0005481065 podman[273723]: 2025-10-11 08:42:31.840202275 +0000 UTC m=+0.280331494 container remove 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:42:31 np0005481065 systemd[1]: libpod-conmon-28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13.scope: Deactivated successfully.
Oct 11 04:42:32 np0005481065 podman[273764]: 2025-10-11 08:42:32.086358745 +0000 UTC m=+0.070011685 container create 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:42:32 np0005481065 systemd[1]: Started libpod-conmon-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope.
Oct 11 04:42:32 np0005481065 podman[273764]: 2025-10-11 08:42:32.058470199 +0000 UTC m=+0.042123179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:32 np0005481065 podman[273764]: 2025-10-11 08:42:32.207081559 +0000 UTC m=+0.190734549 container init 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 04:42:32 np0005481065 podman[273764]: 2025-10-11 08:42:32.220225139 +0000 UTC m=+0.203878079 container start 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:42:32 np0005481065 podman[273764]: 2025-10-11 08:42:32.225337613 +0000 UTC m=+0.208990553 container attach 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:42:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:33 np0005481065 gifted_mayer[273780]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:42:33 np0005481065 gifted_mayer[273780]: --> relative data size: 1.0
Oct 11 04:42:33 np0005481065 gifted_mayer[273780]: --> All data devices are unavailable
Oct 11 04:42:33 np0005481065 systemd[1]: libpod-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope: Deactivated successfully.
Oct 11 04:42:33 np0005481065 podman[273764]: 2025-10-11 08:42:33.40351634 +0000 UTC m=+1.387169270 container died 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:42:33 np0005481065 systemd[1]: libpod-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope: Consumed 1.139s CPU time.
Oct 11 04:42:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff-merged.mount: Deactivated successfully.
Oct 11 04:42:33 np0005481065 podman[273764]: 2025-10-11 08:42:33.48621157 +0000 UTC m=+1.469864500 container remove 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:42:33 np0005481065 systemd[1]: libpod-conmon-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope: Deactivated successfully.
Oct 11 04:42:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.328631531 +0000 UTC m=+0.071649221 container create e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:42:34 np0005481065 systemd[1]: Started libpod-conmon-e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb.scope.
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.296945737 +0000 UTC m=+0.039963467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.429782942 +0000 UTC m=+0.172800682 container init e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.440216827 +0000 UTC m=+0.183234507 container start e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.444442056 +0000 UTC m=+0.187459736 container attach e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:42:34 np0005481065 gifted_herschel[273977]: 167 167
Oct 11 04:42:34 np0005481065 systemd[1]: libpod-e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb.scope: Deactivated successfully.
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.447465231 +0000 UTC m=+0.190482911 container died e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:42:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-31300592cb4af761b8e337bac59c1168f2567537f4ac591258eac18c5958ef54-merged.mount: Deactivated successfully.
Oct 11 04:42:34 np0005481065 podman[273961]: 2025-10-11 08:42:34.497398339 +0000 UTC m=+0.240416029 container remove e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:42:34 np0005481065 systemd[1]: libpod-conmon-e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb.scope: Deactivated successfully.
Oct 11 04:42:34 np0005481065 podman[274001]: 2025-10-11 08:42:34.771047324 +0000 UTC m=+0.082351133 container create 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:42:34 np0005481065 systemd[1]: Started libpod-conmon-18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f.scope.
Oct 11 04:42:34 np0005481065 podman[274001]: 2025-10-11 08:42:34.738339682 +0000 UTC m=+0.049643541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:34 np0005481065 podman[274001]: 2025-10-11 08:42:34.890640745 +0000 UTC m=+0.201944594 container init 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:42:34 np0005481065 podman[274001]: 2025-10-11 08:42:34.907958384 +0000 UTC m=+0.219262193 container start 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:42:34 np0005481065 podman[274001]: 2025-10-11 08:42:34.911755441 +0000 UTC m=+0.223059320 container attach 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:42:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]: {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:    "0": [
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:        {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "devices": [
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "/dev/loop3"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            ],
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_name": "ceph_lv0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_size": "21470642176",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "name": "ceph_lv0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "tags": {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cluster_name": "ceph",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.crush_device_class": "",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.encrypted": "0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osd_id": "0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.type": "block",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.vdo": "0"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            },
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "type": "block",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "vg_name": "ceph_vg0"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:        }
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:    ],
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:    "1": [
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:        {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "devices": [
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "/dev/loop4"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            ],
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_name": "ceph_lv1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_size": "21470642176",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "name": "ceph_lv1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "tags": {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cluster_name": "ceph",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.crush_device_class": "",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.encrypted": "0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osd_id": "1",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.type": "block",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.vdo": "0"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            },
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "type": "block",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "vg_name": "ceph_vg1"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:        }
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:    ],
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:    "2": [
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:        {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "devices": [
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "/dev/loop5"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            ],
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_name": "ceph_lv2",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_size": "21470642176",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "name": "ceph_lv2",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "tags": {
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.cluster_name": "ceph",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.crush_device_class": "",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.encrypted": "0",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osd_id": "2",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.type": "block",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:                "ceph.vdo": "0"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            },
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "type": "block",
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:            "vg_name": "ceph_vg2"
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:        }
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]:    ]
Oct 11 04:42:35 np0005481065 sleepy_kapitsa[274018]: }
Oct 11 04:42:35 np0005481065 systemd[1]: libpod-18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f.scope: Deactivated successfully.
Oct 11 04:42:35 np0005481065 podman[274001]: 2025-10-11 08:42:35.731690747 +0000 UTC m=+1.042994586 container died 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:42:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9-merged.mount: Deactivated successfully.
Oct 11 04:42:35 np0005481065 podman[274001]: 2025-10-11 08:42:35.794865748 +0000 UTC m=+1.106169527 container remove 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:42:35 np0005481065 systemd[1]: libpod-conmon-18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f.scope: Deactivated successfully.
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.616968246 +0000 UTC m=+0.056664569 container create aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:42:36 np0005481065 systemd[1]: Started libpod-conmon-aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e.scope.
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.593411712 +0000 UTC m=+0.033108055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.721067851 +0000 UTC m=+0.160764214 container init aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.733429099 +0000 UTC m=+0.173125452 container start aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.738153432 +0000 UTC m=+0.177849795 container attach aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:42:36 np0005481065 awesome_dubinsky[274197]: 167 167
Oct 11 04:42:36 np0005481065 systemd[1]: libpod-aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e.scope: Deactivated successfully.
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.745304974 +0000 UTC m=+0.185001337 container died aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:42:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-031375ea1e86540649da3c424e298af816c00c60f7a53cf5eada4987024032dd-merged.mount: Deactivated successfully.
Oct 11 04:42:36 np0005481065 podman[274181]: 2025-10-11 08:42:36.800186041 +0000 UTC m=+0.239882404 container remove aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:42:36 np0005481065 systemd[1]: libpod-conmon-aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e.scope: Deactivated successfully.
Oct 11 04:42:37 np0005481065 podman[274220]: 2025-10-11 08:42:37.05628539 +0000 UTC m=+0.073413909 container create 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:42:37 np0005481065 systemd[1]: Started libpod-conmon-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope.
Oct 11 04:42:37 np0005481065 podman[274220]: 2025-10-11 08:42:37.023723002 +0000 UTC m=+0.040851591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:42:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:42:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:42:37 np0005481065 podman[274220]: 2025-10-11 08:42:37.16587636 +0000 UTC m=+0.183004899 container init 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:42:37 np0005481065 podman[274220]: 2025-10-11 08:42:37.177486017 +0000 UTC m=+0.194614546 container start 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:42:37 np0005481065 podman[274220]: 2025-10-11 08:42:37.181226293 +0000 UTC m=+0.198354822 container attach 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:42:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:42:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2137733844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:42:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:42:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2137733844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:42:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]: {
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "osd_id": 2,
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "type": "bluestore"
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:    },
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "osd_id": 0,
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "type": "bluestore"
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:    },
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "osd_id": 1,
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:        "type": "bluestore"
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]:    }
Oct 11 04:42:38 np0005481065 sad_northcutt[274237]: }
Oct 11 04:42:38 np0005481065 systemd[1]: libpod-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope: Deactivated successfully.
Oct 11 04:42:38 np0005481065 podman[274220]: 2025-10-11 08:42:38.21542759 +0000 UTC m=+1.232556089 container died 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:42:38 np0005481065 systemd[1]: libpod-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope: Consumed 1.047s CPU time.
Oct 11 04:42:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761-merged.mount: Deactivated successfully.
Oct 11 04:42:38 np0005481065 podman[274220]: 2025-10-11 08:42:38.272696084 +0000 UTC m=+1.289824563 container remove 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 04:42:38 np0005481065 systemd[1]: libpod-conmon-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope: Deactivated successfully.
Oct 11 04:42:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:42:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:42:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:38 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a78175dc-cd1a-4675-b252-4e4d15a426a2 does not exist
Oct 11 04:42:38 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 71d552c1-22ef-46ca-8d88-d9cb558c5aae does not exist
Oct 11 04:42:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:42:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:42 np0005481065 podman[274331]: 2025-10-11 08:42:42.816320282 +0000 UTC m=+0.105582228 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:42:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:49 np0005481065 podman[274353]: 2025-10-11 08:42:49.769701326 +0000 UTC m=+0.067976228 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.752 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 04:42:50 np0005481065 nova_compute[260935]: 2025-10-11 08:42:50.917 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 04:42:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:52 np0005481065 nova_compute[260935]: 2025-10-11 08:42:52.867 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:52 np0005481065 nova_compute[260935]: 2025-10-11 08:42:52.868 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:52 np0005481065 nova_compute[260935]: 2025-10-11 08:42:52.868 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:42:54
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Oct 11 04:42:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:42:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:55 np0005481065 nova_compute[260935]: 2025-10-11 08:42:55.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:55 np0005481065 podman[274373]: 2025-10-11 08:42:55.799645077 +0000 UTC m=+0.097392767 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 04:42:56 np0005481065 podman[274393]: 2025-10-11 08:42:56.860779774 +0000 UTC m=+0.155671980 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:42:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.760 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.761 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.762 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.762 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:42:57 np0005481065 nova_compute[260935]: 2025-10-11 08:42:57.763 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:42:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:42:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:42:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208361886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.226 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.396 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.397 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.397 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.398 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.645 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.645 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.741 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.839 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.840 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.858 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.885 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 04:42:58 np0005481065 nova_compute[260935]: 2025-10-11 08:42:58.903 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:42:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:42:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811804463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.386 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.395 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.415 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.418 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.418 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.419 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.420 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 04:42:59 np0005481065 nova_compute[260935]: 2025-10-11 08:42:59.440 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:42:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:00 np0005481065 nova_compute[260935]: 2025-10-11 08:43:00.433 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:43:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:43:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:13 np0005481065 podman[274465]: 2025-10-11 08:43:13.777220135 +0000 UTC m=+0.078644888 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 04:43:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:43:15.174 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:43:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:43:15.175 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:43:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:43:15.175 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:43:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:20 np0005481065 podman[274486]: 2025-10-11 08:43:20.797742911 +0000 UTC m=+0.084758010 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:43:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:26 np0005481065 podman[274507]: 2025-10-11 08:43:26.810874049 +0000 UTC m=+0.106736710 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 04:43:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 426 B/s wr, 3 op/s
Oct 11 04:43:27 np0005481065 podman[274527]: 2025-10-11 08:43:27.859252285 +0000 UTC m=+0.148767305 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:43:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 11 04:43:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 11 04:43:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 11 04:43:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 511 B/s wr, 4 op/s
Oct 11 04:43:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 11 04:43:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 11 04:43:30 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 11 04:43:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 639 B/s wr, 5 op/s
Oct 11 04:43:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.1 MiB/s wr, 137 op/s
Oct 11 04:43:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 5.1 MiB/s wr, 131 op/s
Oct 11 04:43:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:43:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617872054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:43:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:43:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617872054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:43:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.7 MiB/s wr, 120 op/s
Oct 11 04:43:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:43:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 49267909-4f64-4322-8748-6d3dda31b653 does not exist
Oct 11 04:43:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fd16eafb-385e-473a-b3db-8f9595d06e74 does not exist
Oct 11 04:43:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ad1c7374-3477-4534-a91a-62a4a36c493d does not exist
Oct 11 04:43:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 4.1 MiB/s wr, 105 op/s
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:43:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.578088769 +0000 UTC m=+0.054603200 container create bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:43:40 np0005481065 systemd[1]: Started libpod-conmon-bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec.scope.
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.554927896 +0000 UTC m=+0.031442357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.666205) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220666248, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2098, "num_deletes": 254, "total_data_size": 3451779, "memory_usage": 3516800, "flush_reason": "Manual Compaction"}
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220689888, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3362279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20937, "largest_seqno": 23034, "table_properties": {"data_size": 3352702, "index_size": 6071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19141, "raw_average_key_size": 20, "raw_value_size": 3333633, "raw_average_value_size": 3512, "num_data_blocks": 273, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172009, "oldest_key_time": 1760172009, "file_creation_time": 1760172220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 23743 microseconds, and 11930 cpu microseconds.
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.689946) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3362279 bytes OK
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.689976) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.691910) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.691932) EVENT_LOG_v1 {"time_micros": 1760172220691925, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.691956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3442995, prev total WAL file size 3442995, number of live WAL files 2.
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.692025011 +0000 UTC m=+0.168539512 container init bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.693433) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3283KB)], [50(7318KB)]
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220693475, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10855992, "oldest_snapshot_seqno": -1}
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.705923393 +0000 UTC m=+0.182437854 container start bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.710018069 +0000 UTC m=+0.186532570 container attach bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 04:43:40 np0005481065 kind_satoshi[274843]: 167 167
Oct 11 04:43:40 np0005481065 systemd[1]: libpod-bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec.scope: Deactivated successfully.
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.716917433 +0000 UTC m=+0.193431894 container died bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4742 keys, 9118470 bytes, temperature: kUnknown
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220753839, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9118470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9084237, "index_size": 21259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11909, "raw_key_size": 116137, "raw_average_key_size": 24, "raw_value_size": 8996032, "raw_average_value_size": 1897, "num_data_blocks": 894, "num_entries": 4742, "num_filter_entries": 4742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.754225) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9118470 bytes
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.755988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 150.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5263, records dropped: 521 output_compression: NoCompression
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.756018) EVENT_LOG_v1 {"time_micros": 1760172220756002, "job": 26, "event": "compaction_finished", "compaction_time_micros": 60481, "compaction_time_cpu_micros": 41609, "output_level": 6, "num_output_files": 1, "total_output_size": 9118470, "num_input_records": 5263, "num_output_records": 4742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220758045, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 11 04:43:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f2379ecc6515f8f0ab5ffc88db1491372f5385271dbb1a8219182b3490e72139-merged.mount: Deactivated successfully.
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220761274, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.693342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:40 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:43:40 np0005481065 podman[274827]: 2025-10-11 08:43:40.777749138 +0000 UTC m=+0.254263589 container remove bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:43:40 np0005481065 systemd[1]: libpod-conmon-bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec.scope: Deactivated successfully.
Oct 11 04:43:41 np0005481065 podman[274868]: 2025-10-11 08:43:41.008879484 +0000 UTC m=+0.055567017 container create e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:43:41 np0005481065 systemd[1]: Started libpod-conmon-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope.
Oct 11 04:43:41 np0005481065 podman[274868]: 2025-10-11 08:43:40.98070104 +0000 UTC m=+0.027388623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:43:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:41 np0005481065 podman[274868]: 2025-10-11 08:43:41.129950058 +0000 UTC m=+0.176637631 container init e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:43:41 np0005481065 podman[274868]: 2025-10-11 08:43:41.143412187 +0000 UTC m=+0.190099680 container start e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:43:41 np0005481065 podman[274868]: 2025-10-11 08:43:41.147593805 +0000 UTC m=+0.194281338 container attach e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:43:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Oct 11 04:43:42 np0005481065 admiring_hugle[274883]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:43:42 np0005481065 admiring_hugle[274883]: --> relative data size: 1.0
Oct 11 04:43:42 np0005481065 admiring_hugle[274883]: --> All data devices are unavailable
Oct 11 04:43:42 np0005481065 systemd[1]: libpod-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope: Deactivated successfully.
Oct 11 04:43:42 np0005481065 podman[274868]: 2025-10-11 08:43:42.307080464 +0000 UTC m=+1.353768017 container died e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 04:43:42 np0005481065 systemd[1]: libpod-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope: Consumed 1.120s CPU time.
Oct 11 04:43:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02-merged.mount: Deactivated successfully.
Oct 11 04:43:42 np0005481065 podman[274868]: 2025-10-11 08:43:42.378075865 +0000 UTC m=+1.424763368 container remove e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:43:42 np0005481065 systemd[1]: libpod-conmon-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope: Deactivated successfully.
Oct 11 04:43:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.365236226 +0000 UTC m=+0.071991821 container create cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:43:43 np0005481065 systemd[1]: Started libpod-conmon-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope.
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.338520303 +0000 UTC m=+0.045275968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.470076582 +0000 UTC m=+0.176832167 container init cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.478025916 +0000 UTC m=+0.184781481 container start cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.481280258 +0000 UTC m=+0.188035883 container attach cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:43:43 np0005481065 heuristic_williamson[275082]: 167 167
Oct 11 04:43:43 np0005481065 systemd[1]: libpod-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope: Deactivated successfully.
Oct 11 04:43:43 np0005481065 conmon[275082]: conmon cc0cb479bd24efd61c52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope/container/memory.events
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.486974038 +0000 UTC m=+0.193729603 container died cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 04:43:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-404dc7a167aa685c9b38dcbde2318e023005e1eced96a4d60f73799083d07c5c-merged.mount: Deactivated successfully.
Oct 11 04:43:43 np0005481065 podman[275065]: 2025-10-11 08:43:43.533303114 +0000 UTC m=+0.240058679 container remove cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:43:43 np0005481065 systemd[1]: libpod-conmon-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope: Deactivated successfully.
Oct 11 04:43:43 np0005481065 nova_compute[260935]: 2025-10-11 08:43:43.660 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 MiB/s wr, 87 op/s
Oct 11 04:43:43 np0005481065 podman[275106]: 2025-10-11 08:43:43.749245062 +0000 UTC m=+0.050440903 container create 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:43:43 np0005481065 systemd[1]: Started libpod-conmon-01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441.scope.
Oct 11 04:43:43 np0005481065 podman[275106]: 2025-10-11 08:43:43.728975351 +0000 UTC m=+0.030171212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:43:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:43 np0005481065 podman[275106]: 2025-10-11 08:43:43.882419657 +0000 UTC m=+0.183615498 container init 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:43:43 np0005481065 podman[275106]: 2025-10-11 08:43:43.897026609 +0000 UTC m=+0.198222450 container start 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 04:43:43 np0005481065 podman[275106]: 2025-10-11 08:43:43.900179718 +0000 UTC m=+0.201375559 container attach 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:43:43 np0005481065 podman[275124]: 2025-10-11 08:43:43.911676992 +0000 UTC m=+0.087080816 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:43:44 np0005481065 competent_carver[275123]: {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:    "0": [
Oct 11 04:43:44 np0005481065 competent_carver[275123]:        {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "devices": [
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "/dev/loop3"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            ],
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_name": "ceph_lv0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_size": "21470642176",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "name": "ceph_lv0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "tags": {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cluster_name": "ceph",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.crush_device_class": "",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.encrypted": "0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osd_id": "0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.type": "block",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.vdo": "0"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            },
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "type": "block",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "vg_name": "ceph_vg0"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:        }
Oct 11 04:43:44 np0005481065 competent_carver[275123]:    ],
Oct 11 04:43:44 np0005481065 competent_carver[275123]:    "1": [
Oct 11 04:43:44 np0005481065 competent_carver[275123]:        {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "devices": [
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "/dev/loop4"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            ],
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_name": "ceph_lv1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_size": "21470642176",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "name": "ceph_lv1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "tags": {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cluster_name": "ceph",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.crush_device_class": "",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.encrypted": "0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osd_id": "1",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.type": "block",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.vdo": "0"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            },
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "type": "block",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "vg_name": "ceph_vg1"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:        }
Oct 11 04:43:44 np0005481065 competent_carver[275123]:    ],
Oct 11 04:43:44 np0005481065 competent_carver[275123]:    "2": [
Oct 11 04:43:44 np0005481065 competent_carver[275123]:        {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "devices": [
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "/dev/loop5"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            ],
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_name": "ceph_lv2",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_size": "21470642176",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "name": "ceph_lv2",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "tags": {
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.cluster_name": "ceph",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.crush_device_class": "",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.encrypted": "0",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osd_id": "2",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.type": "block",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:                "ceph.vdo": "0"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            },
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "type": "block",
Oct 11 04:43:44 np0005481065 competent_carver[275123]:            "vg_name": "ceph_vg2"
Oct 11 04:43:44 np0005481065 competent_carver[275123]:        }
Oct 11 04:43:44 np0005481065 competent_carver[275123]:    ]
Oct 11 04:43:44 np0005481065 competent_carver[275123]: }
Oct 11 04:43:44 np0005481065 systemd[1]: libpod-01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441.scope: Deactivated successfully.
Oct 11 04:43:44 np0005481065 podman[275106]: 2025-10-11 08:43:44.703801654 +0000 UTC m=+1.004997505 container died 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:43:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282-merged.mount: Deactivated successfully.
Oct 11 04:43:44 np0005481065 podman[275106]: 2025-10-11 08:43:44.775134095 +0000 UTC m=+1.076329976 container remove 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:43:44 np0005481065 systemd[1]: libpod-conmon-01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441.scope: Deactivated successfully.
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.652690125 +0000 UTC m=+0.048808327 container create 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:43:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:45 np0005481065 systemd[1]: Started libpod-conmon-6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6.scope.
Oct 11 04:43:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.634806611 +0000 UTC m=+0.030924833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.748456555 +0000 UTC m=+0.144574787 container init 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.757098419 +0000 UTC m=+0.153216621 container start 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.760222037 +0000 UTC m=+0.156340259 container attach 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:43:45 np0005481065 naughty_lamport[275320]: 167 167
Oct 11 04:43:45 np0005481065 systemd[1]: libpod-6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6.scope: Deactivated successfully.
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.765613269 +0000 UTC m=+0.161731471 container died 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:43:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-43b6260fea5c0123419133acc677082a020a85c9dd393ca591c37ea3183f0108-merged.mount: Deactivated successfully.
Oct 11 04:43:45 np0005481065 podman[275303]: 2025-10-11 08:43:45.798465945 +0000 UTC m=+0.194584147 container remove 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:43:45 np0005481065 systemd[1]: libpod-conmon-6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6.scope: Deactivated successfully.
Oct 11 04:43:46 np0005481065 podman[275346]: 2025-10-11 08:43:46.043108952 +0000 UTC m=+0.064836069 container create 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:43:46 np0005481065 systemd[1]: Started libpod-conmon-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope.
Oct 11 04:43:46 np0005481065 podman[275346]: 2025-10-11 08:43:46.018779376 +0000 UTC m=+0.040506473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:43:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:43:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:43:46 np0005481065 podman[275346]: 2025-10-11 08:43:46.150437848 +0000 UTC m=+0.172164925 container init 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:43:46 np0005481065 podman[275346]: 2025-10-11 08:43:46.164759602 +0000 UTC m=+0.186486689 container start 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:43:46 np0005481065 podman[275346]: 2025-10-11 08:43:46.170315089 +0000 UTC m=+0.192042176 container attach 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]: {
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "osd_id": 2,
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "type": "bluestore"
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:    },
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "osd_id": 0,
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "type": "bluestore"
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:    },
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "osd_id": 1,
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:        "type": "bluestore"
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]:    }
Oct 11 04:43:47 np0005481065 fervent_keldysh[275363]: }
Oct 11 04:43:47 np0005481065 systemd[1]: libpod-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope: Deactivated successfully.
Oct 11 04:43:47 np0005481065 podman[275346]: 2025-10-11 08:43:47.211350159 +0000 UTC m=+1.233077276 container died 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:43:47 np0005481065 systemd[1]: libpod-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope: Consumed 1.056s CPU time.
Oct 11 04:43:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4-merged.mount: Deactivated successfully.
Oct 11 04:43:47 np0005481065 podman[275346]: 2025-10-11 08:43:47.288063341 +0000 UTC m=+1.309790458 container remove 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:43:47 np0005481065 systemd[1]: libpod-conmon-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope: Deactivated successfully.
Oct 11 04:43:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:43:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:43:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:43:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:43:47 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f2e51c90-385d-4899-9f88-d3df5d71e3cd does not exist
Oct 11 04:43:47 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev da64d99a-cbe4-45f6-8f78-9ed06aec6743 does not exist
Oct 11 04:43:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:48 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:43:48 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:43:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:50 np0005481065 nova_compute[260935]: 2025-10-11 08:43:50.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:50 np0005481065 nova_compute[260935]: 2025-10-11 08:43:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:50 np0005481065 nova_compute[260935]: 2025-10-11 08:43:50.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:43:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:51 np0005481065 podman[275459]: 2025-10-11 08:43:51.784350674 +0000 UTC m=+0.084758081 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.424 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.425 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.567 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.856 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.857 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.867 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:43:52 np0005481065 nova_compute[260935]: 2025-10-11 08:43:52.867 2 INFO nova.compute.claims [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.044 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:43:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:43:53.057 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:43:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:43:53.059 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:43:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:43:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697716668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.494 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.502 2 DEBUG nova.compute.provider_tree [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.525 2 DEBUG nova.scheduler.client.report [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:43:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.687 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.689 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.867 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 11 04:43:53 np0005481065 nova_compute[260935]: 2025-10-11 08:43:53.958 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.145 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.396 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.398 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.399 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Creating image(s)#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.474 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.511 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.546 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.551 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.552 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:43:54 np0005481065 nova_compute[260935]: 2025-10-11 08:43:54.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:43:54
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'vms']
Oct 11 04:43:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:43:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:43:55 np0005481065 nova_compute[260935]: 2025-10-11 08:43:55.761 2 DEBUG nova.virt.libvirt.imagebackend [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/03f2fef0-11c0-48e1-b3a0-3e02d898739e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/03f2fef0-11c0-48e1-b3a0-3e02d898739e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 04:43:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:43:56.062 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:43:56 np0005481065 nova_compute[260935]: 2025-10-11 08:43:56.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 6 op/s
Oct 11 04:43:57 np0005481065 nova_compute[260935]: 2025-10-11 08:43:57.739 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:43:57 np0005481065 podman[275557]: 2025-10-11 08:43:57.802205585 +0000 UTC m=+0.101259516 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:43:57 np0005481065 nova_compute[260935]: 2025-10-11 08:43:57.831 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:43:57 np0005481065 nova_compute[260935]: 2025-10-11 08:43:57.834 2 DEBUG nova.virt.images [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] 03f2fef0-11c0-48e1-b3a0-3e02d898739e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 11 04:43:57 np0005481065 nova_compute[260935]: 2025-10-11 08:43:57.837 2 DEBUG nova.privsep.utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 11 04:43:57 np0005481065 nova_compute[260935]: 2025-10-11 08:43:57.837 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.051 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.057 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:43:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.139 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.142 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.172 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.176 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:43:58 np0005481065 nova_compute[260935]: 2025-10-11 08:43:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 11 04:43:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 11 04:43:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 11 04:43:58 np0005481065 podman[275628]: 2025-10-11 08:43:58.819710061 +0000 UTC m=+0.123665137 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:43:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.727 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.758 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.758 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.759 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.759 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:43:59 np0005481065 nova_compute[260935]: 2025-10-11 08:43:59.760 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:43:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 11 04:43:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 11 04:43:59 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.102 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.170 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] resizing rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.274 2 DEBUG nova.objects.instance [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lazy-loading 'migration_context' on Instance uuid dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.296 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.297 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Ensure instance console log exists: /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.298 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.298 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.298 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.300 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:44:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580541415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.306 2 WARNING nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.311 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.312 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.315 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.315 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.316 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.316 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.317 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.317 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.317 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.318 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.318 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.318 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.323 2 DEBUG nova.privsep.utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.324 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.346 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.574 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.577 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.646 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.646 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.647 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.721 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105123448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.779 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.822 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:00.828 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585505457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.154 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.162 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.210 2 ERROR nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [req-0fdb4dc7-b510-40da-8f31-8fd40c7e19bb] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID ead2f521-4d5d-46d9-864c-1aac19134114.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-0fdb4dc7-b510-40da-8f31-8fd40c7e19bb"}]}#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.232 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 04:44:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501685134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.257 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.257 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.272 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.275 2 DEBUG nova.objects.instance [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lazy-loading 'pci_devices' on Instance uuid dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.278 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.298 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <uuid>dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d</uuid>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <name>instance-00000001</name>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:name>tempest-AutoAllocateNetworkTest-server-1273474145</nova:name>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:44:00</nova:creationTime>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:user uuid="324b88029ca649458a6186fedaec68e8">tempest-AutoAllocateNetworkTest-1694481392-project-member</nova:user>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <nova:project uuid="3132c1b9f1b741a98560770caf557fdc">tempest-AutoAllocateNetworkTest-1694481392</nova:project>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <entry name="serial">dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d</entry>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <entry name="uuid">dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d</entry>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/console.log" append="off"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:44:01 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:44:01 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:44:01 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:44:01 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.317 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.359 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.360 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.361 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Using config drive#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.390 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.399 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Oct 11 04:44:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466260857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.850 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.858 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.897 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updated inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.897 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.898 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.921 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:44:01 np0005481065 nova_compute[260935]: 2025-10-11 08:44:01.922 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.191 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Creating config drive at /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.199 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0kgibxgi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.355 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0kgibxgi" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.392 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.397 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.577 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:02 np0005481065 nova_compute[260935]: 2025-10-11 08:44:02.578 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deleting local config drive /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config because it was imported into RBD.#033[00m
Oct 11 04:44:02 np0005481065 systemd[1]: Starting libvirt secret daemon...
Oct 11 04:44:02 np0005481065 systemd[1]: Started libvirt secret daemon.
Oct 11 04:44:02 np0005481065 systemd-machined[215705]: New machine qemu-1-instance-00000001.
Oct 11 04:44:02 np0005481065 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 11 04:44:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.012 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172244.0114164, dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.023 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.024 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.028 2 INFO nova.virt.libvirt.driver [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance spawned successfully.#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.029 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.080 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.086 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.087 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.088 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.088 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.090 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.090 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.129 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.129 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172244.022236, dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.130 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] VM Started (Lifecycle Event)#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.166 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.174 2 INFO nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 9.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.176 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.181 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.215 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.248 2 INFO nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 11.44 seconds to build instance.#033[00m
Oct 11 04:44:04 np0005481065 nova_compute[260935]: 2025-10-11 08:44:04.270 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003460606319593671 of space, bias 1.0, pg target 0.10381818958781013 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:44:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:44:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.025 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.026 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.027 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.027 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.028 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.029 2 INFO nova.compute.manager [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Terminating instance#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.031 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "refresh_cache-dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.031 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquired lock "refresh_cache-dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.031 2 DEBUG nova.network.neutron [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.253 2 DEBUG nova.network.neutron [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.566 2 DEBUG nova.network.neutron [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.581 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Releasing lock "refresh_cache-dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.582 2 DEBUG nova.compute.manager [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:44:06 np0005481065 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 11 04:44:06 np0005481065 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3.869s CPU time.
Oct 11 04:44:06 np0005481065 systemd-machined[215705]: Machine qemu-1-instance-00000001 terminated.
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.806 2 INFO nova.virt.libvirt.driver [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance destroyed successfully.#033[00m
Oct 11 04:44:06 np0005481065 nova_compute[260935]: 2025-10-11 08:44:06.806 2 DEBUG nova.objects.instance [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lazy-loading 'resources' on Instance uuid dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.191 2 INFO nova.virt.libvirt.driver [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deleting instance files /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_del#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.192 2 INFO nova.virt.libvirt.driver [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deletion of /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_del complete#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.258 2 DEBUG nova.virt.libvirt.host [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.259 2 INFO nova.virt.libvirt.host [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] UEFI support detected#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.260 2 INFO nova.compute.manager [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.261 2 DEBUG oslo.service.loopingcall [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.261 2 DEBUG nova.compute.manager [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.261 2 DEBUG nova.network.neutron [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.616 2 DEBUG nova.network.neutron [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.650 2 DEBUG nova.network.neutron [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.669 2 INFO nova.compute.manager [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 0.41 seconds to deallocate network for instance.#033[00m
Oct 11 04:44:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 171 op/s
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.712 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.712 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:07 np0005481065 nova_compute[260935]: 2025-10-11 08:44:07.834 2 DEBUG oslo_concurrency.processutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 11 04:44:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 11 04:44:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 11 04:44:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127591540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:08 np0005481065 nova_compute[260935]: 2025-10-11 08:44:08.335 2 DEBUG oslo_concurrency.processutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:08 np0005481065 nova_compute[260935]: 2025-10-11 08:44:08.341 2 DEBUG nova.compute.provider_tree [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:44:08 np0005481065 nova_compute[260935]: 2025-10-11 08:44:08.356 2 DEBUG nova.scheduler.client.report [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:44:08 np0005481065 nova_compute[260935]: 2025-10-11 08:44:08.375 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:08 np0005481065 nova_compute[260935]: 2025-10-11 08:44:08.399 2 INFO nova.scheduler.client.report [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Deleted allocations for instance dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d#033[00m
Oct 11 04:44:08 np0005481065 nova_compute[260935]: 2025-10-11 08:44:08.466 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 11 04:44:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 11 04:44:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 11 04:44:14 np0005481065 podman[276031]: 2025-10-11 08:44:14.826773385 +0000 UTC m=+0.092491019 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:44:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:15.175 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:15.176 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:15.176 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 11 04:44:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 11 04:44:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 88 B/s rd, 0 B/s wr, 0 op/s
Oct 11 04:44:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 04:44:21 np0005481065 nova_compute[260935]: 2025-10-11 08:44:21.805 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172246.8035655, dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:44:21 np0005481065 nova_compute[260935]: 2025-10-11 08:44:21.806 2 INFO nova.compute.manager [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:44:21 np0005481065 nova_compute[260935]: 2025-10-11 08:44:21.904 2 DEBUG nova.compute.manager [None req-a681d7d1-255b-4b48-b5a9-ed98ee233506 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:44:22 np0005481065 podman[276050]: 2025-10-11 08:44:22.823866794 +0000 UTC m=+0.111456533 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:44:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 04:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:28 np0005481065 podman[276070]: 2025-10-11 08:44:28.805426903 +0000 UTC m=+0.097406497 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:44:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:29 np0005481065 podman[276090]: 2025-10-11 08:44:29.840753262 +0000 UTC m=+0.135360297 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:44:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:44:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182075254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:44:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:44:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182075254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:44:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.084 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.085 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.136 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.256 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.256 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.268 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.268 2 INFO nova.compute.claims [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.397 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253931390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.963 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.972 2 DEBUG nova.compute.provider_tree [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:44:38 np0005481065 nova_compute[260935]: 2025-10-11 08:44:38.991 2 DEBUG nova.scheduler.client.report [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.017 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.019 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.078 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.078 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.268 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.295 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.383 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.384 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.385 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Creating image(s)#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.417 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.451 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.482 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.486 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.532 2 WARNING oslo_policy.policy [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.533 2 WARNING oslo_policy.policy [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.538 2 DEBUG nova.policy [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '113798b24d1e4a9e91db94214d254ea9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e6573eaf6684f1c99a553fd46667a67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.573 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.573 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.574 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.574 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.598 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.601 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 aac0adcc-167d-400a-a04a-93767356cc9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.906 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 aac0adcc-167d-400a-a04a-93767356cc9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:39 np0005481065 nova_compute[260935]: 2025-10-11 08:44:39.987 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] resizing rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:44:40 np0005481065 nova_compute[260935]: 2025-10-11 08:44:40.103 2 DEBUG nova.objects.instance [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'migration_context' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:40 np0005481065 nova_compute[260935]: 2025-10-11 08:44:40.127 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:44:40 np0005481065 nova_compute[260935]: 2025-10-11 08:44:40.128 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Ensure instance console log exists: /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:44:40 np0005481065 nova_compute[260935]: 2025-10-11 08:44:40.128 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:40 np0005481065 nova_compute[260935]: 2025-10-11 08:44:40.129 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:40 np0005481065 nova_compute[260935]: 2025-10-11 08:44:40.129 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:41 np0005481065 nova_compute[260935]: 2025-10-11 08:44:41.500 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Successfully created port: 6e75116e-1034-4d1a-8320-6755ee57c51f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:44:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:44:42 np0005481065 nova_compute[260935]: 2025-10-11 08:44:42.476 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Successfully updated port: 6e75116e-1034-4d1a-8320-6755ee57c51f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:44:42 np0005481065 nova_compute[260935]: 2025-10-11 08:44:42.511 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:44:42 np0005481065 nova_compute[260935]: 2025-10-11 08:44:42.512 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:44:42 np0005481065 nova_compute[260935]: 2025-10-11 08:44:42.512 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:44:42 np0005481065 nova_compute[260935]: 2025-10-11 08:44:42.792 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:44:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:43 np0005481065 nova_compute[260935]: 2025-10-11 08:44:43.317 2 DEBUG nova.compute.manager [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:44:43 np0005481065 nova_compute[260935]: 2025-10-11 08:44:43.318 2 DEBUG nova.compute.manager [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing instance network info cache due to event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:44:43 np0005481065 nova_compute[260935]: 2025-10-11 08:44:43.319 2 DEBUG oslo_concurrency.lockutils [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:44:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.076 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.100 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.100 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance network_info: |[{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.101 2 DEBUG oslo_concurrency.lockutils [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.101 2 DEBUG nova.network.neutron [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.104 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start _get_guest_xml network_info=[{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.108 2 WARNING nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.113 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.114 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.120 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.121 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.122 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.123 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:44:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1952576325',id=25,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1549572389',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.124 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.124 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.125 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.125 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.125 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.126 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.126 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.127 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.127 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.128 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.135 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.251 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "10e2549e-21d4-44fb-acbf-9104ec32970f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.251 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.278 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.352 2 DEBUG oslo_concurrency.processutils [None req-7b6ccb2f-f791-4a53-bf48-6ac97096ba5f 1ae5b4d76e5b4b87b3371b1af07b4b86 e3ee64d88df445bf9d54ac52ea80e580 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.393 2 DEBUG oslo_concurrency.processutils [None req-7b6ccb2f-f791-4a53-bf48-6ac97096ba5f 1ae5b4d76e5b4b87b3371b1af07b4b86 e3ee64d88df445bf9d54ac52ea80e580 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.419 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.420 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.427 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.428 2 INFO nova.compute.claims [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.580 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642061443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.601 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.622 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:44 np0005481065 nova_compute[260935]: 2025-10-11 08:44:44.625 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329841632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1463306307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.074 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.084 2 DEBUG nova.compute.provider_tree [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.090 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.092 2 DEBUG nova.virt.libvirt.vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-189754419',id=2,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-bh0dixv5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:44:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=aac0adcc-167d-400a-a04a-93767356cc9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.093 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.094 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.098 2 DEBUG nova.objects.instance [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'pci_devices' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.114 2 DEBUG nova.scheduler.client.report [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.152 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <uuid>aac0adcc-167d-400a-a04a-93767356cc9c</uuid>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <name>instance-00000002</name>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-189754419</nova:name>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:44:44</nova:creationTime>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1549572389">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:user uuid="113798b24d1e4a9e91db94214d254ea9">tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member</nova:user>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:project uuid="4e6573eaf6684f1c99a553fd46667a67">tempest-ServersWithSpecificFlavorTestJSON-1781130606</nova:project>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <nova:port uuid="6e75116e-1034-4d1a-8320-6755ee57c51f">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <entry name="serial">aac0adcc-167d-400a-a04a-93767356cc9c</entry>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <entry name="uuid">aac0adcc-167d-400a-a04a-93767356cc9c</entry>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/aac0adcc-167d-400a-a04a-93767356cc9c_disk">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/aac0adcc-167d-400a-a04a-93767356cc9c_disk.config">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:14:7b:bc"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <target dev="tap6e75116e-10"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/console.log" append="off"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:44:45 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:44:45 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:44:45 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:44:45 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.154 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Preparing to wait for external event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.155 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.155 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.156 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.157 2 DEBUG nova.virt.libvirt.vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-189754419',id=2,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-bh0dixv5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:44:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=aac0adcc-167d-400a-a04a-93767356cc9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.158 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.159 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.159 2 DEBUG os_vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.203 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.204 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.216 2 DEBUG ovsdbapp.backend.ovs_idl [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.216 2 DEBUG ovsdbapp.backend.ovs_idl [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.216 2 DEBUG ovsdbapp.backend.ovs_idl [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.233 2 INFO oslo.privsep.daemon [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp05yxqiq3/privsep.sock']#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.262 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.263 2 DEBUG nova.network.neutron [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.282 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.301 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.525 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.526 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.527 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Creating image(s)#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.550 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.577 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.600 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.605 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.688 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.690 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.691 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.692 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.726 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.732 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 10e2549e-21d4-44fb-acbf-9104ec32970f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.772 2 DEBUG nova.network.neutron [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:44:45 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.773 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:44:45 np0005481065 podman[276449]: 2025-10-11 08:44:45.785342446 +0000 UTC m=+0.082607550 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.065 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 10e2549e-21d4-44fb-acbf-9104ec32970f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.066 2 INFO oslo.privsep.daemon [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.902 1319 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.915 1319 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.920 1319 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:45.920 1319 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1319#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.133 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] resizing rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.229 2 DEBUG nova.network.neutron [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updated VIF entry in instance network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.230 2 DEBUG nova.network.neutron [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.236 2 DEBUG nova.objects.instance [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'migration_context' on Instance uuid 10e2549e-21d4-44fb-acbf-9104ec32970f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.258 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.259 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Ensure instance console log exists: /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.259 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.260 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.260 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.261 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.262 2 DEBUG oslo_concurrency.lockutils [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.265 2 WARNING nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.269 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.270 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.274 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.274 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.274 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.275 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.275 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.275 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.279 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e75116e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e75116e-10, col_values=(('external_ids', {'iface-id': '6e75116e-1034-4d1a-8320-6755ee57c51f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:7b:bc', 'vm-uuid': 'aac0adcc-167d-400a-a04a-93767356cc9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:46 np0005481065 NetworkManager[44960]: <info>  [1760172286.4687] manager: (tap6e75116e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.477 2 INFO os_vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10')#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.646 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.647 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.648 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No VIF found with MAC fa:16:3e:14:7b:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.649 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Using config drive#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.683 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972186408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.745 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.776 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:46 np0005481065 nova_compute[260935]: 2025-10-11 08:44:46.781 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150741359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.225 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.228 2 DEBUG nova.objects.instance [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'pci_devices' on Instance uuid 10e2549e-21d4-44fb-acbf-9104ec32970f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.247 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <uuid>10e2549e-21d4-44fb-acbf-9104ec32970f</uuid>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <name>instance-00000003</name>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:name>tempest-LiveMigrationNegativeTest-server-846905837</nova:name>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:44:46</nova:creationTime>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:user uuid="b1cf6609831e4cd3b6261e999c3c068e">tempest-LiveMigrationNegativeTest-1774903191-project-member</nova:user>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <nova:project uuid="507d5ff420164426beedc0ce7977570a">tempest-LiveMigrationNegativeTest-1774903191</nova:project>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <entry name="serial">10e2549e-21d4-44fb-acbf-9104ec32970f</entry>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <entry name="uuid">10e2549e-21d4-44fb-acbf-9104ec32970f</entry>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/10e2549e-21d4-44fb-acbf-9104ec32970f_disk">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/console.log" append="off"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:44:47 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:44:47 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:44:47 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:44:47 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.260 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Creating config drive at /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config#033[00m
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.274 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0j1acf0s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.352 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.354 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.355 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Using config drive
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.391 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.422 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0j1acf0s" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.447 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.452 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config aac0adcc-167d-400a-a04a-93767356cc9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.611 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config aac0adcc-167d-400a-a04a-93767356cc9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.613 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deleting local config drive /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config because it was imported into RBD.
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.667 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Creating config drive at /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.673 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0y8helu4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:44:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 04:44:47 np0005481065 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 11 04:44:47 np0005481065 kernel: tap6e75116e-10: entered promiscuous mode
Oct 11 04:44:47 np0005481065 NetworkManager[44960]: <info>  [1760172287.7113] manager: (tap6e75116e-10): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct 11 04:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:44:47Z|00027|binding|INFO|Claiming lport 6e75116e-1034-4d1a-8320-6755ee57c51f for this chassis.
Oct 11 04:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:44:47Z|00028|binding|INFO|6e75116e-1034-4d1a-8320-6755ee57c51f: Claiming fa:16:3e:14:7b:bc 10.100.0.9
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:44:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.729 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:7b:bc 10.100.0.9'], port_security=['fa:16:3e:14:7b:bc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aac0adcc-167d-400a-a04a-93767356cc9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6e75116e-1034-4d1a-8320-6755ee57c51f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:44:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.730 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6e75116e-1034-4d1a-8320-6755ee57c51f in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c bound to our chassis
Oct 11 04:44:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.733 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 04:44:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.734 162815 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmps_0luhbi/privsep.sock']
Oct 11 04:44:47 np0005481065 systemd-udevd[276786]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:44:47 np0005481065 NetworkManager[44960]: <info>  [1760172287.7836] device (tap6e75116e-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:44:47 np0005481065 NetworkManager[44960]: <info>  [1760172287.7855] device (tap6e75116e-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:44:47 np0005481065 systemd-machined[215705]: New machine qemu-2-instance-00000002.
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.816 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0y8helu4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:44:47 np0005481065 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 11 04:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:44:47Z|00029|binding|INFO|Setting lport 6e75116e-1034-4d1a-8320-6755ee57c51f ovn-installed in OVS
Oct 11 04:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:44:47Z|00030|binding|INFO|Setting lport 6e75116e-1034-4d1a-8320-6755ee57c51f up in Southbound
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.859 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.866 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:44:47 np0005481065 nova_compute[260935]: 2025-10-11 08:44:47.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.043 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.044 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deleting local config drive /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config because it was imported into RBD.
Oct 11 04:44:48 np0005481065 systemd-machined[215705]: New machine qemu-3-instance-00000003.
Oct 11 04:44:48 np0005481065 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 11 04:44:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.329 2 DEBUG nova.compute.manager [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.329 2 DEBUG oslo_concurrency.lockutils [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.331 2 DEBUG oslo_concurrency.lockutils [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.332 2 DEBUG oslo_concurrency.lockutils [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:44:48 np0005481065 nova_compute[260935]: 2025-10-11 08:44:48.332 2 DEBUG nova.compute.manager [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Processing event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.393 162815 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.394 162815 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps_0luhbi/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.274 276945 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.283 276945 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.288 276945 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.288 276945 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276945
Oct 11 04:44:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.400 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e30130fa-b418-4339-a37c-064b80c19859]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:44:48 np0005481065 podman[277067]: 2025-10-11 08:44:48.633423771 +0000 UTC m=+0.086714436 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:44:48 np0005481065 podman[277067]: 2025-10-11 08:44:48.776550896 +0000 UTC m=+0.229841511 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.011 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.0109758, aac0adcc-167d-400a-a04a-93767356cc9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.011 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Started (Lifecycle Event)
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.014 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.021 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.028 2 INFO nova.virt.libvirt.driver [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance spawned successfully.
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.028 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.047 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.051 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.051 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.051 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.052 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.052 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.052 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.078 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.078 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.0117393, aac0adcc-167d-400a-a04a-93767356cc9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.078 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Paused (Lifecycle Event)
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.113 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.114 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.117 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.125 2 INFO nova.virt.libvirt.driver [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance spawned successfully.
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.125 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.129 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.0184076, aac0adcc-167d-400a-a04a-93767356cc9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.130 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Resumed (Lifecycle Event)
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.137 2 INFO nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 9.75 seconds to spawn the instance on the hypervisor.
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.137 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.154 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.163 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.164 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.165 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.166 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.167 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.168 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.174 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:44:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:49.219 276945 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:44:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:49.219 276945 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:44:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:49.219 276945 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.235 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.1105483, 10e2549e-21d4-44fb-acbf-9104ec32970f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.235 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] VM Resumed (Lifecycle Event)
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.240 2 INFO nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 11.02 seconds to build instance.
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.245 2 INFO nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 3.72 seconds to spawn the instance on the hypervisor.
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.246 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.261 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.266 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.269 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.290 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.290 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.1119533, 10e2549e-21d4-44fb-acbf-9104ec32970f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.290 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.311 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.313 2 INFO nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 4.93 seconds to build instance.#033[00m
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.319 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:44:49 np0005481065 nova_compute[260935]: 2025-10-11 08:44:49.349 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 04:44:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:44:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:44:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.010 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f86750aa-fa02-4c0b-bf47-8b56cb8b8fad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.012 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bd537c2-e1 in ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.015 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bd537c2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.015 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd9973f-e5f1-4637-bc6f-c4d6f2fef1ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.020 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f1aafa-9ff8-4366-89f3-e1c154ef5f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.057 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ede9cd7e-d561-421b-a094-45252a1f564c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.094 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f59d4b7-8079-41df-8183-3d15ec76755a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.096 162815 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp_s4rubi2/privsep.sock']#033[00m
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:50 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b4048180-8117-4b94-be9a-0f6c0afa0235 does not exist
Oct 11 04:44:50 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4feb4d42-bc20-4178-818b-5aa4ab9d0c4e does not exist
Oct 11 04:44:50 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 524b8e4d-02eb-4a1f-b84b-ca16bce93ab9 does not exist
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:44:50 np0005481065 nova_compute[260935]: 2025-10-11 08:44:50.770 2 DEBUG nova.compute.manager [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:44:50 np0005481065 nova_compute[260935]: 2025-10-11 08:44:50.771 2 DEBUG oslo_concurrency.lockutils [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:50 np0005481065 nova_compute[260935]: 2025-10-11 08:44:50.771 2 DEBUG oslo_concurrency.lockutils [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:50 np0005481065 nova_compute[260935]: 2025-10-11 08:44:50.771 2 DEBUG oslo_concurrency.lockutils [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:50 np0005481065 nova_compute[260935]: 2025-10-11 08:44:50.773 2 DEBUG nova.compute.manager [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] No waiting events found dispatching network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:44:50 np0005481065 nova_compute[260935]: 2025-10-11 08:44:50.774 2 WARNING nova.compute.manager [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received unexpected event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f for instance with vm_state active and task_state None.#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.855 162815 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.856 162815 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_s4rubi2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.719 277356 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.728 277356 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.733 277356 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.734 277356 INFO oslo.privsep.daemon [-] privsep daemon running as pid 277356#033[00m
Oct 11 04:44:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.860 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1b9133-2538-4911-ac0a-45cef64de63e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:51 np0005481065 nova_compute[260935]: 2025-10-11 08:44:51.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:51.433 277356 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:51.433 277356 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:51.433 277356 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:51 np0005481065 nova_compute[260935]: 2025-10-11 08:44:51.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.527101941 +0000 UTC m=+0.087555439 container create 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:44:51 np0005481065 systemd[1]: Started libpod-conmon-3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629.scope.
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.484792989 +0000 UTC m=+0.045246447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.644791659 +0000 UTC m=+0.205245197 container init 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.657432586 +0000 UTC m=+0.217886074 container start 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.660985266 +0000 UTC m=+0.221438724 container attach 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:44:51 np0005481065 hungry_hopper[277517]: 167 167
Oct 11 04:44:51 np0005481065 systemd[1]: libpod-3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629.scope: Deactivated successfully.
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.665997737 +0000 UTC m=+0.226451225 container died 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:44:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 04:44:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-158ebbebb2d0e84c39fdeca21bbe72b1fb24657253754b703fde6b210afe7921-merged.mount: Deactivated successfully.
Oct 11 04:44:51 np0005481065 podman[277501]: 2025-10-11 08:44:51.738310806 +0000 UTC m=+0.298764304 container remove 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:44:51 np0005481065 systemd[1]: libpod-conmon-3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629.scope: Deactivated successfully.
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8659] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8668] device (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:44:51 np0005481065 nova_compute[260935]: 2025-10-11 08:44:51.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8682] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8685] device (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8693] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8698] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8701] device (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 04:44:51 np0005481065 NetworkManager[44960]: <info>  [1760172291.8704] device (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 04:44:51 np0005481065 nova_compute[260935]: 2025-10-11 08:44:51.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:51 np0005481065 podman[277542]: 2025-10-11 08:44:51.943572703 +0000 UTC m=+0.044255979 container create b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:51 np0005481065 nova_compute[260935]: 2025-10-11 08:44:51.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:51 np0005481065 systemd[1]: Started libpod-conmon-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope.
Oct 11 04:44:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:52 np0005481065 podman[277542]: 2025-10-11 08:44:51.923667112 +0000 UTC m=+0.024350438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.051 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1a0f6b-3f2d-4510-a89e-0faf276131b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 podman[277542]: 2025-10-11 08:44:52.061147868 +0000 UTC m=+0.161831194 container init b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 04:44:52 np0005481065 NetworkManager[44960]: <info>  [1760172292.0618] manager: (tap3bd537c2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9e5533-de1c-46e8-bdaa-76721edfa3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 podman[277542]: 2025-10-11 08:44:52.069623857 +0000 UTC m=+0.170307173 container start b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:44:52 np0005481065 podman[277542]: 2025-10-11 08:44:52.073604409 +0000 UTC m=+0.174287705 container attach b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.110 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3b56a336-6fb7-452e-b58e-0e0dad2feea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 systemd-udevd[277569]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.114 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff0d4e0-1b4f-40f3-bdd1-d7a5b1792bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 NetworkManager[44960]: <info>  [1760172292.1413] device (tap3bd537c2-e0): carrier: link connected
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.150 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[45ac11be-3cd9-4463-b171-4f2b9c8ad940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56e5ff32-640b-4a74-a6cf-f0c40fc37f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413684, 'reachable_time': 17731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277588, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0edeadc-a7ee-41b4-a829-790692b2a705]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:a0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 413684, 'tstamp': 413684}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277589, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e215fff9-9f44-4856-b244-d90fe26b06c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413684, 'reachable_time': 17731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277590, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.269 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3dd685-9085-426a-a060-f9aa93011b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.345 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44b4fb44-0ef2-491b-90fa-92e092ff2e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.348 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.348 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.349 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bd537c2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:52 np0005481065 kernel: tap3bd537c2-e0: entered promiscuous mode
Oct 11 04:44:52 np0005481065 NetworkManager[44960]: <info>  [1760172292.4063] manager: (tap3bd537c2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 11 04:44:52 np0005481065 nova_compute[260935]: 2025-10-11 08:44:52.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:52 np0005481065 nova_compute[260935]: 2025-10-11 08:44:52.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.410 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bd537c2-e0, col_values=(('external_ids', {'iface-id': '6fcfb560-e2ca-4f85-8b53-5907aa538954'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:52 np0005481065 nova_compute[260935]: 2025-10-11 08:44:52.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:52 np0005481065 ovn_controller[152945]: 2025-10-11T08:44:52Z|00031|binding|INFO|Releasing lport 6fcfb560-e2ca-4f85-8b53-5907aa538954 from this chassis (sb_readonly=0)
Oct 11 04:44:52 np0005481065 nova_compute[260935]: 2025-10-11 08:44:52.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.420 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.421 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25889645-a983-442c-8194-aba02d2d3d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.422 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:44:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.423 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'env', 'PROCESS_TAG=haproxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:44:52 np0005481065 nova_compute[260935]: 2025-10-11 08:44:52.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:52 np0005481065 podman[277621]: 2025-10-11 08:44:52.823729327 +0000 UTC m=+0.042395186 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:44:52 np0005481065 podman[277621]: 2025-10-11 08:44:52.993564255 +0000 UTC m=+0.212230104 container create a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 04:44:53 np0005481065 systemd[1]: Started libpod-conmon-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021.scope.
Oct 11 04:44:53 np0005481065 nova_compute[260935]: 2025-10-11 08:44:53.043 2 DEBUG nova.compute.manager [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:44:53 np0005481065 nova_compute[260935]: 2025-10-11 08:44:53.045 2 DEBUG nova.compute.manager [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing instance network info cache due to event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:44:53 np0005481065 nova_compute[260935]: 2025-10-11 08:44:53.045 2 DEBUG oslo_concurrency.lockutils [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:44:53 np0005481065 nova_compute[260935]: 2025-10-11 08:44:53.047 2 DEBUG oslo_concurrency.lockutils [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:44:53 np0005481065 nova_compute[260935]: 2025-10-11 08:44:53.047 2 DEBUG nova.network.neutron [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:44:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db26376507664bf223ae5a9650b1681bc0d81b11369be060261805baad7082/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:53 np0005481065 podman[277621]: 2025-10-11 08:44:53.105661645 +0000 UTC m=+0.324327534 container init a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:44:53 np0005481065 podman[277621]: 2025-10-11 08:44:53.112901649 +0000 UTC m=+0.331567458 container start a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:44:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:53 np0005481065 podman[277646]: 2025-10-11 08:44:53.136388401 +0000 UTC m=+0.091142461 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:44:53 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : New worker (277679) forked
Oct 11 04:44:53 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : Loading success.
Oct 11 04:44:53 np0005481065 romantic_black[277559]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:44:53 np0005481065 romantic_black[277559]: --> relative data size: 1.0
Oct 11 04:44:53 np0005481065 romantic_black[277559]: --> All data devices are unavailable
Oct 11 04:44:53 np0005481065 systemd[1]: libpod-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope: Deactivated successfully.
Oct 11 04:44:53 np0005481065 systemd[1]: libpod-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope: Consumed 1.100s CPU time.
Oct 11 04:44:53 np0005481065 conmon[277559]: conmon b8a653bb6e633e280d24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope/container/memory.events
Oct 11 04:44:53 np0005481065 podman[277692]: 2025-10-11 08:44:53.311535589 +0000 UTC m=+0.031912451 container died b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:44:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0-merged.mount: Deactivated successfully.
Oct 11 04:44:53 np0005481065 podman[277692]: 2025-10-11 08:44:53.373580638 +0000 UTC m=+0.093957490 container remove b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:53 np0005481065 systemd[1]: libpod-conmon-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope: Deactivated successfully.
Oct 11 04:44:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 11 04:44:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:54.017 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:44:54 np0005481065 nova_compute[260935]: 2025-10-11 08:44:54.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:54.022 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.229649653 +0000 UTC m=+0.071385743 container create 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:44:54 np0005481065 systemd[1]: Started libpod-conmon-108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5.scope.
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.193645558 +0000 UTC m=+0.035381628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.344286555 +0000 UTC m=+0.186022685 container init 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.355362217 +0000 UTC m=+0.197098307 container start 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.360199434 +0000 UTC m=+0.201935534 container attach 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 11 04:44:54 np0005481065 epic_brahmagupta[277862]: 167 167
Oct 11 04:44:54 np0005481065 systemd[1]: libpod-108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5.scope: Deactivated successfully.
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.36608128 +0000 UTC m=+0.207817370 container died 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:44:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2d0f17499bc388afee0f217bd5f66d2a91ee4675d0f33a1426224584f134f707-merged.mount: Deactivated successfully.
Oct 11 04:44:54 np0005481065 podman[277846]: 2025-10-11 08:44:54.426590156 +0000 UTC m=+0.268326236 container remove 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 04:44:54 np0005481065 systemd[1]: libpod-conmon-108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5.scope: Deactivated successfully.
Oct 11 04:44:54 np0005481065 podman[277885]: 2025-10-11 08:44:54.705431207 +0000 UTC m=+0.093828036 container create 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:54 np0005481065 podman[277885]: 2025-10-11 08:44:54.653205345 +0000 UTC m=+0.041602254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:54 np0005481065 systemd[1]: Started libpod-conmon-1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18.scope.
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:44:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:44:54
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', 'images', '.mgr', 'default.rgw.meta', 'backups']
Oct 11 04:44:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:44:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:54 np0005481065 podman[277885]: 2025-10-11 08:44:54.818436673 +0000 UTC m=+0.206833562 container init 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:44:54 np0005481065 podman[277885]: 2025-10-11 08:44:54.832415917 +0000 UTC m=+0.220812786 container start 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:44:54 np0005481065 podman[277885]: 2025-10-11 08:44:54.837036947 +0000 UTC m=+0.225433856 container attach 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:44:54 np0005481065 nova_compute[260935]: 2025-10-11 08:44:54.898 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:54 np0005481065 nova_compute[260935]: 2025-10-11 08:44:54.921 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:54 np0005481065 nova_compute[260935]: 2025-10-11 08:44:54.921 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:54 np0005481065 nova_compute[260935]: 2025-10-11 08:44:54.921 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:54 np0005481065 nova_compute[260935]: 2025-10-11 08:44:54.922 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.169 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.170 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.188 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.267 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.268 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.278 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.279 2 INFO nova.compute.claims [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.432 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]: {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:    "0": [
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:        {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "devices": [
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "/dev/loop3"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            ],
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_name": "ceph_lv0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_size": "21470642176",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "name": "ceph_lv0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "tags": {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cluster_name": "ceph",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.crush_device_class": "",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.encrypted": "0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osd_id": "0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.type": "block",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.vdo": "0"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            },
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "type": "block",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "vg_name": "ceph_vg0"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:        }
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:    ],
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:    "1": [
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:        {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "devices": [
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "/dev/loop4"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            ],
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_name": "ceph_lv1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_size": "21470642176",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "name": "ceph_lv1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "tags": {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cluster_name": "ceph",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.crush_device_class": "",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.encrypted": "0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osd_id": "1",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.type": "block",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.vdo": "0"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            },
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "type": "block",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "vg_name": "ceph_vg1"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:        }
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:    ],
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:    "2": [
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:        {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "devices": [
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "/dev/loop5"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            ],
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_name": "ceph_lv2",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_size": "21470642176",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "name": "ceph_lv2",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "tags": {
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.cluster_name": "ceph",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.crush_device_class": "",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.encrypted": "0",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osd_id": "2",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.type": "block",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:                "ceph.vdo": "0"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            },
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "type": "block",
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:            "vg_name": "ceph_vg2"
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:        }
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]:    ]
Oct 11 04:44:55 np0005481065 crazy_shannon[277902]: }
Oct 11 04:44:55 np0005481065 systemd[1]: libpod-1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18.scope: Deactivated successfully.
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.627 2 DEBUG nova.network.neutron [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updated VIF entry in instance network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.630 2 DEBUG nova.network.neutron [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.656 2 DEBUG oslo_concurrency.lockutils [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:44:55 np0005481065 podman[277931]: 2025-10-11 08:44:55.684429151 +0000 UTC m=+0.040509889 container died 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53-merged.mount: Deactivated successfully.
Oct 11 04:44:55 np0005481065 podman[277931]: 2025-10-11 08:44:55.743865 +0000 UTC m=+0.099945708 container remove 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 04:44:55 np0005481065 systemd[1]: libpod-conmon-1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18.scope: Deactivated successfully.
Oct 11 04:44:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:44:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715674536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:44:55 np0005481065 nova_compute[260935]: 2025-10-11 08:44:55.996 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.006 2 DEBUG nova.compute.provider_tree [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.029 2 DEBUG nova.scheduler.client.report [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.054 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.056 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.114 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.116 2 DEBUG nova.network.neutron [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.135 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.152 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.239 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.244 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.245 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Creating image(s)#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.277 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.308 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.336 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.341 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.453 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.455 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.456 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.456 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.490 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.502 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.609934296 +0000 UTC m=+0.071351090 container create 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.58100097 +0000 UTC m=+0.042417854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:56 np0005481065 systemd[1]: Started libpod-conmon-06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d.scope.
Oct 11 04:44:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.775504829 +0000 UTC m=+0.236921673 container init 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.785047201 +0000 UTC m=+0.246464005 container start 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.788887701 +0000 UTC m=+0.250304575 container attach 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:44:56 np0005481065 stoic_blackwell[278199]: 167 167
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.793537384 +0000 UTC m=+0.254954218 container died 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:44:56 np0005481065 systemd[1]: libpod-06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d.scope: Deactivated successfully.
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.805 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-551d54630e04c2c011375761b25602f023d030227ec74945e89d8f29e3933daa-merged.mount: Deactivated successfully.
Oct 11 04:44:56 np0005481065 podman[278163]: 2025-10-11 08:44:56.845387896 +0000 UTC m=+0.306804700 container remove 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 04:44:56 np0005481065 systemd[1]: libpod-conmon-06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d.scope: Deactivated successfully.
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.894 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] resizing rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.975 2 DEBUG nova.network.neutron [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:44:56 np0005481065 nova_compute[260935]: 2025-10-11 08:44:56.976 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.058 2 DEBUG nova.objects.instance [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'migration_context' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.079 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.080 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Ensure instance console log exists: /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.080 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.081 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.082 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.084 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:44:57 np0005481065 podman[278276]: 2025-10-11 08:44:57.085430608 +0000 UTC m=+0.086845714 container create da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.091 2 WARNING nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.098 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.099 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.103 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.104 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.104 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.105 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.106 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.106 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.107 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.107 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.107 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.108 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.108 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.108 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.109 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.109 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.115 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:57 np0005481065 podman[278276]: 2025-10-11 08:44:57.043095058 +0000 UTC m=+0.044510174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:44:57 np0005481065 systemd[1]: Started libpod-conmon-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope.
Oct 11 04:44:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:44:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:44:57 np0005481065 podman[278276]: 2025-10-11 08:44:57.199359675 +0000 UTC m=+0.200774791 container init da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 04:44:57 np0005481065 podman[278276]: 2025-10-11 08:44:57.21460572 +0000 UTC m=+0.216020806 container start da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:57 np0005481065 podman[278276]: 2025-10-11 08:44:57.21844562 +0000 UTC m=+0.219860706 container attach da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:44:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887322243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.662 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.701 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 11 04:44:57 np0005481065 nova_compute[260935]: 2025-10-11 08:44:57.710 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:44:58.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952667324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.147 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.149 2 DEBUG nova.objects.instance [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'pci_devices' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.170 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <uuid>eda894cf-d32d-47f0-adc0-7e2f7fffb442</uuid>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <name>instance-00000004</name>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:name>tempest-LiveMigrationNegativeTest-server-630761309</nova:name>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:44:57</nova:creationTime>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:user uuid="b1cf6609831e4cd3b6261e999c3c068e">tempest-LiveMigrationNegativeTest-1774903191-project-member</nova:user>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <nova:project uuid="507d5ff420164426beedc0ce7977570a">tempest-LiveMigrationNegativeTest-1774903191</nova:project>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <entry name="serial">eda894cf-d32d-47f0-adc0-7e2f7fffb442</entry>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <entry name="uuid">eda894cf-d32d-47f0-adc0-7e2f7fffb442</entry>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/console.log" append="off"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:44:58 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:44:58 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:44:58 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:44:58 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]: {
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "osd_id": 2,
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "type": "bluestore"
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:    },
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "osd_id": 0,
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "type": "bluestore"
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:    },
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "osd_id": 1,
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:        "type": "bluestore"
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]:    }
Oct 11 04:44:58 np0005481065 unruffled_bell[278311]: }
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.256 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.256 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.257 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Using config drive#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.281 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:58 np0005481065 systemd[1]: libpod-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope: Deactivated successfully.
Oct 11 04:44:58 np0005481065 systemd[1]: libpod-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope: Consumed 1.053s CPU time.
Oct 11 04:44:58 np0005481065 podman[278425]: 2025-10-11 08:44:58.338404744 +0000 UTC m=+0.031357447 container died da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:44:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6-merged.mount: Deactivated successfully.
Oct 11 04:44:58 np0005481065 podman[278425]: 2025-10-11 08:44:58.40123487 +0000 UTC m=+0.094187553 container remove da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 04:44:58 np0005481065 systemd[1]: libpod-conmon-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope: Deactivated successfully.
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:44:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:58 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5f45dae6-2b5a-4ccc-ad8e-21421610eb80 does not exist
Oct 11 04:44:58 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e2a33329-23fd-435f-9d0f-dc630e0a6735 does not exist
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.543 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Creating config drive at /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.552 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12aqz3uw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.702 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12aqz3uw" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.748 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.754 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.951 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:44:58 np0005481065 nova_compute[260935]: 2025-10-11 08:44:58.953 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deleting local config drive /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config because it was imported into RBD.#033[00m
Oct 11 04:44:59 np0005481065 systemd-machined[215705]: New machine qemu-4-instance-00000004.
Oct 11 04:44:59 np0005481065 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 11 04:44:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:44:59 np0005481065 podman[278536]: 2025-10-11 08:44:59.182279176 +0000 UTC m=+0.127737072 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:44:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.777 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.778 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.779 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.781 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.986 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172299.985939, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.987 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.990 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.991 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:44:59 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.998 2 INFO nova.virt.libvirt.driver [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance spawned successfully.#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:44:59.999 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.042 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.046 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.092 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.093 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.094 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.094 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.095 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.095 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.100 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.101 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172299.9899127, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.101 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Started (Lifecycle Event)#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.150 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.155 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.188 2 INFO nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 3.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.189 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.201 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155522291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.249 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.268 2 INFO nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 5.03 seconds to build instance.#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.305 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:00 np0005481065 podman[278630]: 2025-10-11 08:45:00.409050182 +0000 UTC m=+0.090259291 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.441 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.442 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.446 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.446 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.450 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.451 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.634 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.636 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4474MB free_disk=59.92570877075195GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.637 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.638 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.767 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance aac0adcc-167d-400a-a04a-93767356cc9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 10e2549e-21d4-44fb-acbf-9104ec32970f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance eda894cf-d32d-47f0-adc0-7e2f7fffb442 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:45:00 np0005481065 nova_compute[260935]: 2025-10-11 08:45:00.871 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:01 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 04:45:01 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013955541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.437 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.454 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.581 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.591 2 DEBUG nova.objects.instance [None req-1e22ab8e-0043-46cc-8854-e162ddc10f6f 344f85d2138c407e8c0c5209ec354af7 e7682c9cda004f9985bc3bfe7abf6aab - - default default] Lazy-loading 'pci_devices' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.857 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172301.8571935, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.858 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.869 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.870 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:01Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:7b:bc 10.100.0.9
Oct 11 04:45:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:01Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:7b:bc 10.100.0.9
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.897 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:01 np0005481065 nova_compute[260935]: 2025-10-11 08:45:01.991 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 11 04:45:02 np0005481065 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 11 04:45:02 np0005481065 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2.704s CPU time.
Oct 11 04:45:02 np0005481065 systemd-machined[215705]: Machine qemu-4-instance-00000004 terminated.
Oct 11 04:45:02 np0005481065 nova_compute[260935]: 2025-10-11 08:45:02.374 2 DEBUG nova.compute.manager [None req-1e22ab8e-0043-46cc-8854-e162ddc10f6f 344f85d2138c407e8c0c5209ec354af7 e7682c9cda004f9985bc3bfe7abf6aab - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:02 np0005481065 nova_compute[260935]: 2025-10-11 08:45:02.871 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:45:02 np0005481065 nova_compute[260935]: 2025-10-11 08:45:02.871 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:45:02 np0005481065 nova_compute[260935]: 2025-10-11 08:45:02.871 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:45:03 np0005481065 nova_compute[260935]: 2025-10-11 08:45:03.067 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:45:03 np0005481065 nova_compute[260935]: 2025-10-11 08:45:03.067 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:45:03 np0005481065 nova_compute[260935]: 2025-10-11 08:45:03.068 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 04:45:03 np0005481065 nova_compute[260935]: 2025-10-11 08:45:03.068 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.137713) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303137758, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1256, "num_deletes": 509, "total_data_size": 1386610, "memory_usage": 1416960, "flush_reason": "Manual Compaction"}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303144772, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 956397, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23035, "largest_seqno": 24290, "table_properties": {"data_size": 951587, "index_size": 1758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14798, "raw_average_key_size": 18, "raw_value_size": 939298, "raw_average_value_size": 1205, "num_data_blocks": 79, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172220, "oldest_key_time": 1760172220, "file_creation_time": 1760172303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7115 microseconds, and 3643 cpu microseconds.
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.144826) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 956397 bytes OK
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.144848) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.146255) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.146272) EVENT_LOG_v1 {"time_micros": 1760172303146267, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.146296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1379769, prev total WAL file size 1379769, number of live WAL files 2.
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.147133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(933KB)], [53(8904KB)]
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303147208, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10074867, "oldest_snapshot_seqno": -1}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4516 keys, 7060096 bytes, temperature: kUnknown
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303200861, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7060096, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7030134, "index_size": 17564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 113121, "raw_average_key_size": 25, "raw_value_size": 6948571, "raw_average_value_size": 1538, "num_data_blocks": 730, "num_entries": 4516, "num_filter_entries": 4516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.201292) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7060096 bytes
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.202998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(17.9) write-amplify(7.4) OK, records in: 5521, records dropped: 1005 output_compression: NoCompression
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.203026) EVENT_LOG_v1 {"time_micros": 1760172303203013, "job": 28, "event": "compaction_finished", "compaction_time_micros": 53802, "compaction_time_cpu_micros": 37653, "output_level": 6, "num_output_files": 1, "total_output_size": 7060096, "num_input_records": 5521, "num_output_records": 4516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303203724, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303207289, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.147038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:03 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:45:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 233 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 353 op/s
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018461018387380347 of space, bias 1.0, pg target 0.5538305516214104 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:45:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.296 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.327 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.328 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.328 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.329 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.329 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.330 2 INFO nova.compute.manager [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Terminating instance
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.331 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "refresh_cache-eda894cf-d32d-47f0-adc0-7e2f7fffb442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.331 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquired lock "refresh_cache-eda894cf-d32d-47f0-adc0-7e2f7fffb442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.331 2 DEBUG nova.network.neutron [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.348 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.348 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.348 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 233 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.9 MiB/s wr, 206 op/s
Oct 11 04:45:05 np0005481065 nova_compute[260935]: 2025-10-11 08:45:05.964 2 DEBUG nova.network.neutron [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.749 2 DEBUG nova.network.neutron [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.818 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Releasing lock "refresh_cache-eda894cf-d32d-47f0-adc0-7e2f7fffb442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.819 2 DEBUG nova.compute.manager [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.827 2 INFO nova.virt.libvirt.driver [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance destroyed successfully.
Oct 11 04:45:06 np0005481065 nova_compute[260935]: 2025-10-11 08:45:06.828 2 DEBUG nova.objects.instance [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'resources' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:45:07 np0005481065 nova_compute[260935]: 2025-10-11 08:45:07.387 2 INFO nova.virt.libvirt.driver [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deleting instance files /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442_del
Oct 11 04:45:07 np0005481065 nova_compute[260935]: 2025-10-11 08:45:07.389 2 INFO nova.virt.libvirt.driver [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deletion of /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442_del complete
Oct 11 04:45:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 251 op/s
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.108 2 INFO nova.compute.manager [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.109 2 DEBUG oslo.service.loopingcall [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.109 2 DEBUG nova.compute.manager [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.110 2 DEBUG nova.network.neutron [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:45:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.784 2 DEBUG nova.network.neutron [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.811 2 DEBUG nova.network.neutron [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.876 2 INFO nova.compute.manager [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 0.77 seconds to deallocate network for instance.
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.998 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:08 np0005481065 nova_compute[260935]: 2025-10-11 08:45:08.999 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.038 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.039 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.039 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.040 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.040 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.042 2 INFO nova.compute.manager [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Terminating instance
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.044 2 DEBUG nova.compute.manager [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.098 2 DEBUG oslo_concurrency.processutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:09 np0005481065 kernel: tap6e75116e-10 (unregistering): left promiscuous mode
Oct 11 04:45:09 np0005481065 NetworkManager[44960]: <info>  [1760172309.1104] device (tap6e75116e-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:45:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:09Z|00032|binding|INFO|Releasing lport 6e75116e-1034-4d1a-8320-6755ee57c51f from this chassis (sb_readonly=0)
Oct 11 04:45:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:09Z|00033|binding|INFO|Setting lport 6e75116e-1034-4d1a-8320-6755ee57c51f down in Southbound
Oct 11 04:45:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:09Z|00034|binding|INFO|Removing iface tap6e75116e-10 ovn-installed in OVS
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.160 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:7b:bc 10.100.0.9'], port_security=['fa:16:3e:14:7b:bc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aac0adcc-167d-400a-a04a-93767356cc9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6e75116e-1034-4d1a-8320-6755ee57c51f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.162 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6e75116e-1034-4d1a-8320-6755ee57c51f in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c unbound from our chassis
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.163 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.166 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e174ce2-6db6-4c1e-a370-3727af660144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.167 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace which is not needed anymore
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:09 np0005481065 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 11 04:45:09 np0005481065 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 14.392s CPU time.
Oct 11 04:45:09 np0005481065 systemd-machined[215705]: Machine qemu-2-instance-00000002 terminated.
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.302 2 INFO nova.virt.libvirt.driver [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance destroyed successfully.#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.303 2 DEBUG nova.objects.instance [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'resources' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.334 2 DEBUG nova.virt.libvirt.vif [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-189754419',id=2,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:44:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-bh0dixv5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:44:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=aac0adcc-167d-400a-a04a-93767356cc9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.335 2 DEBUG nova.network.os_vif_util [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.337 2 DEBUG nova.network.os_vif_util [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.338 2 DEBUG os_vif [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e75116e-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.350 2 INFO os_vif [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10')#033[00m
Oct 11 04:45:09 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : haproxy version is 2.8.14-c23fe91
Oct 11 04:45:09 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : path to executable is /usr/sbin/haproxy
Oct 11 04:45:09 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [WARNING]  (277673) : Exiting Master process...
Oct 11 04:45:09 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [ALERT]    (277673) : Current worker (277679) exited with code 143 (Terminated)
Oct 11 04:45:09 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [WARNING]  (277673) : All workers exited. Exiting... (0)
Oct 11 04:45:09 np0005481065 systemd[1]: libpod-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021.scope: Deactivated successfully.
Oct 11 04:45:09 np0005481065 podman[278753]: 2025-10-11 08:45:09.396159077 +0000 UTC m=+0.081022727 container died a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:45:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021-userdata-shm.mount: Deactivated successfully.
Oct 11 04:45:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f1db26376507664bf223ae5a9650b1681bc0d81b11369be060261805baad7082-merged.mount: Deactivated successfully.
Oct 11 04:45:09 np0005481065 podman[278753]: 2025-10-11 08:45:09.450545701 +0000 UTC m=+0.135409321 container cleanup a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:09 np0005481065 systemd[1]: libpod-conmon-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021.scope: Deactivated successfully.
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.506 2 DEBUG nova.compute.manager [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-unplugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.507 2 DEBUG oslo_concurrency.lockutils [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.507 2 DEBUG oslo_concurrency.lockutils [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.508 2 DEBUG oslo_concurrency.lockutils [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.508 2 DEBUG nova.compute.manager [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] No waiting events found dispatching network-vif-unplugged-6e75116e-1034-4d1a-8320-6755ee57c51f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.508 2 DEBUG nova.compute.manager [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-unplugged-6e75116e-1034-4d1a-8320-6755ee57c51f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:45:09 np0005481065 podman[278803]: 2025-10-11 08:45:09.54004529 +0000 UTC m=+0.060254874 container remove a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.551 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b970dcc-214c-4c21-86c2-a9f3dd5f8326]: (4, ('Sat Oct 11 08:45:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021)\na93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021\nSat Oct 11 08:45:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021)\na93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.553 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[681e641f-5b45-4020-a22c-5fa25fbc560d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.554 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:09 np0005481065 kernel: tap3bd537c2-e0: left promiscuous mode
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.590 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae9d6bb-8a27-4608-a18d-2167cea1d562]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137421471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e95509e6-4959-4e38-b072-966f56cfd040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.618 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6007fe18-9184-46a9-a506-76bfb5504a19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.637 2 DEBUG oslo_concurrency.processutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.643 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3c61df-f576-4810-b0e3-6d998614eba6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413674, 'reachable_time': 18027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278820, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.643 2 DEBUG nova.compute.provider_tree [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.659 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:45:09 np0005481065 systemd[1]: run-netns-ovnmeta\x2d3bd537c2\x2de6ec\x2d4d00\x2dac83\x2dfbf5d86f963c.mount: Deactivated successfully.
Oct 11 04:45:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.661 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[98a7bbe5-2353-4aaa-b853-c8af1265841a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.671 2 DEBUG nova.scheduler.client.report [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:45:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.757 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.814 2 INFO nova.scheduler.client.report [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Deleted allocations for instance eda894cf-d32d-47f0-adc0-7e2f7fffb442#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.842 2 INFO nova.virt.libvirt.driver [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deleting instance files /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c_del#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.843 2 INFO nova.virt.libvirt.driver [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deletion of /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c_del complete#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.960 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.974 2 INFO nova.compute.manager [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.975 2 DEBUG oslo.service.loopingcall [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.975 2 DEBUG nova.compute.manager [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:45:09 np0005481065 nova_compute[260935]: 2025-10-11 08:45:09.976 2 DEBUG nova.network.neutron [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.147 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "10e2549e-21d4-44fb-acbf-9104ec32970f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.148 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.148 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "10e2549e-21d4-44fb-acbf-9104ec32970f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.149 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.150 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.152 2 INFO nova.compute.manager [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Terminating instance
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.153 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "refresh_cache-10e2549e-21d4-44fb-acbf-9104ec32970f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.153 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquired lock "refresh_cache-10e2549e-21d4-44fb-acbf-9104ec32970f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.154 2 DEBUG nova.network.neutron [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.451 2 DEBUG nova.network.neutron [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.481 2 DEBUG nova.network.neutron [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.662 2 INFO nova.compute.manager [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 1.69 seconds to deallocate network for instance.
Oct 11 04:45:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.786 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.787 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.853 2 DEBUG oslo_concurrency.processutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.960 2 DEBUG nova.network.neutron [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.987 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Releasing lock "refresh_cache-10e2549e-21d4-44fb-acbf-9104ec32970f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:45:11 np0005481065 nova_compute[260935]: 2025-10-11 08:45:11.989 2 DEBUG nova.compute.manager [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.084 2 DEBUG nova.compute.manager [req-8c91d96d-1cc6-445b-b65a-ba8c62dee7d5 req-28ee356f-e437-4e2e-8d74-2cbd9f1a9b9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-deleted-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:45:12 np0005481065 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 11 04:45:12 np0005481065 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 13.950s CPU time.
Oct 11 04:45:12 np0005481065 systemd-machined[215705]: Machine qemu-3-instance-00000003 terminated.
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.153 2 DEBUG nova.compute.manager [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.153 2 DEBUG oslo_concurrency.lockutils [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.154 2 DEBUG oslo_concurrency.lockutils [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.155 2 DEBUG oslo_concurrency.lockutils [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.155 2 DEBUG nova.compute.manager [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] No waiting events found dispatching network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.156 2 WARNING nova.compute.manager [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received unexpected event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f for instance with vm_state deleted and task_state None.
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.221 2 INFO nova.virt.libvirt.driver [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance destroyed successfully.
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.222 2 DEBUG nova.objects.instance [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'resources' on Instance uuid 10e2549e-21d4-44fb-acbf-9104ec32970f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:45:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138488383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.377 2 DEBUG oslo_concurrency.processutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.385 2 DEBUG nova.compute.provider_tree [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.422 2 DEBUG nova.scheduler.client.report [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.563 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.760 2 INFO nova.virt.libvirt.driver [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deleting instance files /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f_del
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.761 2 INFO nova.virt.libvirt.driver [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deletion of /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f_del complete
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.916 2 INFO nova.compute.manager [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.917 2 DEBUG oslo.service.loopingcall [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.918 2 DEBUG nova.compute.manager [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:45:12 np0005481065 nova_compute[260935]: 2025-10-11 08:45:12.918 2 DEBUG nova.network.neutron [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:45:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.388 2 INFO nova.scheduler.client.report [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Deleted allocations for instance aac0adcc-167d-400a-a04a-93767356cc9c
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.400 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.400 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.441 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.450 2 DEBUG nova.network.neutron [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.595 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.639 2 DEBUG nova.network.neutron [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.656 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.656 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.665 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.665 2 INFO nova.compute.claims [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.703 2 INFO nova.compute.manager [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 0.79 seconds to deallocate network for instance.
Oct 11 04:45:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 101 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 266 op/s
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.805 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:13 np0005481065 nova_compute[260935]: 2025-10-11 08:45:13.873 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588867446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.356 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.364 2 DEBUG nova.compute.provider_tree [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.399 2 DEBUG nova.scheduler.client.report [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.464 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.466 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.470 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.570 2 DEBUG oslo_concurrency.processutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.627 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.627 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.710 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.880 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:45:14 np0005481065 nova_compute[260935]: 2025-10-11 08:45:14.891 2 DEBUG nova.policy [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '113798b24d1e4a9e91db94214d254ea9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e6573eaf6684f1c99a553fd46667a67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:45:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3775269212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.057 2 DEBUG oslo_concurrency.processutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.065 2 DEBUG nova.compute.provider_tree [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.101 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.103 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.104 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Creating image(s)
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.134 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.171 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:15.176 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:15.177 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:15.177 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.209 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.214 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.252 2 DEBUG nova.scheduler.client.report [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.281 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.304 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
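The `qemu-img info` call above is not run directly: it is wrapped in `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`, which caps the child's address space (1 GiB) and CPU time (30 s) so a hostile image can't exhaust the compute host while being inspected. A minimal sketch of that wrapper idea using only the stdlib (the real `oslo_concurrency.prlimit` module does this via a helper process; the command below is a stand-in, not `qemu-img`):

```python
import resource
import subprocess
import sys

def run_limited(cmd, as_bytes=1 << 30, cpu_seconds=30):
    """Sketch of the prlimit wrapper seen in the log: apply RLIMIT_AS and
    RLIMIT_CPU in the child before exec, so the limited process cannot
    consume unbounded memory or CPU time. POSIX-only (preexec_fn)."""
    def limit():
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, preexec_fn=limit, capture_output=True, text=True)

# Placeholder command standing in for `qemu-img info --output=json ...`:
out = run_limited([sys.executable, "-c", "print('ok')"])
print(out.stdout.strip())
```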
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.305 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.306 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.307 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.344 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.348 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f7475a32-2490-4f8c-a700-a123973da072_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.365 2 INFO nova.scheduler.client.report [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Deleted allocations for instance 10e2549e-21d4-44fb-acbf-9104ec32970f#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.479 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.614 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f7475a32-2490-4f8c-a700-a123973da072_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.697 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] resizing rbd image f7475a32-2490-4f8c-a700-a123973da072_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
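The sequence from the "does not exist" probes through the `rbd import` and the resize to 1073741824 bytes is Nova's RBD image-backend flow: if the instance disk is absent in the `vms` pool, import the locally cached base file, then grow it to the flavor's root size. A control-flow sketch with a placeholder in-memory "pool" (no real `rbd`/librados calls; sizes are taken from the log: the cirros base is 21430272 bytes, the flavor root disk 1 GiB):

```python
# Placeholder RBD pool: image name -> size in bytes. Real Nova goes through
# nova.storage.rbd_utils.RBDDriver; this only mirrors the log's control flow.
pool = {}

def exists(name):
    return name in pool                    # "rbd image ... does not exist"

def import_image(base_path, name, base_size):
    # stands in for: rbd import --pool vms <base_path> <name> --image-format=2
    pool[name] = base_size

def resize(name, size):
    # stands in for the "resizing rbd image ... to 1073741824" step
    pool[name] = max(pool[name], size)

def create_root_disk(base_path, name, base_size, flavor_root_bytes):
    if not exists(name):
        import_image(base_path, name, base_size)
    if flavor_root_bytes > pool[name]:
        resize(name, flavor_root_bytes)
    return pool[name]

size = create_root_disk(
    "/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1",
    "f7475a32-2490-4f8c-a700-a123973da072_disk",
    base_size=21430272,
    flavor_root_bytes=1073741824,
)
print(size)
```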
Oct 11 04:45:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 101 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 154 KiB/s wr, 86 op/s
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.818 2 DEBUG nova.objects.instance [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'migration_context' on Instance uuid f7475a32-2490-4f8c-a700-a123973da072 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.884 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.916 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.921 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.922 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.922 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.963 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:15 np0005481065 nova_compute[260935]: 2025-10-11 08:45:15.964 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:16 np0005481065 nova_compute[260935]: 2025-10-11 08:45:16.025 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:16 np0005481065 nova_compute[260935]: 2025-10-11 08:45:16.026 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
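The ephemeral disk is built locally before being imported into Ceph: `qemu-img create -f raw ... 1G` followed by `mkfs -t vfat -n ephemeral0 ...`. For the raw format, `qemu-img create` amounts to a sparse file truncated to the requested virtual size; a sketch (the mkfs step is omitted since it needs `mkfs.vfat` and block-level access, and the filename below is a stand-in, not the real `_base` cache entry):

```python
import os
import tempfile

GiB = 1 << 30

def create_raw_image(path, size_bytes):
    """Roughly what `qemu-img create -f raw <path> 1G` produces: a sparse
    file of the requested size (raw images have no format header)."""
    with open(path, "wb") as f:
        f.truncate(size_bytes)   # sparse: extends the file without writing data

path = os.path.join(tempfile.mkdtemp(), "ephemeral_1_sketch")
create_raw_image(path, 1 * GiB)
print(os.path.getsize(path))
```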
Oct 11 04:45:16 np0005481065 nova_compute[260935]: 2025-10-11 08:45:16.058 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:16 np0005481065 nova_compute[260935]: 2025-10-11 08:45:16.063 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:16 np0005481065 nova_compute[260935]: 2025-10-11 08:45:16.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:16 np0005481065 nova_compute[260935]: 2025-10-11 08:45:16.269 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Successfully created port: 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:45:16 np0005481065 podman[279153]: 2025-10-11 08:45:16.798753428 +0000 UTC m=+0.096487020 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.010 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.947s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.127 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.128 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Ensure instance console log exists: /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.129 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.129 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.130 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.376 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172302.3736513, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.377 2 INFO nova.compute.manager [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.401 2 DEBUG nova.compute.manager [None req-827310ad-b4c6-4c57-8b48-bb209e1e2250 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.917 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Successfully updated port: 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.935 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.936 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquired lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.936 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.996 2 DEBUG nova.compute.manager [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.996 2 DEBUG nova.compute.manager [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing instance network info cache due to event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:45:17 np0005481065 nova_compute[260935]: 2025-10-11 08:45:17.997 2 DEBUG oslo_concurrency.lockutils [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:45:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:18 np0005481065 nova_compute[260935]: 2025-10-11 08:45:18.219 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.189 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.266 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Releasing lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.267 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance network_info: |[{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
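The `network_info` blob logged above is plain JSON once the surrounding log framing is stripped. A sketch extracting the fields the driver later uses when building the guest XML (MAC address, tap device name, fixed IPs, MTU), over an abbreviated copy of the entry from the log:

```python
import json

# Abbreviated copy of the network_info entry for port 865f9eb5-...
network_info = json.loads("""[{
  "id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3",
  "address": "fa:16:3e:b3:e2:77",
  "devname": "tap865f9eb5-5e",
  "network": {
    "id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c",
    "bridge": "br-int",
    "subnets": [{
      "cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4}]
    }],
    "meta": {"mtu": 1442, "tunneled": true}
  }
}]""")

vif = network_info[0]
fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
print(vif["address"], vif["devname"], fixed_ips, vif["network"]["meta"]["mtu"])
```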
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.268 2 DEBUG oslo_concurrency.lockutils [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.269 2 DEBUG nova.network.neutron [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.275 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start _get_guest_xml network_info=[{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [{'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.282 2 WARNING nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.287 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.288 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.294 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.294 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.295 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.295 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:44:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='2057106209',id=24,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-455650691',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.296 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.296 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.297 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.297 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.297 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.298 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.298 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.298 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.299 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.299 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.303 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 11 04:45:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.794 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:19 np0005481065 nova_compute[260935]: 2025-10-11 08:45:19.795 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174672930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.278 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.317 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.323 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526477960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.842 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.843 2 DEBUG nova.virt.libvirt.vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(24),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1693169676',id=5,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=24,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-kd8cf275',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=f7475a32-2490-4f8c-a700-a123973da072,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.843 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.844 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:45:20 np0005481065 nova_compute[260935]: 2025-10-11 08:45:20.845 2 DEBUG nova.objects.instance [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7475a32-2490-4f8c-a700-a123973da072 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.048 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <uuid>f7475a32-2490-4f8c-a700-a123973da072</uuid>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <name>instance-00000005</name>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1693169676</nova:name>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:45:19</nova:creationTime>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-455650691">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:ephemeral>1</nova:ephemeral>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:user uuid="113798b24d1e4a9e91db94214d254ea9">tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member</nova:user>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:project uuid="4e6573eaf6684f1c99a553fd46667a67">tempest-ServersWithSpecificFlavorTestJSON-1781130606</nova:project>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <nova:port uuid="865f9eb5-5e9d-40e5-abb3-123fcf5d05c3">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <entry name="serial">f7475a32-2490-4f8c-a700-a123973da072</entry>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <entry name="uuid">f7475a32-2490-4f8c-a700-a123973da072</entry>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f7475a32-2490-4f8c-a700-a123973da072_disk">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f7475a32-2490-4f8c-a700-a123973da072_disk.eph0">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <target dev="vdb" bus="virtio"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f7475a32-2490-4f8c-a700-a123973da072_disk.config">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:b3:e2:77"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <target dev="tap865f9eb5-5e"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/console.log" append="off"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:45:21 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:45:21 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:45:21 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:45:21 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.049 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Preparing to wait for external event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.049 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.050 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.050 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.051 2 DEBUG nova.virt.libvirt.vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(24),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1693169676',id=5,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=24,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-kd8cf275',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=f7475a32-2490-4f8c-a700-a123973da072,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.051 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.052 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.052 2 DEBUG os_vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap865f9eb5-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap865f9eb5-5e, col_values=(('external_ids', {'iface-id': '865f9eb5-5e9d-40e5-abb3-123fcf5d05c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:e2:77', 'vm-uuid': 'f7475a32-2490-4f8c-a700-a123973da072'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:21 np0005481065 NetworkManager[44960]: <info>  [1760172321.0609] manager: (tap865f9eb5-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.068 2 INFO os_vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e')#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.192 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.193 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.193 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.194 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No VIF found with MAC fa:16:3e:b3:e2:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.194 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Using config drive#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.213 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.565 2 DEBUG nova.network.neutron [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updated VIF entry in instance network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.566 2 DEBUG nova.network.neutron [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.595 2 DEBUG oslo_concurrency.lockutils [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:45:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.812 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Creating config drive at /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.823 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dnsv9bj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:21 np0005481065 nova_compute[260935]: 2025-10-11 08:45:21.970 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dnsv9bj" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.010 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.015 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config f7475a32-2490-4f8c-a700-a123973da072_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.212 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config f7475a32-2490-4f8c-a700-a123973da072_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.213 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deleting local config drive /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config because it was imported into RBD.#033[00m
Oct 11 04:45:22 np0005481065 kernel: tap865f9eb5-5e: entered promiscuous mode
Oct 11 04:45:22 np0005481065 NetworkManager[44960]: <info>  [1760172322.2965] manager: (tap865f9eb5-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 11 04:45:22 np0005481065 systemd-udevd[279381]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:45:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:22Z|00035|binding|INFO|Claiming lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for this chassis.
Oct 11 04:45:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:22Z|00036|binding|INFO|865f9eb5-5e9d-40e5-abb3-123fcf5d05c3: Claiming fa:16:3e:b3:e2:77 10.100.0.5
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.339 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:e2:77 10.100.0.5'], port_security=['fa:16:3e:b3:e2:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7475a32-2490-4f8c-a700-a123973da072', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.341 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c bound to our chassis#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.343 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c#033[00m
Oct 11 04:45:22 np0005481065 NetworkManager[44960]: <info>  [1760172322.3505] device (tap865f9eb5-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:45:22 np0005481065 NetworkManager[44960]: <info>  [1760172322.3526] device (tap865f9eb5-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:45:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:22Z|00037|binding|INFO|Setting lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 ovn-installed in OVS
Oct 11 04:45:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:22Z|00038|binding|INFO|Setting lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 up in Southbound
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[631cc7a8-da19-4338-a01a-216f8133933b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.360 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bd537c2-e1 in ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.361 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bd537c2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.361 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aab7cde0-ea25-4dbe-8706-783277143a25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.363 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03e8ee12-2d2a-4bfd-9a32-94df5d256216]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 systemd-machined[215705]: New machine qemu-5-instance-00000005.
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.380 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[07caa198-bb3f-4a5c-8847-b29376a7c0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.409 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0030254-5388-4c01-b77d-7eb7c855fcd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.451 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[64f0bc0a-8dd0-4c16-aa62-165b4c915280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.459 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aec6c34a-c46f-4eb5-8f2b-4e66b0d7e0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 NetworkManager[44960]: <info>  [1760172322.4603] manager: (tap3bd537c2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 11 04:45:22 np0005481065 systemd-udevd[279385]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.514 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10ad650c-676c-4aae-a0fc-c09c712fd1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.519 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[093df9d6-d8ae-4ae9-836f-cc73b8214d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 NetworkManager[44960]: <info>  [1760172322.5496] device (tap3bd537c2-e0): carrier: link connected
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.557 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[417e01f0-0bb3-4f6f-9f80-6993e6d36889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.580 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f938f22-7d77-4676-ac28-3e4b5c6f91bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416725, 'reachable_time': 21173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279419, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.604 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6aa32a-2c30-4ec9-9c96-7644db76c7fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:a0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416725, 'tstamp': 416725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279421, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.628 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcd41de-b9e3-4e42-a038-93443849c994]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416725, 'reachable_time': 21173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279429, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.694 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b28a4340-2cca-4890-b5e8-1c5fd1e1b149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1a0619-01ef-4b86-a4b7-88318c919f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.793 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bd537c2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:22 np0005481065 kernel: tap3bd537c2-e0: entered promiscuous mode
Oct 11 04:45:22 np0005481065 NetworkManager[44960]: <info>  [1760172322.7957] manager: (tap3bd537c2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.804 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bd537c2-e0, col_values=(('external_ids', {'iface-id': '6fcfb560-e2ca-4f85-8b53-5907aa538954'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:22Z|00039|binding|INFO|Releasing lport 6fcfb560-e2ca-4f85-8b53-5907aa538954 from this chassis (sb_readonly=0)
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.807 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.809 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b505fd38-519b-4e13-9460-1901d5386f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.810 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:45:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.811 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'env', 'PROCESS_TAG=haproxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:45:22 np0005481065 nova_compute[260935]: 2025-10-11 08:45:22.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.175 2 DEBUG nova.compute.manager [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.177 2 DEBUG oslo_concurrency.lockutils [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.177 2 DEBUG oslo_concurrency.lockutils [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.178 2 DEBUG oslo_concurrency.lockutils [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.179 2 DEBUG nova.compute.manager [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Processing event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:45:23 np0005481065 podman[279514]: 2025-10-11 08:45:23.272169658 +0000 UTC m=+0.082567021 container create 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:45:23 np0005481065 systemd[1]: Started libpod-conmon-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925.scope.
Oct 11 04:45:23 np0005481065 podman[279514]: 2025-10-11 08:45:23.233934155 +0000 UTC m=+0.044331618 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:45:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:45:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d8583738b5511fba56f82d6e92ea769c6441216f2fdd8ecdad4c60001a63ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:45:23 np0005481065 podman[279514]: 2025-10-11 08:45:23.369062008 +0000 UTC m=+0.179459431 container init 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:45:23 np0005481065 podman[279514]: 2025-10-11 08:45:23.381302258 +0000 UTC m=+0.191699641 container start 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:45:23 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : New worker (279555) forked
Oct 11 04:45:23 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : Loading success.
Oct 11 04:45:23 np0005481065 podman[279527]: 2025-10-11 08:45:23.459279477 +0000 UTC m=+0.140561219 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid)
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.516 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172323.5152283, f7475a32-2490-4f8c-a700-a123973da072 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.516 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Started (Lifecycle Event)#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.518 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.523 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.527 2 INFO nova.virt.libvirt.driver [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance spawned successfully.#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.527 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.545 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.548 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.560 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.561 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.561 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.561 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.562 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.562 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.571 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.572 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172323.5154285, f7475a32-2490-4f8c-a700-a123973da072 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.572 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.631 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.636 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172323.5212383, f7475a32-2490-4f8c-a700-a123973da072 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.636 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.651 2 INFO nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 8.55 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.651 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.662 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.666 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.690 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.726 2 INFO nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 10.09 seconds to build instance.#033[00m
Oct 11 04:45:23 np0005481065 nova_compute[260935]: 2025-10-11 08:45:23.749 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:24 np0005481065 nova_compute[260935]: 2025-10-11 08:45:24.296 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172309.2947922, aac0adcc-167d-400a-a04a-93767356cc9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:24 np0005481065 nova_compute[260935]: 2025-10-11 08:45:24.297 2 INFO nova.compute.manager [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:45:24 np0005481065 nova_compute[260935]: 2025-10-11 08:45:24.313 2 DEBUG nova.compute.manager [None req-a9bb22a3-e5d0-418f-8117-09b2f2ece1f7 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:25 np0005481065 nova_compute[260935]: 2025-10-11 08:45:25.322 2 DEBUG nova.compute.manager [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:25 np0005481065 nova_compute[260935]: 2025-10-11 08:45:25.322 2 DEBUG oslo_concurrency.lockutils [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:25 np0005481065 nova_compute[260935]: 2025-10-11 08:45:25.323 2 DEBUG oslo_concurrency.lockutils [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:25 np0005481065 nova_compute[260935]: 2025-10-11 08:45:25.323 2 DEBUG oslo_concurrency.lockutils [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:25 np0005481065 nova_compute[260935]: 2025-10-11 08:45:25.323 2 DEBUG nova.compute.manager [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] No waiting events found dispatching network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:45:25 np0005481065 nova_compute[260935]: 2025-10-11 08:45:25.324 2 WARNING nova.compute.manager [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received unexpected event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:45:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 04:45:26 np0005481065 nova_compute[260935]: 2025-10-11 08:45:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:26 np0005481065 nova_compute[260935]: 2025-10-11 08:45:26.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.218 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172312.2159898, 10e2549e-21d4-44fb-acbf-9104ec32970f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.219 2 INFO nova.compute.manager [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.245 2 DEBUG nova.compute.manager [None req-71f112a0-808f-472e-9937-bd9cfa75e2b3 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.681 2 DEBUG nova.compute.manager [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.682 2 DEBUG nova.compute.manager [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing instance network info cache due to event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.683 2 DEBUG oslo_concurrency.lockutils [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.684 2 DEBUG oslo_concurrency.lockutils [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:45:27 np0005481065 nova_compute[260935]: 2025-10-11 08:45:27.684 2 DEBUG nova.network.neutron [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:45:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 11 04:45:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:29 np0005481065 nova_compute[260935]: 2025-10-11 08:45:29.702 2 DEBUG nova.network.neutron [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updated VIF entry in instance network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:45:29 np0005481065 nova_compute[260935]: 2025-10-11 08:45:29.704 2 DEBUG nova.network.neutron [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 04:45:29 np0005481065 nova_compute[260935]: 2025-10-11 08:45:29.740 2 DEBUG oslo_concurrency.lockutils [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:45:29 np0005481065 podman[279565]: 2025-10-11 08:45:29.820716806 +0000 UTC m=+0.114121893 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 04:45:30 np0005481065 podman[279586]: 2025-10-11 08:45:30.845561891 +0000 UTC m=+0.143877194 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:45:31 np0005481065 nova_compute[260935]: 2025-10-11 08:45:31.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:31 np0005481065 nova_compute[260935]: 2025-10-11 08:45:31.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 04:45:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.160 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "09442602-bb27-4205-98ce-79781d8ab62f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.161 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.287 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.429 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.429 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.446 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.447 2 INFO nova.compute.claims [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:45:33 np0005481065 nova_compute[260935]: 2025-10-11 08:45:33.604 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 04:45:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656833175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.088 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.095 2 DEBUG nova.compute.provider_tree [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.135 2 DEBUG nova.scheduler.client.report [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.180 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.181 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:45:34 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.280 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.281 2 DEBUG nova.network.neutron [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.505 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.561 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.694 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.696 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.697 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Creating image(s)#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.728 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.759 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.793 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.798 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.887 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.889 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.891 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.892 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.927 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:34 np0005481065 nova_compute[260935]: 2025-10-11 08:45:34.934 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09442602-bb27-4205-98ce-79781d8ab62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.241 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09442602-bb27-4205-98ce-79781d8ab62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.328 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] resizing rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:45:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:35Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:e2:77 10.100.0.5
Oct 11 04:45:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:35Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:e2:77 10.100.0.5
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.454 2 DEBUG nova.objects.instance [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lazy-loading 'migration_context' on Instance uuid 09442602-bb27-4205-98ce-79781d8ab62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.472 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.473 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Ensure instance console log exists: /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.474 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.475 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.476 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.554 2 DEBUG nova.network.neutron [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.555 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.558 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.564 2 WARNING nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.571 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.573 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.578 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.579 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.580 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.580 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.582 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.582 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.583 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.584 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.585 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.586 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.586 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.587 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.588 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.588 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:45:35 np0005481065 nova_compute[260935]: 2025-10-11 08:45:35.594 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Oct 11 04:45:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717705071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.044 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.079 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.086 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599094129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.579 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.581 2 DEBUG nova.objects.instance [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 09442602-bb27-4205-98ce-79781d8ab62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.594 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <uuid>09442602-bb27-4205-98ce-79781d8ab62f</uuid>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <name>instance-00000006</name>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerExternalEventsTest-server-2118204747</nova:name>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:45:35</nova:creationTime>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:user uuid="9f1d33a73b9b4589967f6f042e6949bd">tempest-ServerExternalEventsTest-1586413318-project-member</nova:user>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <nova:project uuid="d9dd13dec2494700919923a94f4273ae">tempest-ServerExternalEventsTest-1586413318</nova:project>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <entry name="serial">09442602-bb27-4205-98ce-79781d8ab62f</entry>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <entry name="uuid">09442602-bb27-4205-98ce-79781d8ab62f</entry>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/09442602-bb27-4205-98ce-79781d8ab62f_disk">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/09442602-bb27-4205-98ce-79781d8ab62f_disk.config">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/console.log" append="off"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:45:36 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:45:36 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:45:36 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:45:36 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.637 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.637 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.637 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Using config drive#033[00m
Oct 11 04:45:36 np0005481065 nova_compute[260935]: 2025-10-11 08:45:36.652 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.062 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Creating config drive at /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.071 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6lk36gpa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.210 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6lk36gpa" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.248 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.253 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config 09442602-bb27-4205-98ce-79781d8ab62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.440 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config 09442602-bb27-4205-98ce-79781d8ab62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:37 np0005481065 nova_compute[260935]: 2025-10-11 08:45:37.441 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deleting local config drive /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config because it was imported into RBD.#033[00m
Oct 11 04:45:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:45:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3736860461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:45:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:45:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3736860461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:45:37 np0005481065 systemd-machined[215705]: New machine qemu-6-instance-00000006.
Oct 11 04:45:37 np0005481065 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 11 04:45:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Oct 11 04:45:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.505 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172338.5044954, 09442602-bb27-4205-98ce-79781d8ab62f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.510 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.511 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.516 2 INFO nova.virt.libvirt.driver [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance spawned successfully.#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.516 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.542 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.552 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.557 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.558 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.559 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.559 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.560 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.561 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.589 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.589 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172338.5109003, 09442602-bb27-4205-98ce-79781d8ab62f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.590 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] VM Started (Lifecycle Event)
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.628 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.631 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.645 2 INFO nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 3.95 seconds to spawn the instance on the hypervisor.
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.646 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.678 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.726 2 INFO nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 5.34 seconds to build instance.
Oct 11 04:45:38 np0005481065 nova_compute[260935]: 2025-10-11 08:45:38.769 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.453 2 DEBUG nova.compute.manager [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.453 2 DEBUG nova.compute.manager [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.454 2 DEBUG oslo_concurrency.lockutils [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] Acquiring lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.454 2 DEBUG oslo_concurrency.lockutils [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] Acquired lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.455 2 DEBUG nova.network.neutron [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.713 2 DEBUG nova.network.neutron [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.787 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "176447af-d4c6-422a-9347-9f07749fb6c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.787 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.894 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:45:40 np0005481065 nova_compute[260935]: 2025-10-11 08:45:40.976 2 DEBUG nova.network.neutron [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.011 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "09442602-bb27-4205-98ce-79781d8ab62f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.012 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.012 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "09442602-bb27-4205-98ce-79781d8ab62f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.012 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.013 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.014 2 INFO nova.compute.manager [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Terminating instance
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.014 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.057 2 DEBUG oslo_concurrency.lockutils [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] Releasing lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.058 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquired lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.058 2 DEBUG nova.network.neutron [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.276 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.277 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.284 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.284 2 INFO nova.compute.claims [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:45:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.786 2 DEBUG nova.network.neutron [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.897 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.926 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:41 np0005481065 nova_compute[260935]: 2025-10-11 08:45:41.930 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.036 2 DEBUG nova.network.neutron [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.094 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.201 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Releasing lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.201 2 DEBUG nova.compute.manager [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:45:42 np0005481065 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 11 04:45:42 np0005481065 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 4.777s CPU time.
Oct 11 04:45:42 np0005481065 systemd-machined[215705]: Machine qemu-6-instance-00000006 terminated.
Oct 11 04:45:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276466375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.410 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.422 2 DEBUG nova.compute.provider_tree [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.429 2 INFO nova.virt.libvirt.driver [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance destroyed successfully.
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.430 2 DEBUG nova.objects.instance [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lazy-loading 'resources' on Instance uuid 09442602-bb27-4205-98ce-79781d8ab62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.475 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.489 2 DEBUG nova.scheduler.client.report [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.616 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.617 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.622 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.632 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.633 2 INFO nova.compute.claims [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.913 2 INFO nova.virt.libvirt.driver [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deleting instance files /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f_del
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.914 2 INFO nova.virt.libvirt.driver [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deletion of /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f_del complete
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.928 2 INFO nova.compute.manager [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Terminating instance
Oct 11 04:45:42 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.929 2 DEBUG nova.compute.manager [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:45:42 np0005481065 kernel: tap865f9eb5-5e (unregistering): left promiscuous mode
Oct 11 04:45:42 np0005481065 NetworkManager[44960]: <info>  [1760172342.9848] device (tap865f9eb5-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:45:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:42Z|00040|binding|INFO|Releasing lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 from this chassis (sb_readonly=0)
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:42.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:42Z|00041|binding|INFO|Setting lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 down in Southbound
Oct 11 04:45:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:45:43Z|00042|binding|INFO|Removing iface tap865f9eb5-5e ovn-installed in OVS
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.001 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.001 2 DEBUG nova.network.neutron [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:43 np0005481065 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 11 04:45:43 np0005481065 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.997s CPU time.
Oct 11 04:45:43 np0005481065 systemd-machined[215705]: Machine qemu-5-instance-00000005 terminated.
Oct 11 04:45:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:43 np0005481065 NetworkManager[44960]: <info>  [1760172343.1496] manager: (tap865f9eb5-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.193 2 INFO nova.virt.libvirt.driver [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance destroyed successfully.
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.194 2 DEBUG nova.objects.instance [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'resources' on Instance uuid f7475a32-2490-4f8c-a700-a123973da072 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.323 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:e2:77 10.100.0.5'], port_security=['fa:16:3e:b3:e2:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7475a32-2490-4f8c-a700-a123973da072', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.325 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c unbound from our chassis
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.326 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.327 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.327 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c502b238-834f-4bf6-a692-79f21d760290]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.332 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace which is not needed anymore
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.367 2 INFO nova.compute.manager [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 1.17 seconds to destroy the instance on the hypervisor.
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.368 2 DEBUG oslo.service.loopingcall [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.369 2 DEBUG nova.compute.manager [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.369 2 DEBUG nova.network.neutron [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.447 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.453 2 DEBUG nova.virt.libvirt.vif [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(24),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1693169676',id=5,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=24,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:45:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-kd8cf275',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:45:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=f7475a32-2490-4f8c-a700-a123973da072,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.454 2 DEBUG nova.network.os_vif_util [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.455 2 DEBUG nova.network.os_vif_util [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.456 2 DEBUG os_vif [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap865f9eb5-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.467 2 INFO os_vif [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e')#033[00m
Oct 11 04:45:43 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : haproxy version is 2.8.14-c23fe91
Oct 11 04:45:43 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : path to executable is /usr/sbin/haproxy
Oct 11 04:45:43 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [WARNING]  (279546) : Exiting Master process...
Oct 11 04:45:43 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [ALERT]    (279546) : Current worker (279555) exited with code 143 (Terminated)
Oct 11 04:45:43 np0005481065 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [WARNING]  (279546) : All workers exited. Exiting... (0)
Oct 11 04:45:43 np0005481065 systemd[1]: libpod-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925.scope: Deactivated successfully.
Oct 11 04:45:43 np0005481065 podman[280059]: 2025-10-11 08:45:43.536874807 +0000 UTC m=+0.082783017 container died 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:45:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925-userdata-shm.mount: Deactivated successfully.
Oct 11 04:45:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-49d8583738b5511fba56f82d6e92ea769c6441216f2fdd8ecdad4c60001a63ed-merged.mount: Deactivated successfully.
Oct 11 04:45:43 np0005481065 podman[280059]: 2025-10-11 08:45:43.592130547 +0000 UTC m=+0.138038697 container cleanup 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:45:43 np0005481065 systemd[1]: libpod-conmon-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925.scope: Deactivated successfully.
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.658 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:45:43 np0005481065 podman[280126]: 2025-10-11 08:45:43.667953334 +0000 UTC m=+0.046241713 container remove 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.673 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d11b584b-33fc-43c6-8fd3-4fece9ccf3d5]: (4, ('Sat Oct 11 08:45:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925)\n7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925\nSat Oct 11 08:45:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925)\n7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.674 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[97e69434-aa68-475c-ab9c-6f673110588e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:43 np0005481065 kernel: tap3bd537c2-e0: left promiscuous mode
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3a115b-8e23-4526-92e3-130a4e9658f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.725 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8d3902-a1c4-40c1-bd82-a8bbd728dbce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 169 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[338c6b91-9d65-47a2-839d-17097ebf604c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.745 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[89f67ecd-7b65-4ed8-ad1f-65458e39d218]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416714, 'reachable_time': 15394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280141, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.747 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:45:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.747 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ad744295-3970-4759-93a9-b97f6d11ab28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:45:43 np0005481065 systemd[1]: run-netns-ovnmeta\x2d3bd537c2\x2de6ec\x2d4d00\x2dac83\x2dfbf5d86f963c.mount: Deactivated successfully.
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.781 2 DEBUG nova.network.neutron [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:45:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402792429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.829 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.837 2 DEBUG nova.compute.provider_tree [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:45:43 np0005481065 nova_compute[260935]: 2025-10-11 08:45:43.992 2 DEBUG nova.network.neutron [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.046 2 INFO nova.virt.libvirt.driver [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deleting instance files /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072_del#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.047 2 INFO nova.virt.libvirt.driver [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deletion of /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072_del complete#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.052 2 DEBUG nova.scheduler.client.report [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.058 2 INFO nova.compute.manager [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 0.69 seconds to deallocate network for instance.#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.164 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.166 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.167 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Creating image(s)#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.194 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.223 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.252 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.256 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.339 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.341 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.342 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.342 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.377 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.382 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 176447af-d4c6-422a-9347-9f07749fb6c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.414 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.415 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.532 2 INFO nova.compute.manager [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 1.60 seconds to destroy the instance on the hypervisor.
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.533 2 DEBUG oslo.service.loopingcall [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.534 2 DEBUG nova.compute.manager [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.535 2 DEBUG nova.network.neutron [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.582 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.583 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.714 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 176447af-d4c6-422a-9347-9f07749fb6c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.715 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.756 2 DEBUG oslo_concurrency.processutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.786 2 DEBUG nova.network.neutron [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.787 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.849 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] resizing rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:45:44 np0005481065 nova_compute[260935]: 2025-10-11 08:45:44.976 2 DEBUG nova.objects.instance [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lazy-loading 'migration_context' on Instance uuid 176447af-d4c6-422a-9347-9f07749fb6c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.209 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.210 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Ensure instance console log exists: /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.210 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.211 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.212 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.214 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.221 2 WARNING nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.228 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.229 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.235 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.235 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.236 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.236 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.237 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.238 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.238 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.238 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.239 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.239 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.240 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.240 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.241 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.241 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.246 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464896764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.286 2 DEBUG oslo_concurrency.processutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.295 2 DEBUG nova.compute.provider_tree [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.301 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.360 2 DEBUG nova.scheduler.client.report [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.506 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.534 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838524003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.716 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 169 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.735 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.738 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.760 2 INFO nova.scheduler.client.report [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Deleted allocations for instance 09442602-bb27-4205-98ce-79781d8ab62f
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.765 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.767 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.767 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Creating image(s)
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.794 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.830 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.855 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.858 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.888 2 DEBUG nova.compute.manager [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-unplugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG oslo_concurrency.lockutils [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG oslo_concurrency.lockutils [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG oslo_concurrency.lockutils [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG nova.compute.manager [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] No waiting events found dispatching network-vif-unplugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.890 2 DEBUG nova.compute.manager [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-unplugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.942 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.943 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.943 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.944 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.964 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.967 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:45:45 np0005481065 nova_compute[260935]: 2025-10-11 08:45:45.991 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498100304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.229 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.232 2 DEBUG nova.objects.instance [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lazy-loading 'pci_devices' on Instance uuid 176447af-d4c6-422a-9347-9f07749fb6c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.255 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.327 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] resizing rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.357 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <uuid>176447af-d4c6-422a-9347-9f07749fb6c3</uuid>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <name>instance-00000007</name>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiagnosticsTest-server-326353643</nova:name>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:45:45</nova:creationTime>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:user uuid="7290d84027c8464ebe5e3b46d59f3b4a">tempest-ServerDiagnosticsTest-1181458604-project-member</nova:user>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <nova:project uuid="d76f051bb1f9449fb098250c00407cac">tempest-ServerDiagnosticsTest-1181458604</nova:project>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <entry name="serial">176447af-d4c6-422a-9347-9f07749fb6c3</entry>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <entry name="uuid">176447af-d4c6-422a-9347-9f07749fb6c3</entry>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/176447af-d4c6-422a-9347-9f07749fb6c3_disk">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/176447af-d4c6-422a-9347-9f07749fb6c3_disk.config">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/console.log" append="off"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:45:46 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:45:46 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:45:46 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:45:46 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.416 2 DEBUG nova.objects.instance [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lazy-loading 'migration_context' on Instance uuid 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.437 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.437 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.438 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Using config drive#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.460 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.490 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.490 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Ensure instance console log exists: /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.490 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.491 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.491 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.492 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.496 2 WARNING nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.500 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.501 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.505 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.506 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.507 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.507 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.508 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.509 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.509 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.509 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.510 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.510 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.511 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.511 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.512 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.512 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.516 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.707 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Creating config drive at /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.711 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz__5aj3k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.739 2 DEBUG nova.network.neutron [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.760 2 INFO nova.compute.manager [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 2.23 seconds to deallocate network for instance.#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.778 2 DEBUG nova.compute.manager [req-d9c41343-b8d1-4e09-8eaa-9c24c67cf3bc req-ab67940f-142e-4682-8ae9-7ff21ffb9914 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-deleted-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.852 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz__5aj3k" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.886 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.891 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.924 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:46 np0005481065 nova_compute[260935]: 2025-10-11 08:45:46.925 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.018 2 DEBUG oslo_concurrency.processutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547142510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.076 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.077 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deleting local config drive /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config because it was imported into RBD.#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.079 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.132 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.144 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:47 np0005481065 systemd-machined[215705]: New machine qemu-7-instance-00000007.
Oct 11 04:45:47 np0005481065 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 11 04:45:47 np0005481065 podman[280642]: 2025-10-11 08:45:47.208237782 +0000 UTC m=+0.095926123 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 11 04:45:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433789802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.507 2 DEBUG oslo_concurrency.processutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.513 2 DEBUG nova.compute.provider_tree [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.531 2 DEBUG nova.scheduler.client.report [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.559 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:45:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31627406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.596 2 INFO nova.scheduler.client.report [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Deleted allocations for instance f7475a32-2490-4f8c-a700-a123973da072#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.605 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.606 2 DEBUG nova.objects.instance [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.635 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <uuid>7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d</uuid>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <name>instance-00000008</name>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1994259322</nova:name>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:45:46</nova:creationTime>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:user uuid="6c0be63273314958858acea89fc8ce4c">tempest-ServerDiagnosticsV248Test-1478543614-project-member</nova:user>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <nova:project uuid="04d9fd3da9714787ad630c8c68e4f94a">tempest-ServerDiagnosticsV248Test-1478543614</nova:project>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <entry name="serial">7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d</entry>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <entry name="uuid">7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d</entry>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/console.log" append="off"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:45:47 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:45:47 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:45:47 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:45:47 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.708 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.719 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.720 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.721 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Using config drive#033[00m
Oct 11 04:45:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 300 op/s
Oct 11 04:45:47 np0005481065 nova_compute[260935]: 2025-10-11 08:45:47.762 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.046 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Creating config drive at /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.051 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbt_s9wn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.101 2 DEBUG nova.compute.manager [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.102 2 DEBUG oslo_concurrency.lockutils [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.103 2 DEBUG oslo_concurrency.lockutils [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.103 2 DEBUG oslo_concurrency.lockutils [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.103 2 DEBUG nova.compute.manager [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] No waiting events found dispatching network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.104 2 WARNING nova.compute.manager [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received unexpected event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:45:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.182 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbt_s9wn" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.213 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.218 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.245 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172348.2446349, 176447af-d4c6-422a-9347-9f07749fb6c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.253 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.254 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.258 2 INFO nova.virt.libvirt.driver [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance spawned successfully.#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.258 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.283 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.289 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.290 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.291 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.292 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.293 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.294 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.301 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.336 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.336 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172348.253233, 176447af-d4c6-422a-9347-9f07749fb6c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.337 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] VM Started (Lifecycle Event)#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.360 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.365 2 INFO nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 4.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.366 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.367 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.402 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.439 2 INFO nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 7.21 seconds to build instance.#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.457 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:48 np0005481065 nova_compute[260935]: 2025-10-11 08:45:48.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:49 np0005481065 nova_compute[260935]: 2025-10-11 08:45:49.105 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.887s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:45:49 np0005481065 nova_compute[260935]: 2025-10-11 08:45:49.106 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deleting local config drive /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config because it was imported into RBD.#033[00m
Oct 11 04:45:49 np0005481065 systemd-machined[215705]: New machine qemu-8-instance-00000008.
Oct 11 04:45:49 np0005481065 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 11 04:45:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.093 2 DEBUG nova.compute.manager [None req-20743395-68eb-4d72-b551-1173753b75da 74eb6deb0f9147ab8a27056a316e2697 f2f7cc37b41844129b4b8895273c54a9 - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.099 2 INFO nova.compute.manager [None req-20743395-68eb-4d72-b551-1173753b75da 74eb6deb0f9147ab8a27056a316e2697 f2f7cc37b41844129b4b8895273c54a9 - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Retrieving diagnostics#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.223 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172350.2226708, 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.228 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.228 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.231 2 INFO nova.virt.libvirt.driver [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance spawned successfully.#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.232 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.266 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.280 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.303 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.304 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.306 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.307 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.309 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.310 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.343 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.344 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172350.227831, 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.345 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] VM Started (Lifecycle Event)#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.371 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.377 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.383 2 INFO nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 4.62 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.384 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.398 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.437 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "176447af-d4c6-422a-9347-9f07749fb6c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.438 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.439 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "176447af-d4c6-422a-9347-9f07749fb6c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.440 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.440 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.444 2 INFO nova.compute.manager [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Terminating instance#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.447 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "refresh_cache-176447af-d4c6-422a-9347-9f07749fb6c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.448 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquired lock "refresh_cache-176447af-d4c6-422a-9347-9f07749fb6c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.448 2 DEBUG nova.network.neutron [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.469 2 INFO nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 8.01 seconds to build instance.#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.493 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.687 2 DEBUG nova.network.neutron [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.870 2 DEBUG nova.compute.manager [None req-31ab30fe-e299-4918-af66-8ca0d8bfbf55 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.873 2 INFO nova.compute.manager [None req-31ab30fe-e299-4918-af66-8ca0d8bfbf55 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Retrieving diagnostics#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.924 2 DEBUG nova.network.neutron [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.944 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Releasing lock "refresh_cache-176447af-d4c6-422a-9347-9f07749fb6c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:45:50 np0005481065 nova_compute[260935]: 2025-10-11 08:45:50.945 2 DEBUG nova.compute.manager [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:45:51 np0005481065 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 11 04:45:51 np0005481065 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 3.733s CPU time.
Oct 11 04:45:51 np0005481065 systemd-machined[215705]: Machine qemu-7-instance-00000007 terminated.
Oct 11 04:45:51 np0005481065 nova_compute[260935]: 2025-10-11 08:45:51.173 2 INFO nova.virt.libvirt.driver [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance destroyed successfully.#033[00m
Oct 11 04:45:51 np0005481065 nova_compute[260935]: 2025-10-11 08:45:51.174 2 DEBUG nova.objects.instance [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lazy-loading 'resources' on Instance uuid 176447af-d4c6-422a-9347-9f07749fb6c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:45:51 np0005481065 nova_compute[260935]: 2025-10-11 08:45:51.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.132 2 INFO nova.virt.libvirt.driver [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deleting instance files /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3_del#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.133 2 INFO nova.virt.libvirt.driver [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deletion of /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3_del complete#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.182 2 INFO nova.compute.manager [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.183 2 DEBUG oslo.service.loopingcall [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.183 2 DEBUG nova.compute.manager [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.183 2 DEBUG nova.network.neutron [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.348 2 DEBUG nova.network.neutron [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.380 2 DEBUG nova.network.neutron [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.393 2 INFO nova.compute.manager [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 0.21 seconds to deallocate network for instance.#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.435 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.435 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.497 2 DEBUG oslo_concurrency.processutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:45:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:45:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935633768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:45:52 np0005481065 nova_compute[260935]: 2025-10-11 08:45:52.993 2 DEBUG oslo_concurrency.processutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.000 2 DEBUG nova.compute.provider_tree [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.016 2 DEBUG nova.scheduler.client.report [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.040 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.078 2 INFO nova.scheduler.client.report [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Deleted allocations for instance 176447af-d4c6-422a-9347-9f07749fb6c3
Oct 11 04:45:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.148 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:53 np0005481065 nova_compute[260935]: 2025-10-11 08:45:53.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:45:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 364 op/s
Oct 11 04:45:53 np0005481065 podman[280940]: 2025-10-11 08:45:53.80120089 +0000 UTC m=+0.099717502 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:45:54
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Oct 11 04:45:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:45:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:55.143 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:45:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:55.144 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:45:55 np0005481065 nova_compute[260935]: 2025-10-11 08:45:55.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:55 np0005481065 nova_compute[260935]: 2025-10-11 08:45:55.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 11 04:45:56 np0005481065 nova_compute[260935]: 2025-10-11 08:45:56.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:56 np0005481065 nova_compute[260935]: 2025-10-11 08:45:56.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:45:57.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:45:57 np0005481065 nova_compute[260935]: 2025-10-11 08:45:57.427 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172342.4255579, 09442602-bb27-4205-98ce-79781d8ab62f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:45:57 np0005481065 nova_compute[260935]: 2025-10-11 08:45:57.427 2 INFO nova.compute.manager [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] VM Stopped (Lifecycle Event)
Oct 11 04:45:57 np0005481065 nova_compute[260935]: 2025-10-11 08:45:57.455 2 DEBUG nova.compute.manager [None req-8d7e5293-7694-4341-ba68-bdbc3e0afa41 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:45:57 np0005481065 nova_compute[260935]: 2025-10-11 08:45:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 11 04:45:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:45:58 np0005481065 nova_compute[260935]: 2025-10-11 08:45:58.191 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172343.190196, f7475a32-2490-4f8c-a700-a123973da072 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:45:58 np0005481065 nova_compute[260935]: 2025-10-11 08:45:58.192 2 INFO nova.compute.manager [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Stopped (Lifecycle Event)
Oct 11 04:45:58 np0005481065 nova_compute[260935]: 2025-10-11 08:45:58.233 2 DEBUG nova.compute.manager [None req-6bb03b24-802f-4566-8b5a-8e8ecb0b21f9 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:45:58 np0005481065 nova_compute[260935]: 2025-10-11 08:45:58.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:45:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3ef643f4-b0fb-4368-baa6-550c2bd30c03 does not exist
Oct 11 04:45:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8f2cca00-ef3f-45d6-9bbc-fe4b597d4335 does not exist
Oct 11 04:45:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 667d308a-7106-45af-8406-5a0608de5d91 does not exist
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:45:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 04:45:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 168 op/s
Oct 11 04:45:59 np0005481065 nova_compute[260935]: 2025-10-11 08:45:59.735 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:00 np0005481065 podman[281150]: 2025-10-11 08:46:00.021842674 +0000 UTC m=+0.098069095 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 11 04:46:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507669860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.234 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.414 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.415 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 04:46:00 np0005481065 podman[281274]: 2025-10-11 08:46:00.571563058 +0000 UTC m=+0.050552556 container create 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:46:00 np0005481065 systemd[1]: Started libpod-conmon-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope.
Oct 11 04:46:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:46:00 np0005481065 podman[281274]: 2025-10-11 08:46:00.548102038 +0000 UTC m=+0.027091536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.649 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.650 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.650 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.650 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:00 np0005481065 podman[281274]: 2025-10-11 08:46:00.650979698 +0000 UTC m=+0.129969296 container init 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 04:46:00 np0005481065 podman[281274]: 2025-10-11 08:46:00.659876363 +0000 UTC m=+0.138865861 container start 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:46:00 np0005481065 podman[281274]: 2025-10-11 08:46:00.663083844 +0000 UTC m=+0.142073462 container attach 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:46:00 np0005481065 competent_jennings[281291]: 167 167
Oct 11 04:46:00 np0005481065 systemd[1]: libpod-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope: Deactivated successfully.
Oct 11 04:46:00 np0005481065 conmon[281291]: conmon 31eea65ccb3f4a73699e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope/container/memory.events
Oct 11 04:46:00 np0005481065 podman[281296]: 2025-10-11 08:46:00.70947536 +0000 UTC m=+0.024127470 container died 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c138ed692c3f0bae850e4115cd3bcafc172817c65768a0b0f49c50257b84a778-merged.mount: Deactivated successfully.
Oct 11 04:46:00 np0005481065 podman[281296]: 2025-10-11 08:46:00.753309163 +0000 UTC m=+0.067961283 container remove 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:46:00 np0005481065 systemd[1]: libpod-conmon-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope: Deactivated successfully.
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.782 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 04:46:00 np0005481065 nova_compute[260935]: 2025-10-11 08:46:00.825 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:00 np0005481065 podman[281319]: 2025-10-11 08:46:00.9844321 +0000 UTC m=+0.048367034 container create 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:46:01 np0005481065 systemd[1]: Started libpod-conmon-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope.
Oct 11 04:46:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:46:01 np0005481065 podman[281319]: 2025-10-11 08:46:00.958284493 +0000 UTC m=+0.022219437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:01 np0005481065 podman[281319]: 2025-10-11 08:46:01.083992625 +0000 UTC m=+0.147927579 container init 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:46:01 np0005481065 podman[281319]: 2025-10-11 08:46:01.096878583 +0000 UTC m=+0.160813527 container start 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:46:01 np0005481065 podman[281319]: 2025-10-11 08:46:01.102932456 +0000 UTC m=+0.166867420 container attach 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:46:01 np0005481065 podman[281352]: 2025-10-11 08:46:01.200964179 +0000 UTC m=+0.177798563 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.254 2 DEBUG nova.compute.manager [None req-b4da9aa8-5b6c-48c3-bbb6-64575046a1d1 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.261 2 INFO nova.compute.manager [None req-b4da9aa8-5b6c-48c3-bbb6-64575046a1d1 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Retrieving diagnostics#033[00m
Oct 11 04:46:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174592845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.315 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.322 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.366 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.680 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:46:01 np0005481065 nova_compute[260935]: 2025-10-11 08:46:01.681 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 168 op/s
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.058 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.059 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.060 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.060 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.060 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.063 2 INFO nova.compute.manager [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Terminating instance#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.064 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "refresh_cache-7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.065 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquired lock "refresh_cache-7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.065 2 DEBUG nova.network.neutron [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:46:02 np0005481065 gallant_varahamihira[281360]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:46:02 np0005481065 gallant_varahamihira[281360]: --> relative data size: 1.0
Oct 11 04:46:02 np0005481065 gallant_varahamihira[281360]: --> All data devices are unavailable
Oct 11 04:46:02 np0005481065 systemd[1]: libpod-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope: Deactivated successfully.
Oct 11 04:46:02 np0005481065 systemd[1]: libpod-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope: Consumed 1.108s CPU time.
Oct 11 04:46:02 np0005481065 podman[281319]: 2025-10-11 08:46:02.354145722 +0000 UTC m=+1.418080666 container died 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:46:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d-merged.mount: Deactivated successfully.
Oct 11 04:46:02 np0005481065 podman[281319]: 2025-10-11 08:46:02.439219724 +0000 UTC m=+1.503154678 container remove 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:46:02 np0005481065 systemd[1]: libpod-conmon-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope: Deactivated successfully.
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.535 2 DEBUG nova.network.neutron [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.779 2 DEBUG nova.network.neutron [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.820 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Releasing lock "refresh_cache-7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:46:02 np0005481065 nova_compute[260935]: 2025-10-11 08:46:02.822 2 DEBUG nova.compute.manager [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:46:02 np0005481065 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 11 04:46:02 np0005481065 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 11.909s CPU time.
Oct 11 04:46:02 np0005481065 systemd-machined[215705]: Machine qemu-8-instance-00000008 terminated.
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.051 2 INFO nova.virt.libvirt.driver [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance destroyed successfully.#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.053 2 DEBUG nova.objects.instance [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lazy-loading 'resources' on Instance uuid 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.358831771 +0000 UTC m=+0.035847676 container create 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:46:03 np0005481065 systemd[1]: Started libpod-conmon-8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a.scope.
Oct 11 04:46:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.427 2 INFO nova.virt.libvirt.driver [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deleting instance files /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_del#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.428 2 INFO nova.virt.libvirt.driver [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deletion of /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_del complete#033[00m
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.440459444 +0000 UTC m=+0.117475389 container init 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.344736178 +0000 UTC m=+0.021752103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.451806509 +0000 UTC m=+0.128822424 container start 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.454808835 +0000 UTC m=+0.131824770 container attach 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:46:03 np0005481065 goofy_noether[281604]: 167 167
Oct 11 04:46:03 np0005481065 systemd[1]: libpod-8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a.scope: Deactivated successfully.
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.457288295 +0000 UTC m=+0.134304240 container died 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:46:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fe07ce83f9e3b442b1d8936b57c2f5d7cfffdcb56bc04e521c56c44d3c655b90-merged.mount: Deactivated successfully.
Oct 11 04:46:03 np0005481065 podman[281588]: 2025-10-11 08:46:03.501220541 +0000 UTC m=+0.178236446 container remove 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:46:03 np0005481065 systemd[1]: libpod-conmon-8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a.scope: Deactivated successfully.
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.682 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.683 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:46:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 227 op/s
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.772 2 INFO nova.compute.manager [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.773 2 DEBUG oslo.service.loopingcall [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.774 2 DEBUG nova.compute.manager [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.774 2 DEBUG nova.network.neutron [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:46:03 np0005481065 podman[281628]: 2025-10-11 08:46:03.788451922 +0000 UTC m=+0.103111159 container create 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:46:03 np0005481065 systemd[1]: Started libpod-conmon-2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0.scope.
Oct 11 04:46:03 np0005481065 podman[281628]: 2025-10-11 08:46:03.760596045 +0000 UTC m=+0.075255342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:46:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.890 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:46:03 np0005481065 nova_compute[260935]: 2025-10-11 08:46:03.891 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:46:03 np0005481065 podman[281628]: 2025-10-11 08:46:03.907365201 +0000 UTC m=+0.222024498 container init 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:46:03 np0005481065 podman[281628]: 2025-10-11 08:46:03.921490625 +0000 UTC m=+0.236149872 container start 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:46:03 np0005481065 podman[281628]: 2025-10-11 08:46:03.92551769 +0000 UTC m=+0.240176987 container attach 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:46:04 np0005481065 nova_compute[260935]: 2025-10-11 08:46:04.053 2 DEBUG nova.network.neutron [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:46:04 np0005481065 nova_compute[260935]: 2025-10-11 08:46:04.254 2 DEBUG nova.network.neutron [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:46:04 np0005481065 nova_compute[260935]: 2025-10-11 08:46:04.392 2 INFO nova.compute.manager [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 0.62 seconds to deallocate network for instance.
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007559663345705542 of space, bias 1.0, pg target 0.22678990037116625 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:46:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:46:04 np0005481065 nova_compute[260935]: 2025-10-11 08:46:04.559 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:04 np0005481065 nova_compute[260935]: 2025-10-11 08:46:04.560 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:04 np0005481065 nova_compute[260935]: 2025-10-11 08:46:04.612 2 DEBUG oslo_concurrency.processutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]: {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:    "0": [
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:        {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "devices": [
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "/dev/loop3"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            ],
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_name": "ceph_lv0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_size": "21470642176",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "name": "ceph_lv0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "tags": {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cluster_name": "ceph",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.crush_device_class": "",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.encrypted": "0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osd_id": "0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.type": "block",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.vdo": "0"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            },
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "type": "block",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "vg_name": "ceph_vg0"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:        }
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:    ],
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:    "1": [
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:        {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "devices": [
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "/dev/loop4"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            ],
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_name": "ceph_lv1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_size": "21470642176",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "name": "ceph_lv1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "tags": {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cluster_name": "ceph",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.crush_device_class": "",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.encrypted": "0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osd_id": "1",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.type": "block",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.vdo": "0"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            },
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "type": "block",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "vg_name": "ceph_vg1"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:        }
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:    ],
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:    "2": [
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:        {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "devices": [
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "/dev/loop5"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            ],
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_name": "ceph_lv2",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_size": "21470642176",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "name": "ceph_lv2",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "tags": {
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.cluster_name": "ceph",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.crush_device_class": "",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.encrypted": "0",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osd_id": "2",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.type": "block",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:                "ceph.vdo": "0"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            },
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "type": "block",
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:            "vg_name": "ceph_vg2"
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:        }
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]:    ]
Oct 11 04:46:04 np0005481065 musing_sinoussi[281645]: }
Oct 11 04:46:04 np0005481065 systemd[1]: libpod-2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0.scope: Deactivated successfully.
Oct 11 04:46:04 np0005481065 podman[281628]: 2025-10-11 08:46:04.725948409 +0000 UTC m=+1.040607646 container died 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:46:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5-merged.mount: Deactivated successfully.
Oct 11 04:46:04 np0005481065 podman[281628]: 2025-10-11 08:46:04.8060891 +0000 UTC m=+1.120748337 container remove 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:46:04 np0005481065 systemd[1]: libpod-conmon-2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0.scope: Deactivated successfully.
Oct 11 04:46:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466782242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.090 2 DEBUG oslo_concurrency.processutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.099 2 DEBUG nova.compute.provider_tree [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.532 2 DEBUG nova.scheduler.client.report [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.654 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.661587824 +0000 UTC m=+0.064539766 container create 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:46:05 np0005481065 systemd[1]: Started libpod-conmon-6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec.scope.
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.634730677 +0000 UTC m=+0.037682669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 11 04:46:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.771730413 +0000 UTC m=+0.174682365 container init 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.784100436 +0000 UTC m=+0.187052388 container start 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.788103911 +0000 UTC m=+0.191055873 container attach 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:46:05 np0005481065 youthful_lichterman[281845]: 167 167
Oct 11 04:46:05 np0005481065 systemd[1]: libpod-6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec.scope: Deactivated successfully.
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.792770894 +0000 UTC m=+0.195722846 container died 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.803 2 INFO nova.scheduler.client.report [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Deleted allocations for instance 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d
Oct 11 04:46:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-02eed1aea9672d1bafc896e7280e9ef0bcd69e1b5238d8afa223f4f5701690a8-merged.mount: Deactivated successfully.
Oct 11 04:46:05 np0005481065 podman[281829]: 2025-10-11 08:46:05.839118249 +0000 UTC m=+0.242070201 container remove 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:46:05 np0005481065 systemd[1]: libpod-conmon-6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec.scope: Deactivated successfully.
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.939 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "97254317-c848-4369-b0b7-147d230fb1d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:05 np0005481065 nova_compute[260935]: 2025-10-11 08:46:05.939 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:06 np0005481065 podman[281872]: 2025-10-11 08:46:06.102039604 +0000 UTC m=+0.076742344 container create da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:46:06 np0005481065 systemd[1]: Started libpod-conmon-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope.
Oct 11 04:46:06 np0005481065 podman[281872]: 2025-10-11 08:46:06.073644483 +0000 UTC m=+0.048347233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.171 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172351.1709952, 176447af-d4c6-422a-9347-9f07749fb6c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.172 2 INFO nova.compute.manager [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:46:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:46:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.202 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:06 np0005481065 podman[281872]: 2025-10-11 08:46:06.204774141 +0000 UTC m=+0.179476911 container init da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:46:06 np0005481065 podman[281872]: 2025-10-11 08:46:06.218594586 +0000 UTC m=+0.193297316 container start da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:06 np0005481065 podman[281872]: 2025-10-11 08:46:06.222529589 +0000 UTC m=+0.197232369 container attach da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.302 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.415 2 DEBUG nova.compute.manager [None req-3e16f465-b2ca-46ae-9588-69170c0b5f20 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.640 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.640 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.651 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:46:06 np0005481065 nova_compute[260935]: 2025-10-11 08:46:06.652 2 INFO nova.compute.claims [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]: {
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "osd_id": 2,
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "type": "bluestore"
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:    },
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "osd_id": 0,
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "type": "bluestore"
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:    },
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "osd_id": 1,
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:        "type": "bluestore"
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]:    }
Oct 11 04:46:07 np0005481065 relaxed_chaplygin[281889]: }
Oct 11 04:46:07 np0005481065 systemd[1]: libpod-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope: Deactivated successfully.
Oct 11 04:46:07 np0005481065 podman[281872]: 2025-10-11 08:46:07.340763573 +0000 UTC m=+1.315466323 container died da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:46:07 np0005481065 systemd[1]: libpod-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope: Consumed 1.123s CPU time.
Oct 11 04:46:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a-merged.mount: Deactivated successfully.
Oct 11 04:46:07 np0005481065 podman[281872]: 2025-10-11 08:46:07.423334514 +0000 UTC m=+1.398037224 container remove da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:46:07 np0005481065 systemd[1]: libpod-conmon-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope: Deactivated successfully.
Oct 11 04:46:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:46:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:46:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:46:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:46:07 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 508e8fd9-3515-4e97-865b-046292aef02c does not exist
Oct 11 04:46:07 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d72ca338-58cc-4bcd-a798-9235122e7d57 does not exist
Oct 11 04:46:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 04:46:07 np0005481065 nova_compute[260935]: 2025-10-11 08:46:07.898 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231748333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.373 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.380 2 DEBUG nova.compute.provider_tree [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.397 2 DEBUG nova.scheduler.client.report [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.427 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.428 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.486 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.487 2 DEBUG nova.network.neutron [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:46:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:46:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.521 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.577 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.668 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.669 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.670 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Creating image(s)#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.702 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.737 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.770 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.775 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.863 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.864 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.864 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.865 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.891 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:08 np0005481065 nova_compute[260935]: 2025-10-11 08:46:08.895 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 97254317-c848-4369-b0b7-147d230fb1d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.172 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 97254317-c848-4369-b0b7-147d230fb1d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.268 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] resizing rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.382 2 DEBUG nova.objects.instance [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'migration_context' on Instance uuid 97254317-c848-4369-b0b7-147d230fb1d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.404 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.404 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Ensure instance console log exists: /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.405 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.406 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.406 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.801 2 DEBUG nova.network.neutron [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.801 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.804 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.809 2 WARNING nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.818 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.819 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.823 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.823 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.824 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.824 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.825 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.825 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.826 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.826 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.826 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.827 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.827 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.828 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.828 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.828 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:46:09 np0005481065 nova_compute[260935]: 2025-10-11 08:46:09.833 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/55869803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:10 np0005481065 nova_compute[260935]: 2025-10-11 08:46:10.298 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:10 np0005481065 nova_compute[260935]: 2025-10-11 08:46:10.329 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:10 np0005481065 nova_compute[260935]: 2025-10-11 08:46:10.334 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209928029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:10 np0005481065 nova_compute[260935]: 2025-10-11 08:46:10.816 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:10 np0005481065 nova_compute[260935]: 2025-10-11 08:46:10.819 2 DEBUG nova.objects.instance [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97254317-c848-4369-b0b7-147d230fb1d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:10 np0005481065 nova_compute[260935]: 2025-10-11 08:46:10.997 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:10 np0005481065 nova_compute[260935]:  <uuid>97254317-c848-4369-b0b7-147d230fb1d1</uuid>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:  <name>instance-00000009</name>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-654853952</nova:name>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:09</nova:creationTime>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:user uuid="e35666a092ce4c07b2e0604bfeb5d359">tempest-ServersAdminNegativeTestJSON-646482616-project-member</nova:user>
Oct 11 04:46:10 np0005481065 nova_compute[260935]:        <nova:project uuid="0b7ec3d2d2084beaa800f0f2abd06c77">tempest-ServersAdminNegativeTestJSON-646482616</nova:project>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <entry name="serial">97254317-c848-4369-b0b7-147d230fb1d1</entry>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <entry name="uuid">97254317-c848-4369-b0b7-147d230fb1d1</entry>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/97254317-c848-4369-b0b7-147d230fb1d1_disk">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/97254317-c848-4369-b0b7-147d230fb1d1_disk.config">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/console.log" append="off"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:11 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:11 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:11 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:11 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.072 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.073 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.074 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Using config drive
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.104 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.257 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.258 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.278 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.284 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Creating config drive at /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.291 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_unipmym execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.432 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_unipmym" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.468 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.473 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config 97254317-c848-4369-b0b7-147d230fb1d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.503 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.504 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.518 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.518 2 INFO nova.compute.claims [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.650 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config 97254317-c848-4369-b0b7-147d230fb1d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.651 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deleting local config drive /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config because it was imported into RBD.
Oct 11 04:46:11 np0005481065 nova_compute[260935]: 2025-10-11 08:46:11.672 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 04:46:11 np0005481065 systemd-machined[215705]: New machine qemu-9-instance-00000009.
Oct 11 04:46:11 np0005481065 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 11 04:46:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686642392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.188 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.195 2 DEBUG nova.compute.provider_tree [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.211 2 DEBUG nova.scheduler.client.report [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.241 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.241 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.303 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.304 2 DEBUG nova.network.neutron [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.329 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.349 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.450 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.451 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.452 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Creating image(s)#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.479 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.510 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.541 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.546 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.612 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.613 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.614 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.615 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.643 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.649 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.677 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172372.6763885, 97254317-c848-4369-b0b7-147d230fb1d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.678 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.682 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.683 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.689 2 INFO nova.virt.libvirt.driver [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance spawned successfully.#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.690 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.708 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.717 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.721 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.722 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.722 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.723 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.724 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.724 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.734 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.735 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172372.6783335, 97254317-c848-4369-b0b7-147d230fb1d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.736 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] VM Started (Lifecycle Event)#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.756 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.760 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.791 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.816 2 INFO nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 4.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.817 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.898 2 INFO nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 6.29 seconds to build instance.#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.950 2 DEBUG nova.network.neutron [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.951 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:46:12 np0005481065 nova_compute[260935]: 2025-10-11 08:46:12.965 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.003 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.046 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] resizing rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:46:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.173 2 DEBUG nova.objects.instance [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'migration_context' on Instance uuid adcf97ac-4b08-4f25-844b-e6f4e1305e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.211 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.212 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Ensure instance console log exists: /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.212 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.213 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.213 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.216 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.222 2 WARNING nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.228 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.229 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.232 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.233 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.233 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.234 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.235 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.235 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.236 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.236 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.236 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.237 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.237 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.237 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.238 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.238 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.243 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436262319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.734 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.770 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:13 np0005481065 nova_compute[260935]: 2025-10-11 08:46:13.778 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391371203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.219 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.222 2 DEBUG nova.objects.instance [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'pci_devices' on Instance uuid adcf97ac-4b08-4f25-844b-e6f4e1305e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.268 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <uuid>adcf97ac-4b08-4f25-844b-e6f4e1305e3b</uuid>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <name>instance-0000000a</name>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1073345368</nova:name>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:13</nova:creationTime>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:user uuid="c74c2e90e9e04fd283655f1fac1d2bf7">tempest-DeleteServersAdminTestJSON-159697733-project-member</nova:user>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <nova:project uuid="f8adf6d5f1034e90bb5184f87dbf3a16">tempest-DeleteServersAdminTestJSON-159697733</nova:project>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <entry name="serial">adcf97ac-4b08-4f25-844b-e6f4e1305e3b</entry>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <entry name="uuid">adcf97ac-4b08-4f25-844b-e6f4e1305e3b</entry>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/console.log" append="off"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:14 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:14 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:14 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:14 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.536 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.536 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.548 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Using config drive#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.590 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.748 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Creating config drive at /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.756 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xw3gg8j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.903 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xw3gg8j" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.930 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:14 np0005481065 nova_compute[260935]: 2025-10-11 08:46:14.934 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:15 np0005481065 nova_compute[260935]: 2025-10-11 08:46:15.098 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:15 np0005481065 nova_compute[260935]: 2025-10-11 08:46:15.100 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deleting local config drive /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config because it was imported into RBD.#033[00m
Oct 11 04:46:15 np0005481065 systemd-machined[215705]: New machine qemu-10-instance-0000000a.
Oct 11 04:46:15 np0005481065 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct 11 04:46:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:46:15.178 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:46:15.178 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:46:15.179 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.090 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172376.0894575, adcf97ac-4b08-4f25-844b-e6f4e1305e3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.091 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.123 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.124 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.126 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.133 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.137 2 INFO nova.virt.libvirt.driver [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance spawned successfully.#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.138 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.158 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.159 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172376.1212406, adcf97ac-4b08-4f25-844b-e6f4e1305e3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.160 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] VM Started (Lifecycle Event)#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.184 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.185 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.186 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.187 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.188 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.189 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.219 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.269 2 INFO nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 3.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.269 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.281 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.325 2 INFO nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 4.91 seconds to build instance.#033[00m
Oct 11 04:46:16 np0005481065 nova_compute[260935]: 2025-10-11 08:46:16.362 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 227 op/s
Oct 11 04:46:17 np0005481065 podman[282718]: 2025-10-11 08:46:17.805265968 +0000 UTC m=+0.099125755 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.048 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172363.0469139, 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.048 2 INFO nova.compute.manager [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.096 2 DEBUG nova.compute.manager [None req-c66dbc6d-cab8-4e01-aaf3-dd2374694f8f - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.525 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.525 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.525 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.526 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.526 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.527 2 INFO nova.compute.manager [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Terminating instance
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.528 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "refresh_cache-adcf97ac-4b08-4f25-844b-e6f4e1305e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.528 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquired lock "refresh_cache-adcf97ac-4b08-4f25-844b-e6f4e1305e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.528 2 DEBUG nova.network.neutron [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.870 2 DEBUG nova.network.neutron [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.876 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.877 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.904 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.984 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.985 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.991 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:46:18 np0005481065 nova_compute[260935]: 2025-10-11 08:46:18.991 2 INFO nova.compute.claims [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.186 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837523798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.650 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.662 2 DEBUG nova.compute.provider_tree [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.681 2 DEBUG nova.scheduler.client.report [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.710 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.712 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:46:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.764 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.764 2 DEBUG nova.network.neutron [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.789 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.816 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.918 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.919 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.920 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Creating image(s)
Oct 11 04:46:19 np0005481065 nova_compute[260935]: 2025-10-11 08:46:19.954 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.002 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.036 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.040 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.149 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.150 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.153 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.154 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.199 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.207 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d1140ad8-4653-441c-aeec-bf4de6a680f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.249 2 DEBUG nova.network.neutron [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.277 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Releasing lock "refresh_cache-adcf97ac-4b08-4f25-844b-e6f4e1305e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.278 2 DEBUG nova.compute.manager [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:46:20 np0005481065 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 11 04:46:20 np0005481065 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 5.109s CPU time.
Oct 11 04:46:20 np0005481065 systemd-machined[215705]: Machine qemu-10-instance-0000000a terminated.
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.449 2 DEBUG nova.network.neutron [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.450 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.501 2 INFO nova.virt.libvirt.driver [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance destroyed successfully.
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.501 2 DEBUG nova.objects.instance [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lazy-loading 'resources' on Instance uuid adcf97ac-4b08-4f25-844b-e6f4e1305e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.566 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d1140ad8-4653-441c-aeec-bf4de6a680f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.681 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] resizing rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.834 2 DEBUG nova.objects.instance [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'migration_context' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.879 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.879 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Ensure instance console log exists: /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.880 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.880 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.880 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.882 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.887 2 WARNING nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.894 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.895 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.901 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.903 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.906 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.906 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.906 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.907 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.914 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.971 2 INFO nova.virt.libvirt.driver [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deleting instance files /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_del#033[00m
Oct 11 04:46:20 np0005481065 nova_compute[260935]: 2025-10-11 08:46:20.973 2 INFO nova.virt.libvirt.driver [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deletion of /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_del complete#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.048 2 INFO nova.compute.manager [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.049 2 DEBUG oslo.service.loopingcall [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.049 2 DEBUG nova.compute.manager [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.049 2 DEBUG nova.network.neutron [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.212 2 DEBUG nova.network.neutron [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.226 2 DEBUG nova.network.neutron [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.239 2 INFO nova.compute.manager [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 0.19 seconds to deallocate network for instance.#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.280 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.281 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843929755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.379 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.412 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.418 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.456 2 DEBUG oslo_concurrency.processutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 11 04:46:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4037687385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125394097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.910 2 DEBUG oslo_concurrency.processutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.924 2 DEBUG nova.compute.provider_tree [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.945 2 DEBUG nova.scheduler.client.report [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.954 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.959 2 DEBUG nova.objects.instance [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'pci_devices' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.979 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:21 np0005481065 nova_compute[260935]: 2025-10-11 08:46:21.988 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <uuid>d1140ad8-4653-441c-aeec-bf4de6a680f4</uuid>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <name>instance-0000000b</name>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1290371758</nova:name>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:20</nova:creationTime>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:user uuid="e35666a092ce4c07b2e0604bfeb5d359">tempest-ServersAdminNegativeTestJSON-646482616-project-member</nova:user>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <nova:project uuid="0b7ec3d2d2084beaa800f0f2abd06c77">tempest-ServersAdminNegativeTestJSON-646482616</nova:project>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <entry name="serial">d1140ad8-4653-441c-aeec-bf4de6a680f4</entry>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <entry name="uuid">d1140ad8-4653-441c-aeec-bf4de6a680f4</entry>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d1140ad8-4653-441c-aeec-bf4de6a680f4_disk">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/console.log" append="off"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:21 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:21 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:21 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:21 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.023 2 INFO nova.scheduler.client.report [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Deleted allocations for instance adcf97ac-4b08-4f25-844b-e6f4e1305e3b
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.047 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.047 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.048 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Using config drive
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.090 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.112 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.376 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Creating config drive at /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.381 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dzpe8e7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.533 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dzpe8e7" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.569 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.574 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.782 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:22 np0005481065 nova_compute[260935]: 2025-10-11 08:46:22.784 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deleting local config drive /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config because it was imported into RBD.
Oct 11 04:46:22 np0005481065 systemd-machined[215705]: New machine qemu-11-instance-0000000b.
Oct 11 04:46:22 np0005481065 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 11 04:46:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:23 np0005481065 nova_compute[260935]: 2025-10-11 08:46:23.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 134 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.095 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172384.0934303, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.096 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Resumed (Lifecycle Event)
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.100 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.100 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.106 2 INFO nova.virt.libvirt.driver [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance spawned successfully.
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.107 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.115 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.126 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.134 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.134 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.135 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.136 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.136 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.137 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.143 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.143 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172384.0960228, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.144 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Started (Lifecycle Event)#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.166 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.171 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.193 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.198 2 INFO nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 4.28 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.199 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.271 2 INFO nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 5.32 seconds to build instance.#033[00m
Oct 11 04:46:24 np0005481065 nova_compute[260935]: 2025-10-11 08:46:24.288 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:24 np0005481065 podman[283148]: 2025-10-11 08:46:24.785700561 +0000 UTC m=+0.091103515 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.034 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.035 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.059 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.203 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.204 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.213 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.213 2 INFO nova.compute.claims [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.376 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 134 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Oct 11 04:46:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271988514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.874 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.881 2 DEBUG nova.compute.provider_tree [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.902 2 DEBUG nova.scheduler.client.report [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.925 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.926 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.981 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:46:25 np0005481065 nova_compute[260935]: 2025-10-11 08:46:25.982 2 DEBUG nova.network.neutron [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.021 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.039 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.187 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.189 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.190 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Creating image(s)
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.226 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.261 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.286 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.292 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.326 2 DEBUG nova.objects.instance [None req-8350f79d-4d9a-4f58-a70c-8d5ce260990b f7290af8b0bb4eaea5271709d2f074d1 bf78cdd64a5b457383affaaa7a1353ec - - default default] Lazy-loading 'pci_devices' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.347 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172386.3466928, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.347 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Paused (Lifecycle Event)
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.375 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.376 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.378 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.379 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.379 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.409 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.415 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.448 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.469 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.619 2 DEBUG nova.network.neutron [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.620 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:46:26 np0005481065 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 11 04:46:26 np0005481065 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 3.595s CPU time.
Oct 11 04:46:26 np0005481065 systemd-machined[215705]: Machine qemu-11-instance-0000000b terminated.
Oct 11 04:46:26 np0005481065 nova_compute[260935]: 2025-10-11 08:46:26.916 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.008 2 DEBUG nova.compute.manager [None req-8350f79d-4d9a-4f58-a70c-8d5ce260990b f7290af8b0bb4eaea5271709d2f074d1 bf78cdd64a5b457383affaaa7a1353ec - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.015 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] resizing rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.145 2 DEBUG nova.objects.instance [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'migration_context' on Instance uuid e1d5bdab-1090-4b36-bff3-f11d0bc1d591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.168 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.169 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Ensure instance console log exists: /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.169 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.170 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.170 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.173 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.179 2 WARNING nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.184 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.185 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.188 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.189 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.189 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.190 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.191 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.191 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.191 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.192 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.192 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.193 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.193 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.194 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.194 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.194 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.199 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466551359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.666 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.688 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:27 np0005481065 nova_compute[260935]: 2025-10-11 08:46:27.691 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 381 op/s
Oct 11 04:46:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618721267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.269 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.271 2 DEBUG nova.objects.instance [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1d5bdab-1090-4b36-bff3-f11d0bc1d591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.287 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <uuid>e1d5bdab-1090-4b36-bff3-f11d0bc1d591</uuid>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <name>instance-0000000c</name>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-884730205</nova:name>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:27</nova:creationTime>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:user uuid="c74c2e90e9e04fd283655f1fac1d2bf7">tempest-DeleteServersAdminTestJSON-159697733-project-member</nova:user>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <nova:project uuid="f8adf6d5f1034e90bb5184f87dbf3a16">tempest-DeleteServersAdminTestJSON-159697733</nova:project>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <entry name="serial">e1d5bdab-1090-4b36-bff3-f11d0bc1d591</entry>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <entry name="uuid">e1d5bdab-1090-4b36-bff3-f11d0bc1d591</entry>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/console.log" append="off"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:28 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:28 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:28 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:28 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.354 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.354 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.355 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Using config drive
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.376 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.789 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Creating config drive at /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.793 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprzemzbm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.926 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprzemzbm7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.968 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:28 np0005481065 nova_compute[260935]: 2025-10-11 08:46:28.974 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.167 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.169 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deleting local config drive /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config because it was imported into RBD.
Oct 11 04:46:29 np0005481065 systemd-machined[215705]: New machine qemu-12-instance-0000000c.
Oct 11 04:46:29 np0005481065 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct 11 04:46:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.797 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.799 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.800 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "d1140ad8-4653-441c-aeec-bf4de6a680f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.800 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.800 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.802 2 INFO nova.compute.manager [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Terminating instance
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.803 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "refresh_cache-d1140ad8-4653-441c-aeec-bf4de6a680f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.803 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquired lock "refresh_cache-d1140ad8-4653-441c-aeec-bf4de6a680f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:46:29 np0005481065 nova_compute[260935]: 2025-10-11 08:46:29.803 2 DEBUG nova.network.neutron [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:46:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 11 04:46:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 11 04:46:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.072 2 DEBUG nova.network.neutron [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.329 2 DEBUG nova.network.neutron [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.351 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Releasing lock "refresh_cache-d1140ad8-4653-441c-aeec-bf4de6a680f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.352 2 DEBUG nova.compute.manager [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.363 2 INFO nova.virt.libvirt.driver [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance destroyed successfully.
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.364 2 DEBUG nova.objects.instance [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'resources' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.449 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172390.4456844, e1d5bdab-1090-4b36-bff3-f11d0bc1d591 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.450 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.454 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.455 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.461 2 INFO nova.virt.libvirt.driver [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance spawned successfully.#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.462 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.482 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.487 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.500 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.501 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.502 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.502 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.503 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.504 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.512 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.513 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172390.4483535, e1d5bdab-1090-4b36-bff3-f11d0bc1d591 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.513 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] VM Started (Lifecycle Event)#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.548 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.556 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.588 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.593 2 INFO nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 4.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.594 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.679 2 INFO nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 5.50 seconds to build instance.#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.703 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:30 np0005481065 podman[283560]: 2025-10-11 08:46:30.83197407 +0000 UTC m=+0.120292099 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.859 2 INFO nova.virt.libvirt.driver [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deleting instance files /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4_del#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.860 2 INFO nova.virt.libvirt.driver [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deletion of /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4_del complete#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.955 2 INFO nova.compute.manager [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.956 2 DEBUG oslo.service.loopingcall [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.957 2 DEBUG nova.compute.manager [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:46:30 np0005481065 nova_compute[260935]: 2025-10-11 08:46:30.957 2 DEBUG nova.network.neutron [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.187 2 DEBUG nova.network.neutron [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.200 2 DEBUG nova.network.neutron [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.213 2 INFO nova.compute.manager [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 0.26 seconds to deallocate network for instance.#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.255 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.256 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.415 2 DEBUG oslo_concurrency.processutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 262 op/s
Oct 11 04:46:31 np0005481065 podman[283601]: 2025-10-11 08:46:31.855050345 +0000 UTC m=+0.154331083 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 04:46:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658224353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.900 2 DEBUG oslo_concurrency.processutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.908 2 DEBUG nova.compute.provider_tree [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.925 2 DEBUG nova.scheduler.client.report [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.956 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:31 np0005481065 nova_compute[260935]: 2025-10-11 08:46:31.999 2 INFO nova.scheduler.client.report [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Deleted allocations for instance d1140ad8-4653-441c-aeec-bf4de6a680f4#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.039 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.039 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.040 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.040 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.041 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.043 2 INFO nova.compute.manager [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Terminating instance#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.045 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "refresh_cache-e1d5bdab-1090-4b36-bff3-f11d0bc1d591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.045 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquired lock "refresh_cache-e1d5bdab-1090-4b36-bff3-f11d0bc1d591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.046 2 DEBUG nova.network.neutron [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.071 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.197 2 DEBUG nova.network.neutron [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.413 2 DEBUG nova.network.neutron [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.434 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Releasing lock "refresh_cache-e1d5bdab-1090-4b36-bff3-f11d0bc1d591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.435 2 DEBUG nova.compute.manager [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:46:32 np0005481065 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 11 04:46:32 np0005481065 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 3.241s CPU time.
Oct 11 04:46:32 np0005481065 systemd-machined[215705]: Machine qemu-12-instance-0000000c terminated.
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.661 2 INFO nova.virt.libvirt.driver [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance destroyed successfully.#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.662 2 DEBUG nova.objects.instance [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'resources' on Instance uuid e1d5bdab-1090-4b36-bff3-f11d0bc1d591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.750 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "97254317-c848-4369-b0b7-147d230fb1d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.751 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.752 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "97254317-c848-4369-b0b7-147d230fb1d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.752 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.753 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.755 2 INFO nova.compute.manager [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Terminating instance
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.756 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "refresh_cache-97254317-c848-4369-b0b7-147d230fb1d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.757 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquired lock "refresh_cache-97254317-c848-4369-b0b7-147d230fb1d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:46:32 np0005481065 nova_compute[260935]: 2025-10-11 08:46:32.757 2 DEBUG nova.network.neutron [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:46:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 11 04:46:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 11 04:46:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.107 2 DEBUG nova.network.neutron [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:46:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.155 2 INFO nova.virt.libvirt.driver [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deleting instance files /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_del
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.156 2 INFO nova.virt.libvirt.driver [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deletion of /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_del complete
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.240 2 INFO nova.compute.manager [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.241 2 DEBUG oslo.service.loopingcall [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.241 2 DEBUG nova.compute.manager [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.242 2 DEBUG nova.network.neutron [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.431 2 DEBUG nova.network.neutron [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.444 2 DEBUG nova.network.neutron [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.457 2 DEBUG nova.network.neutron [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.470 2 INFO nova.compute.manager [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 0.23 seconds to deallocate network for instance.
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.476 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Releasing lock "refresh_cache-97254317-c848-4369-b0b7-147d230fb1d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.477 2 DEBUG nova.compute.manager [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.514 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.515 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:33 np0005481065 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 11 04:46:33 np0005481065 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 13.051s CPU time.
Oct 11 04:46:33 np0005481065 systemd-machined[215705]: Machine qemu-9-instance-00000009 terminated.
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.585 2 DEBUG oslo_concurrency.processutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.711 2 INFO nova.virt.libvirt.driver [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance destroyed successfully.
Oct 11 04:46:33 np0005481065 nova_compute[260935]: 2025-10-11 08:46:33.712 2 DEBUG nova.objects.instance [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'resources' on Instance uuid 97254317-c848-4369-b0b7-147d230fb1d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 427 op/s
Oct 11 04:46:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/868699041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.071 2 DEBUG oslo_concurrency.processutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.082 2 DEBUG nova.compute.provider_tree [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.103 2 DEBUG nova.scheduler.client.report [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.129 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.157 2 INFO nova.scheduler.client.report [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Deleted allocations for instance e1d5bdab-1090-4b36-bff3-f11d0bc1d591
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.177 2 INFO nova.virt.libvirt.driver [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deleting instance files /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1_del
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.178 2 INFO nova.virt.libvirt.driver [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deletion of /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1_del complete
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.252 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.255 2 INFO nova.compute.manager [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.255 2 DEBUG oslo.service.loopingcall [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.255 2 DEBUG nova.compute.manager [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.256 2 DEBUG nova.network.neutron [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.391 2 DEBUG nova.network.neutron [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.404 2 DEBUG nova.network.neutron [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.422 2 INFO nova.compute.manager [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 0.17 seconds to deallocate network for instance.
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.471 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.471 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.525 2 DEBUG oslo_concurrency.processutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870174784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.960 2 DEBUG oslo_concurrency.processutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.970 2 DEBUG nova.compute.provider_tree [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:34 np0005481065 nova_compute[260935]: 2025-10-11 08:46:34.991 2 DEBUG nova.scheduler.client.report [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:35 np0005481065 nova_compute[260935]: 2025-10-11 08:46:35.024 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:35 np0005481065 nova_compute[260935]: 2025-10-11 08:46:35.073 2 INFO nova.scheduler.client.report [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Deleted allocations for instance 97254317-c848-4369-b0b7-147d230fb1d1
Oct 11 04:46:35 np0005481065 nova_compute[260935]: 2025-10-11 08:46:35.174 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:35 np0005481065 nova_compute[260935]: 2025-10-11 08:46:35.500 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172380.4990637, adcf97ac-4b08-4f25-844b-e6f4e1305e3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:35 np0005481065 nova_compute[260935]: 2025-10-11 08:46:35.501 2 INFO nova.compute.manager [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] VM Stopped (Lifecycle Event)
Oct 11 04:46:35 np0005481065 nova_compute[260935]: 2025-10-11 08:46:35.532 2 DEBUG nova.compute.manager [None req-3c54d203-cde9-48b7-8c46-ffd7dd493ae3 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 182 op/s
Oct 11 04:46:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 11 04:46:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 11 04:46:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 11 04:46:36 np0005481065 nova_compute[260935]: 2025-10-11 08:46:36.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:46:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896068585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:46:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:46:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896068585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:46:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 48 KiB/s wr, 294 op/s
Oct 11 04:46:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.624 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "318c32b0-9990-4579-8abb-fc79e7460d77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.625 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.650 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.726 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.727 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.734 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.735 2 INFO nova.compute.claims [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:46:38 np0005481065 nova_compute[260935]: 2025-10-11 08:46:38.857 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472654938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.394 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.402 2 DEBUG nova.compute.provider_tree [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.420 2 DEBUG nova.scheduler.client.report [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.444 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.445 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.494 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.495 2 DEBUG nova.network.neutron [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.520 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.543 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.668 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.670 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.670 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Creating image(s)
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.704 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.731 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 287 op/s
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.760 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.764 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.852 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.854 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.855 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.855 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.886 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:39 np0005481065 nova_compute[260935]: 2025-10-11 08:46:39.890 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 318c32b0-9990-4579-8abb-fc79e7460d77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.030 2 DEBUG nova.network.neutron [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.031 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.201 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 318c32b0-9990-4579-8abb-fc79e7460d77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.285 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] resizing rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.405 2 DEBUG nova.objects.instance [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'migration_context' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.421 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.421 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Ensure instance console log exists: /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.422 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.423 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.423 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.426 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.432 2 WARNING nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.438 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.439 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.442 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.443 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.444 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.444 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.445 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.445 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.446 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.446 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.446 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.447 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.447 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.447 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.448 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.448 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.452 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.492 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "fa95251f-1ce7-4a45-8f53-ae932716a172" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.492 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.508 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.573 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.574 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.587 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.588 2 INFO nova.compute.claims [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.748 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586804537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.941 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.968 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:40 np0005481065 nova_compute[260935]: 2025-10-11 08:46:40.973 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297712343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.255 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.263 2 DEBUG nova.compute.provider_tree [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.286 2 DEBUG nova.scheduler.client.report [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.316 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.317 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.386 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.386 2 DEBUG nova.network.neutron [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.420 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.441 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:46:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247863693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.482 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.484 2 DEBUG nova.objects.instance [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'pci_devices' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.499 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <uuid>318c32b0-9990-4579-8abb-fc79e7460d77</uuid>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <name>instance-0000000d</name>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1102694279</nova:name>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:40</nova:creationTime>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:user uuid="976b51186aac40648d84f68dc2241b25">tempest-ListImageFiltersTestJSON-2055883743-project-member</nova:user>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <nova:project uuid="cecc6eaab0b74c3786e0cdf9452f1b2c">tempest-ListImageFiltersTestJSON-2055883743</nova:project>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <entry name="serial">318c32b0-9990-4579-8abb-fc79e7460d77</entry>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <entry name="uuid">318c32b0-9990-4579-8abb-fc79e7460d77</entry>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk.config">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/console.log" append="off"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:41 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:41 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:41 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:41 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.534 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.536 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.536 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Creating image(s)#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.563 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.587 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.612 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.616 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:41 np0005481065 ovn_controller[152945]: 2025-10-11T08:46:41Z|00043|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.669 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.670 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.671 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Using config drive#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.703 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.711 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.711 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.712 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.713 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.740 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.745 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fa95251f-1ce7-4a45-8f53-ae932716a172_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.6 KiB/s wr, 96 op/s
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.898 2 DEBUG nova.network.neutron [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:46:41 np0005481065 nova_compute[260935]: 2025-10-11 08:46:41.899 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.008 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172386.9637132, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.009 2 INFO nova.compute.manager [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.030 2 DEBUG nova.compute.manager [None req-947b0616-6cd0-4172-8ba6-65959417ebf6 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.069 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Creating config drive at /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.078 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxqetqe05 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.110 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fa95251f-1ce7-4a45-8f53-ae932716a172_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.194 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] resizing rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.235 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxqetqe05" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.286 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.295 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.394 2 DEBUG nova.objects.instance [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'migration_context' on Instance uuid fa95251f-1ce7-4a45-8f53-ae932716a172 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.410 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.411 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Ensure instance console log exists: /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.412 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.412 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.413 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.417 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.424 2 WARNING nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.430 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.431 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.434 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.435 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.436 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.436 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.437 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.437 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.437 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.438 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.438 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.438 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.439 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.439 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.439 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.440 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.445 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.510 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.511 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deleting local config drive /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config because it was imported into RBD.#033[00m
Oct 11 04:46:42 np0005481065 systemd-machined[215705]: New machine qemu-13-instance-0000000d.
Oct 11 04:46:42 np0005481065 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 11 04:46:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9991398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.934 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.958 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:42 np0005481065 nova_compute[260935]: 2025-10-11 08:46:42.961 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:46:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069511649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.477 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "ff13b723-e064-4bed-93dc-b6bf28af0857" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.477 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.491 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.492 2 DEBUG nova.objects.instance [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'pci_devices' on Instance uuid fa95251f-1ce7-4a45-8f53-ae932716a172 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.497 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.508 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <uuid>fa95251f-1ce7-4a45-8f53-ae932716a172</uuid>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <name>instance-0000000e</name>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:name>tempest-ListImageFiltersTestJSON-server-868078342</nova:name>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:42</nova:creationTime>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:user uuid="976b51186aac40648d84f68dc2241b25">tempest-ListImageFiltersTestJSON-2055883743-project-member</nova:user>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <nova:project uuid="cecc6eaab0b74c3786e0cdf9452f1b2c">tempest-ListImageFiltersTestJSON-2055883743</nova:project>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <entry name="serial">fa95251f-1ce7-4a45-8f53-ae932716a172</entry>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <entry name="uuid">fa95251f-1ce7-4a45-8f53-ae932716a172</entry>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/console.log" append="off"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:43 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:43 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:43 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:43 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
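The domain XML dumped by `_get_guest_xml` above is plain libvirt XML and is easy to post-process when triaging logs like this. A minimal sketch, using a trimmed stand-in for the logged document (not the full domain definition), pulls out the fields most often needed when correlating an instance with its Ceph RBD volumes:

```python
# Minimal sketch: extract a few fields from a libvirt domain XML like
# the one Nova logged above. The domain_xml string is a trimmed
# stand-in, not the complete logged document.
import xml.etree.ElementTree as ET

domain_xml = """<domain type="kvm">
  <uuid>fa95251f-1ce7-4a45-8f53-ae932716a172</uuid>
  <name>instance-0000000e</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(domain_xml)
summary = {
    "name": root.findtext("name"),           # libvirt domain name
    "memory_kib": int(root.findtext("memory")),  # <memory> is in KiB
    "vcpus": int(root.findtext("vcpu")),
    # Every <source protocol="rbd"> names a pool/image backing a disk.
    "rbd_volumes": [s.get("name") for s in root.iter("source")
                    if s.get("protocol") == "rbd"],
}
print(summary)
```

For the logged guest this yields `instance-0000000e` with 131072 KiB (the 128 MiB m1.nano flavor) backed by `vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk`; on the full document the same loop would also pick up the `_disk.config` cdrom source.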
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.558 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.559 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.565 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.565 2 INFO nova.compute.claims [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.572 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.572 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.573 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Using config drive#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.598 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.733 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.765 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172403.7623792, 318c32b0-9990-4579-8abb-fc79e7460d77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.766 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.768 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.768 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.772 2 INFO nova.virt.libvirt.driver [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance spawned successfully.#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.772 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.779 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Creating config drive at /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.784 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8deq_hn3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.810 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.816 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.818 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.818 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.819 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.819 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.819 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.820 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.842 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.842 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172403.763082, 318c32b0-9990-4579-8abb-fc79e7460d77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.843 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] VM Started (Lifecycle Event)#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.859 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.863 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.871 2 INFO nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 4.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.871 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.881 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.919 2 INFO nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 5.22 seconds to build instance.#033[00m
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.924 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8deq_hn3" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.948 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.951 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:43 np0005481065 nova_compute[260935]: 2025-10-11 08:46:43.978 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.134 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.135 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deleting local config drive /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config because it was imported into RBD.
Oct 11 04:46:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867670951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:44 np0005481065 systemd-machined[215705]: New machine qemu-14-instance-0000000e.
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.212 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:44 np0005481065 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.221 2 DEBUG nova.compute.provider_tree [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.236 2 DEBUG nova.scheduler.client.report [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.254 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.254 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.291 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.292 2 DEBUG nova.network.neutron [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.312 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.330 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.398 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.400 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.400 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Creating image(s)
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.431 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.465 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.497 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.502 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.580 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.582 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.582 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.583 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.607 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.612 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ff13b723-e064-4bed-93dc-b6bf28af0857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.690 2 DEBUG nova.network.neutron [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.691 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:46:44 np0005481065 nova_compute[260935]: 2025-10-11 08:46:44.925 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ff13b723-e064-4bed-93dc-b6bf28af0857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.025 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] resizing rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.159 2 DEBUG nova.objects.instance [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lazy-loading 'migration_context' on Instance uuid ff13b723-e064-4bed-93dc-b6bf28af0857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.174 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.175 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Ensure instance console log exists: /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.177 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.179 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.179 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.182 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.191 2 WARNING nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.200 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.201 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.204 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.205 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.206 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.206 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.207 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.208 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.208 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.209 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.209 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.210 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.210 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.211 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.211 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.212 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.215 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.379 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172405.3780336, fa95251f-1ce7-4a45-8f53-ae932716a172 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.380 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] VM Resumed (Lifecycle Event)
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.384 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.384 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.389 2 INFO nova.virt.libvirt.driver [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance spawned successfully.
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.390 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.409 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.416 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.425 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.426 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.427 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.428 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.428 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.429 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.441 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.441 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172405.3833487, fa95251f-1ce7-4a45-8f53-ae932716a172 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.442 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] VM Started (Lifecycle Event)
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.475 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.480 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.501 2 INFO nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 3.97 seconds to spawn the instance on the hypervisor.
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.501 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.503 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.564 2 INFO nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 5.01 seconds to build instance.
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.580 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085323306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.731 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.760 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:45 np0005481065 nova_compute[260935]: 2025-10-11 08:46:45.766 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:46:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547736486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.305 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.308 2 DEBUG nova.objects.instance [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lazy-loading 'pci_devices' on Instance uuid ff13b723-e064-4bed-93dc-b6bf28af0857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.323 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <uuid>ff13b723-e064-4bed-93dc-b6bf28af0857</uuid>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <name>instance-0000000f</name>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:name>tempest-TenantUsagesTestJSON-server-1292884304</nova:name>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:46:45</nova:creationTime>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:user uuid="dc0a1b420e6f47e9a9f604fe0dd240fd">tempest-TenantUsagesTestJSON-545714014-project-member</nova:user>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <nova:project uuid="666832175a6846b29256147e566ba5be">tempest-TenantUsagesTestJSON-545714014</nova:project>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <entry name="serial">ff13b723-e064-4bed-93dc-b6bf28af0857</entry>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <entry name="uuid">ff13b723-e064-4bed-93dc-b6bf28af0857</entry>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ff13b723-e064-4bed-93dc-b6bf28af0857_disk">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/console.log" append="off"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:46:46 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:46:46 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:46:46 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:46:46 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.380 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.380 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.387 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Using config drive
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.420 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.687 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Creating config drive at /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.695 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2yarjj_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.837 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2yarjj_" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.871 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:46:46 np0005481065 nova_compute[260935]: 2025-10-11 08:46:46.875 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.053 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.056 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deleting local config drive /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config because it was imported into RBD.
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.076 2 DEBUG nova.compute.manager [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:47 np0005481065 systemd-machined[215705]: New machine qemu-15-instance-0000000f.
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.138 2 INFO nova.compute.manager [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] instance snapshotting
Oct 11 04:46:47 np0005481065 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.498 2 INFO nova.virt.libvirt.driver [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Beginning live snapshot process
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.635 2 DEBUG nova.virt.libvirt.imagebackend [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.660 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172392.6593008, e1d5bdab-1090-4b36-bff3-f11d0bc1d591 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.660 2 INFO nova.compute.manager [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] VM Stopped (Lifecycle Event)
Oct 11 04:46:47 np0005481065 nova_compute[260935]: 2025-10-11 08:46:47.680 2 DEBUG nova.compute.manager [None req-63966129-3d6f-4943-a1f2-490a37f063d9 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.5 MiB/s wr, 314 op/s
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.083 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(1b8a73cd32884872b8b701faca07aacb) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:46:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.197 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172408.1862733, ff13b723-e064-4bed-93dc-b6bf28af0857 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.198 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] VM Resumed (Lifecycle Event)
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.200 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.202 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.213 2 INFO nova.virt.libvirt.driver [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance spawned successfully.
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.214 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.235 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.239 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.250 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.251 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.252 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.252 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.253 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.254 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.288 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.288 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172408.1909053, ff13b723-e064-4bed-93dc-b6bf28af0857 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.289 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] VM Started (Lifecycle Event)
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.334 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.339 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.356 2 INFO nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 3.96 seconds to spawn the instance on the hypervisor.
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.356 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.369 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.430 2 INFO nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 4.89 seconds to build instance.
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.455 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.706 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172393.7047517, 97254317-c848-4369-b0b7-147d230fb1d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.708 2 INFO nova.compute.manager [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:46:48 np0005481065 nova_compute[260935]: 2025-10-11 08:46:48.740 2 DEBUG nova.compute.manager [None req-5e1f1b3c-c888-414f-aaad-3319fdc3ae74 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:48 np0005481065 podman[284864]: 2025-10-11 08:46:48.781284075 +0000 UTC m=+0.080743659 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:46:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 11 04:46:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 11 04:46:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.176 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] cloning vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk@1b8a73cd32884872b8b701faca07aacb to images/c42c8a26-c9fa-4474-9b78-d97f24dd45ce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.313 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] flattening images/c42c8a26-c9fa-4474-9b78-d97f24dd45ce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.579 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "ff13b723-e064-4bed-93dc-b6bf28af0857" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.580 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.581 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "ff13b723-e064-4bed-93dc-b6bf28af0857-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.581 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.583 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.586 2 INFO nova.compute.manager [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Terminating instance#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.587 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "refresh_cache-ff13b723-e064-4bed-93dc-b6bf28af0857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.588 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquired lock "refresh_cache-ff13b723-e064-4bed-93dc-b6bf28af0857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.588 2 DEBUG nova.network.neutron [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:46:49 np0005481065 nova_compute[260935]: 2025-10-11 08:46:49.592 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] removing snapshot(1b8a73cd32884872b8b701faca07aacb) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:46:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 286 op/s
Oct 11 04:46:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 11 04:46:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 11 04:46:50 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.171 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(snap) on rbd image(c42c8a26-c9fa-4474-9b78-d97f24dd45ce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.273 2 DEBUG nova.network.neutron [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.745 2 DEBUG nova.network.neutron [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.763 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Releasing lock "refresh_cache-ff13b723-e064-4bed-93dc-b6bf28af0857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.764 2 DEBUG nova.compute.manager [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:46:50 np0005481065 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 11 04:46:50 np0005481065 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.542s CPU time.
Oct 11 04:46:50 np0005481065 systemd-machined[215705]: Machine qemu-15-instance-0000000f terminated.
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.991 2 INFO nova.virt.libvirt.driver [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance destroyed successfully.#033[00m
Oct 11 04:46:50 np0005481065 nova_compute[260935]: 2025-10-11 08:46:50.992 2 DEBUG nova.objects.instance [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lazy-loading 'resources' on Instance uuid ff13b723-e064-4bed-93dc-b6bf28af0857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:46:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 11 04:46:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 11 04:46:51 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.457 2 INFO nova.virt.libvirt.driver [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deleting instance files /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857_del#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.458 2 INFO nova.virt.libvirt.driver [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deletion of /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857_del complete#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.530 2 INFO nova.compute.manager [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.531 2 DEBUG oslo.service.loopingcall [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.532 2 DEBUG nova.compute.manager [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.532 2 DEBUG nova.network.neutron [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:46:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 369 op/s
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.772 2 DEBUG nova.network.neutron [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.789 2 DEBUG nova.network.neutron [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.801 2 INFO nova.compute.manager [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 0.27 seconds to deallocate network for instance.#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.848 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.849 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:46:51 np0005481065 nova_compute[260935]: 2025-10-11 08:46:51.951 2 DEBUG oslo_concurrency.processutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:46:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:46:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498220074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.456 2 DEBUG oslo_concurrency.processutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.464 2 DEBUG nova.compute.provider_tree [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.480 2 DEBUG nova.scheduler.client.report [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.509 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.538 2 INFO nova.scheduler.client.report [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Deleted allocations for instance ff13b723-e064-4bed-93dc-b6bf28af0857#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.617 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.975 2 INFO nova.virt.libvirt.driver [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Snapshot image upload complete#033[00m
Oct 11 04:46:52 np0005481065 nova_compute[260935]: 2025-10-11 08:46:52.976 2 INFO nova.compute.manager [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 5.84 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 04:46:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:46:53 np0005481065 nova_compute[260935]: 2025-10-11 08:46:53.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:53 np0005481065 nova_compute[260935]: 2025-10-11 08:46:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:53 np0005481065 nova_compute[260935]: 2025-10-11 08:46:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:53 np0005481065 nova_compute[260935]: 2025-10-11 08:46:53.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:46:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 3.6 MiB/s wr, 310 op/s
Oct 11 04:46:54 np0005481065 nova_compute[260935]: 2025-10-11 08:46:54.384 2 DEBUG nova.compute.manager [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:46:54 np0005481065 nova_compute[260935]: 2025-10-11 08:46:54.435 2 INFO nova.compute.manager [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] instance snapshotting#033[00m
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:46:54
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes']
Oct 11 04:46:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:46:54 np0005481065 nova_compute[260935]: 2025-10-11 08:46:54.858 2 INFO nova.virt.libvirt.driver [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Beginning live snapshot process#033[00m
Oct 11 04:46:55 np0005481065 nova_compute[260935]: 2025-10-11 08:46:55.036 2 DEBUG nova.virt.libvirt.imagebackend [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:46:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:46:55.254 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:46:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:46:55.255 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:46:55 np0005481065 nova_compute[260935]: 2025-10-11 08:46:55.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:55 np0005481065 nova_compute[260935]: 2025-10-11 08:46:55.625 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(297126c640e945f882404423e085dc3c) on rbd image(fa95251f-1ce7-4a45-8f53-ae932716a172_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:46:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 280 op/s
Oct 11 04:46:55 np0005481065 podman[285067]: 2025-10-11 08:46:55.788162994 +0000 UTC m=+0.086205375 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:46:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 11 04:46:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 11 04:46:56 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 11 04:46:56 np0005481065 nova_compute[260935]: 2025-10-11 08:46:56.263 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] cloning vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk@297126c640e945f882404423e085dc3c to images/5259a9a2-9e09-4a6d-9c26-e2e19bf2a857 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:46:56 np0005481065 nova_compute[260935]: 2025-10-11 08:46:56.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:56 np0005481065 nova_compute[260935]: 2025-10-11 08:46:56.493 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] flattening images/5259a9a2-9e09-4a6d-9c26-e2e19bf2a857 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:46:56 np0005481065 nova_compute[260935]: 2025-10-11 08:46:56.756 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] removing snapshot(297126c640e945f882404423e085dc3c) on rbd image(fa95251f-1ce7-4a45-8f53-ae932716a172_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.203358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417203415, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 251, "total_data_size": 1852045, "memory_usage": 1879632, "flush_reason": "Manual Compaction"}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417216655, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1830786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24291, "largest_seqno": 25665, "table_properties": {"data_size": 1824406, "index_size": 3519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14264, "raw_average_key_size": 20, "raw_value_size": 1811282, "raw_average_value_size": 2572, "num_data_blocks": 156, "num_entries": 704, "num_filter_entries": 704, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172303, "oldest_key_time": 1760172303, "file_creation_time": 1760172417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13347 microseconds, and 8785 cpu microseconds.
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.216707) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1830786 bytes OK
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.216737) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.219607) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.219631) EVENT_LOG_v1 {"time_micros": 1760172417219624, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.219654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1845865, prev total WAL file size 1845865, number of live WAL files 2.
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.220730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1787KB)], [56(6894KB)]
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417220795, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 8890882, "oldest_snapshot_seqno": -1}
Oct 11 04:46:57 np0005481065 nova_compute[260935]: 2025-10-11 08:46:57.239 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(snap) on rbd image(5259a9a2-9e09-4a6d-9c26-e2e19bf2a857) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4702 keys, 7172618 bytes, temperature: kUnknown
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417267912, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7172618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7141354, "index_size": 18418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117850, "raw_average_key_size": 25, "raw_value_size": 7056527, "raw_average_value_size": 1500, "num_data_blocks": 760, "num_entries": 4702, "num_filter_entries": 4702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.268313) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7172618 bytes
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.269944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.1 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(8.8) write-amplify(3.9) OK, records in: 5220, records dropped: 518 output_compression: NoCompression
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.269963) EVENT_LOG_v1 {"time_micros": 1760172417269954, "job": 30, "event": "compaction_finished", "compaction_time_micros": 47274, "compaction_time_cpu_micros": 31941, "output_level": 6, "num_output_files": 1, "total_output_size": 7172618, "num_input_records": 5220, "num_output_records": 4702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417270868, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417272341, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.220618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:46:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:46:57 np0005481065 nova_compute[260935]: 2025-10-11 08:46:57.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 602 op/s
Oct 11 04:46:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:46:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 11 04:46:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 11 04:46:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 11 04:46:58 np0005481065 nova_compute[260935]: 2025-10-11 08:46:58.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:58 np0005481065 nova_compute[260935]: 2025-10-11 08:46:58.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:46:58 np0005481065 nova_compute[260935]: 2025-10-11 08:46:58.730 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:46:59.258 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:46:59 np0005481065 nova_compute[260935]: 2025-10-11 08:46:59.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:46:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 354 op/s
Oct 11 04:47:00 np0005481065 nova_compute[260935]: 2025-10-11 08:47:00.467 2 INFO nova.virt.libvirt.driver [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Snapshot image upload complete#033[00m
Oct 11 04:47:00 np0005481065 nova_compute[260935]: 2025-10-11 08:47:00.468 2 INFO nova.compute.manager [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 6.03 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 04:47:00 np0005481065 nova_compute[260935]: 2025-10-11 08:47:00.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:47:01 np0005481065 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 354 op/s
Oct 11 04:47:01 np0005481065 podman[285178]: 2025-10-11 08:47:01.784710744 +0000 UTC m=+0.080926274 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:47:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3076908363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.196 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.286 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.287 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.294 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.295 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:47:02 np0005481065 podman[285220]: 2025-10-11 08:47:02.388223715 +0000 UTC m=+0.130477921 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.531 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.534 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4275MB free_disk=59.89763259887695GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.670 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 318c32b0-9990-4579-8abb-fc79e7460d77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.671 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance fa95251f-1ce7-4a45-8f53-ae932716a172 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.672 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.672 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:47:02 np0005481065 nova_compute[260935]: 2025-10-11 08:47:02.763 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074731268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.308 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.315 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.333 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.364 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:47:03 np0005481065 nova_compute[260935]: 2025-10-11 08:47:03.722 2 DEBUG nova.compute.manager [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:47:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 292 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 9.5 MiB/s wr, 322 op/s
Oct 11 04:47:04 np0005481065 nova_compute[260935]: 2025-10-11 08:47:04.075 2 INFO nova.compute.manager [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] instance snapshotting
Oct 11 04:47:04 np0005481065 nova_compute[260935]: 2025-10-11 08:47:04.502 2 INFO nova.virt.libvirt.driver [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Beginning live snapshot process
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015118690784332877 of space, bias 1.0, pg target 0.45356072352998633 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013588698354170736 of space, bias 1.0, pg target 0.4076609506251221 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:47:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:47:04 np0005481065 nova_compute[260935]: 2025-10-11 08:47:04.707 2 DEBUG nova.virt.libvirt.imagebackend [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.121 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(f4964df6d3c64d3c8f0e8d150fc90269) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:47:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 11 04:47:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 11 04:47:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.345 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] cloning vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk@f4964df6d3c64d3c8f0e8d150fc90269 to images/283e0fc7-42ef-4bee-8e6e-575223f25fca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.390 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.390 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.391 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.413 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.414 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.414 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.415 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.480 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] flattening images/283e0fc7-42ef-4bee-8e6e-575223f25fca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 04:47:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 292 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 36 KiB/s wr, 39 op/s
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.820 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] removing snapshot(f4964df6d3c64d3c8f0e8d150fc90269) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.871 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.988 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172410.9875758, ff13b723-e064-4bed-93dc-b6bf28af0857 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:47:05 np0005481065 nova_compute[260935]: 2025-10-11 08:47:05.989 2 INFO nova.compute.manager [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] VM Stopped (Lifecycle Event)
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.013 2 DEBUG nova.compute.manager [None req-fde91909-6414-4b0f-af96-bb7a1ebe4752 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:47:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 11 04:47:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 11 04:47:06 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.322 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(snap) on rbd image(283e0fc7-42ef-4bee-8e6e-575223f25fca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.373 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.395 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.395 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.396 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:47:06 np0005481065 nova_compute[260935]: 2025-10-11 08:47:06.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:47:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 11 04:47:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 11 04:47:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 11 04:47:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 201 op/s
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 11 04:47:08 np0005481065 nova_compute[260935]: 2025-10-11 08:47:08.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:47:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ecd54621-2aea-4fb1-b1c6-941c3b9968cc does not exist
Oct 11 04:47:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 80ea7eff-bb5d-4667-a97d-9421107de325 does not exist
Oct 11 04:47:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev df457feb-b379-4d92-b86e-181631f66d9a does not exist
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:47:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:47:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:47:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:47:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.567195421 +0000 UTC m=+0.060134640 container create 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:47:09 np0005481065 nova_compute[260935]: 2025-10-11 08:47:09.595 2 INFO nova.virt.libvirt.driver [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Snapshot image upload complete
Oct 11 04:47:09 np0005481065 nova_compute[260935]: 2025-10-11 08:47:09.597 2 INFO nova.compute.manager [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 5.52 seconds to snapshot the instance on the hypervisor.
Oct 11 04:47:09 np0005481065 systemd[1]: Started libpod-conmon-0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52.scope.
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.534356422 +0000 UTC m=+0.027295651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.681159298 +0000 UTC m=+0.174098547 container init 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.690453544 +0000 UTC m=+0.183392753 container start 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.695884289 +0000 UTC m=+0.188823558 container attach 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:47:09 np0005481065 stoic_hawking[285698]: 167 167
Oct 11 04:47:09 np0005481065 systemd[1]: libpod-0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52.scope: Deactivated successfully.
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.703337202 +0000 UTC m=+0.196276451 container died 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:47:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8f9c89dac7b28d928fe193c58664338257f1c055d69fc7bfdc635e96d320864a-merged.mount: Deactivated successfully.
Oct 11 04:47:09 np0005481065 podman[285682]: 2025-10-11 08:47:09.761034102 +0000 UTC m=+0.253973321 container remove 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:47:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 197 op/s
Oct 11 04:47:09 np0005481065 systemd[1]: libpod-conmon-0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52.scope: Deactivated successfully.
Oct 11 04:47:10 np0005481065 podman[285723]: 2025-10-11 08:47:10.024273906 +0000 UTC m=+0.077190457 container create 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:47:10 np0005481065 podman[285723]: 2025-10-11 08:47:09.992501858 +0000 UTC m=+0.045418449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:10 np0005481065 systemd[1]: Started libpod-conmon-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope.
Oct 11 04:47:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:10 np0005481065 podman[285723]: 2025-10-11 08:47:10.146922672 +0000 UTC m=+0.199839193 container init 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:10 np0005481065 podman[285723]: 2025-10-11 08:47:10.162550689 +0000 UTC m=+0.215467230 container start 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:47:10 np0005481065 podman[285723]: 2025-10-11 08:47:10.167727327 +0000 UTC m=+0.220643948 container attach 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:47:11 np0005481065 mystifying_blackburn[285739]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:47:11 np0005481065 mystifying_blackburn[285739]: --> relative data size: 1.0
Oct 11 04:47:11 np0005481065 mystifying_blackburn[285739]: --> All data devices are unavailable
Oct 11 04:47:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 11 04:47:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 11 04:47:11 np0005481065 systemd[1]: libpod-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope: Deactivated successfully.
Oct 11 04:47:11 np0005481065 podman[285723]: 2025-10-11 08:47:11.338491393 +0000 UTC m=+1.391407994 container died 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 04:47:11 np0005481065 systemd[1]: libpod-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope: Consumed 1.120s CPU time.
Oct 11 04:47:11 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 11 04:47:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe-merged.mount: Deactivated successfully.
Oct 11 04:47:11 np0005481065 podman[285723]: 2025-10-11 08:47:11.400771383 +0000 UTC m=+1.453687894 container remove 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:47:11 np0005481065 systemd[1]: libpod-conmon-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope: Deactivated successfully.
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.664 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.664 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.681 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:47:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.5 MiB/s wr, 162 op/s
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.774 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.774 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.790 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.790 2 INFO nova.compute.claims [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:47:11 np0005481065 nova_compute[260935]: 2025-10-11 08:47:11.948 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.367462766 +0000 UTC m=+0.067873261 container create 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3736167084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.412 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:12 np0005481065 systemd[1]: Started libpod-conmon-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope.
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.424 2 DEBUG nova.compute.provider_tree [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.34659517 +0000 UTC m=+0.047005715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.442 2 DEBUG nova.scheduler.client.report [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.468961498 +0000 UTC m=+0.169372053 container init 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.470 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.471 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.481415814 +0000 UTC m=+0.181826279 container start 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.485291084 +0000 UTC m=+0.185701579 container attach 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 04:47:12 np0005481065 condescending_allen[285956]: 167 167
Oct 11 04:47:12 np0005481065 systemd[1]: libpod-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope: Deactivated successfully.
Oct 11 04:47:12 np0005481065 conmon[285956]: conmon 30d2e33c126fa23094c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope/container/memory.events
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.490991737 +0000 UTC m=+0.191402262 container died 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:47:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fc2676bcd6398f50201c51d5e6f9c0372337a25a32ee5aa54f5bd2f0c40ea93a-merged.mount: Deactivated successfully.
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.529 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.531 2 DEBUG nova.network.neutron [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:47:12 np0005481065 podman[285938]: 2025-10-11 08:47:12.544514227 +0000 UTC m=+0.244924712 container remove 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.559 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:47:12 np0005481065 systemd[1]: libpod-conmon-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope: Deactivated successfully.
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.580 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.687 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.690 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.690 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Creating image(s)#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.722 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.760 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:12 np0005481065 podman[285978]: 2025-10-11 08:47:12.772549955 +0000 UTC m=+0.069354114 container create 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.802 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.808 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:12 np0005481065 podman[285978]: 2025-10-11 08:47:12.743971478 +0000 UTC m=+0.040775677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:12 np0005481065 systemd[1]: Started libpod-conmon-8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da.scope.
Oct 11 04:47:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.902 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.903 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.904 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.905 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:12 np0005481065 podman[285978]: 2025-10-11 08:47:12.91653165 +0000 UTC m=+0.213335819 container init 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:12 np0005481065 podman[285978]: 2025-10-11 08:47:12.929337866 +0000 UTC m=+0.226141985 container start 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:12 np0005481065 podman[285978]: 2025-10-11 08:47:12.933930528 +0000 UTC m=+0.230734687 container attach 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.938 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:12 np0005481065 nova_compute[260935]: 2025-10-11 08:47:12.944 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.235 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.308 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] resizing rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.410 2 DEBUG nova.objects.instance [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 682cad4b-4bab-4a36-9bd9-11e9de7b213a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.427 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.427 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Ensure instance console log exists: /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.427 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.428 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.428 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]: {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:    "0": [
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:        {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "devices": [
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "/dev/loop3"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            ],
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_name": "ceph_lv0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_size": "21470642176",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "name": "ceph_lv0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "tags": {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cluster_name": "ceph",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.crush_device_class": "",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.encrypted": "0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osd_id": "0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.type": "block",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.vdo": "0"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            },
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "type": "block",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "vg_name": "ceph_vg0"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:        }
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:    ],
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:    "1": [
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:        {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "devices": [
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "/dev/loop4"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            ],
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_name": "ceph_lv1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_size": "21470642176",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "name": "ceph_lv1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "tags": {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cluster_name": "ceph",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.crush_device_class": "",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.encrypted": "0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osd_id": "1",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.type": "block",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.vdo": "0"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            },
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "type": "block",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "vg_name": "ceph_vg1"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:        }
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:    ],
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:    "2": [
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:        {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "devices": [
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "/dev/loop5"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            ],
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_name": "ceph_lv2",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_size": "21470642176",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "name": "ceph_lv2",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "tags": {
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.cluster_name": "ceph",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.crush_device_class": "",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.encrypted": "0",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osd_id": "2",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.type": "block",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:                "ceph.vdo": "0"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            },
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "type": "block",
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:            "vg_name": "ceph_vg2"
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:        }
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]:    ]
Oct 11 04:47:13 np0005481065 intelligent_ganguly[286049]: }
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:13 np0005481065 systemd[1]: libpod-8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da.scope: Deactivated successfully.
Oct 11 04:47:13 np0005481065 podman[285978]: 2025-10-11 08:47:13.761693859 +0000 UTC m=+1.058498018 container died 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 04:47:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9-merged.mount: Deactivated successfully.
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.816 2 DEBUG nova.network.neutron [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.816 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.819 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.827 2 WARNING nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.843 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.844 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.849 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.850 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.851 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.851 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.852 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.852 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.852 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.853 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.853 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:47:13 np0005481065 podman[285978]: 2025-10-11 08:47:13.853324489 +0000 UTC m=+1.150128648 container remove 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.853 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.854 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.854 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.854 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.855 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:47:13 np0005481065 nova_compute[260935]: 2025-10-11 08:47:13.860 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:13 np0005481065 systemd[1]: libpod-conmon-8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da.scope: Deactivated successfully.
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.176 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.177 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.198 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.274 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.274 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.288 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.288 2 INFO nova.compute.claims [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:47:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051348334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.358 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.396 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.402 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.727 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.764795373 +0000 UTC m=+0.062042305 container create 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:14 np0005481065 systemd[1]: Started libpod-conmon-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope.
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.740743335 +0000 UTC m=+0.037990297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.856633648 +0000 UTC m=+0.153880620 container init 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.864697659 +0000 UTC m=+0.161944591 container start 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.869022632 +0000 UTC m=+0.166269654 container attach 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 04:47:14 np0005481065 serene_wu[286400]: 167 167
Oct 11 04:47:14 np0005481065 systemd[1]: libpod-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope: Deactivated successfully.
Oct 11 04:47:14 np0005481065 conmon[286400]: conmon 1caec4f6414b20d4069b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope/container/memory.events
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.872743509 +0000 UTC m=+0.169990451 container died 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:47:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600486936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fd7e13e7549a36dbbf72b6432dcdc0110daa6ab9cd1a34d7e9de5daa715e2a86-merged.mount: Deactivated successfully.
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.907 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.909 2 DEBUG nova.objects.instance [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 682cad4b-4bab-4a36-9bd9-11e9de7b213a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:14 np0005481065 podman[286384]: 2025-10-11 08:47:14.925692922 +0000 UTC m=+0.222939894 container remove 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.926 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <uuid>682cad4b-4bab-4a36-9bd9-11e9de7b213a</uuid>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <name>instance-00000010</name>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1673338711</nova:name>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:47:13</nova:creationTime>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:user uuid="a00ec4c98ad14bd6ab5a4e1f11e60389">tempest-ServerDiagnosticsNegativeTest-745659558-project-member</nova:user>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <nova:project uuid="7966b3923d1340dca2b22eab0ca26cb3">tempest-ServerDiagnosticsNegativeTest-745659558</nova:project>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <entry name="serial">682cad4b-4bab-4a36-9bd9-11e9de7b213a</entry>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <entry name="uuid">682cad4b-4bab-4a36-9bd9-11e9de7b213a</entry>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/console.log" append="off"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:47:14 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:47:14 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:47:14 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:47:14 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:47:14 np0005481065 systemd[1]: libpod-conmon-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope: Deactivated successfully.
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.982 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.982 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:14 np0005481065 nova_compute[260935]: 2025-10-11 08:47:14.982 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Using config drive#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.007 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:15.179 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:15.180 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:15.180 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:15 np0005481065 podman[286464]: 2025-10-11 08:47:15.191219102 +0000 UTC m=+0.076757345 container create 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:47:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334021597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:15 np0005481065 podman[286464]: 2025-10-11 08:47:15.157107937 +0000 UTC m=+0.042646270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.249 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:15 np0005481065 systemd[1]: Started libpod-conmon-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope.
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.258 2 DEBUG nova.compute.provider_tree [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.274 2 DEBUG nova.scheduler.client.report [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.299 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.301 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:47:15 np0005481065 podman[286464]: 2025-10-11 08:47:15.309239266 +0000 UTC m=+0.194777609 container init 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:47:15 np0005481065 podman[286464]: 2025-10-11 08:47:15.319588172 +0000 UTC m=+0.205126425 container start 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 04:47:15 np0005481065 podman[286464]: 2025-10-11 08:47:15.323585916 +0000 UTC m=+0.209124269 container attach 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.347 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Creating config drive at /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.356 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcl4e47jb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 11 04:47:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 11 04:47:15 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.407 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.408 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.495 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.510 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcl4e47jb" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.555 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.561 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.602 2 DEBUG nova.policy [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.651 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.752 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:15 np0005481065 nova_compute[260935]: 2025-10-11 08:47:15.754 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deleting local config drive /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config because it was imported into RBD.#033[00m
Oct 11 04:47:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Oct 11 04:47:15 np0005481065 systemd-machined[215705]: New machine qemu-16-instance-00000010.
Oct 11 04:47:15 np0005481065 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.218 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.221 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.221 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Creating image(s)#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.253 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.285 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.320 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.324 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.388 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.390 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.391 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.392 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.423 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.427 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]: {
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "osd_id": 2,
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "type": "bluestore"
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:    },
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "osd_id": 0,
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "type": "bluestore"
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:    },
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "osd_id": 1,
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:        "type": "bluestore"
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]:    }
Oct 11 04:47:16 np0005481065 eloquent_einstein[286483]: }
Oct 11 04:47:16 np0005481065 systemd[1]: libpod-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope: Deactivated successfully.
Oct 11 04:47:16 np0005481065 systemd[1]: libpod-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope: Consumed 1.121s CPU time.
Oct 11 04:47:16 np0005481065 podman[286464]: 2025-10-11 08:47:16.496623686 +0000 UTC m=+1.382161939 container died 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:47:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b-merged.mount: Deactivated successfully.
Oct 11 04:47:16 np0005481065 podman[286464]: 2025-10-11 08:47:16.558241637 +0000 UTC m=+1.443779880 container remove 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:16 np0005481065 systemd[1]: libpod-conmon-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope: Deactivated successfully.
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:47:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:47:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:47:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:47:16 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 07d7a294-12ba-48a1-94cf-a509c6508ecc does not exist
Oct 11 04:47:16 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 167103fd-fde4-4b34-a96d-563930e826b4 does not exist
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.748 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.779 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172436.7539876, 682cad4b-4bab-4a36-9bd9-11e9de7b213a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.779 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.783 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully created port: 713e6030-0d3f-41ae-9f66-c4591e2498e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.787 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.787 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.819 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.846 2 INFO nova.virt.libvirt.driver [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance spawned successfully.#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.847 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.917 2 DEBUG nova.objects.instance [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.941 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.945 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.956 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.956 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Ensure instance console log exists: /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.957 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.957 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.958 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.960 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.961 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.961 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.962 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.962 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:16 np0005481065 nova_compute[260935]: 2025-10-11 08:47:16.963 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.011 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.012 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172436.7540917, 682cad4b-4bab-4a36-9bd9-11e9de7b213a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.217 2 INFO nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 4.53 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.217 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.230 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.234 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.342 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:47:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:47:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.449 2 INFO nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 5.70 seconds to build instance.#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.645 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.958 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully updated port: 713e6030-0d3f-41ae-9f66-c4591e2498e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.989 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.990 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:17 np0005481065 nova_compute[260935]: 2025-10-11 08:47:17.990 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:18 np0005481065 nova_compute[260935]: 2025-10-11 08:47:18.061 2 DEBUG nova.compute.manager [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:18 np0005481065 nova_compute[260935]: 2025-10-11 08:47:18.061 2 DEBUG nova.compute.manager [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing instance network info cache due to event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:47:18 np0005481065 nova_compute[260935]: 2025-10-11 08:47:18.061 2 DEBUG oslo_concurrency.lockutils [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 11 04:47:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 11 04:47:18 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 11 04:47:18 np0005481065 nova_compute[260935]: 2025-10-11 08:47:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:18 np0005481065 nova_compute[260935]: 2025-10-11 08:47:18.847 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct 11 04:47:19 np0005481065 podman[286841]: 2025-10-11 08:47:19.818277674 +0000 UTC m=+0.112221617 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.002 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.002 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.003 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.003 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.004 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.005 2 INFO nova.compute.manager [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Terminating instance#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.007 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "refresh_cache-682cad4b-4bab-4a36-9bd9-11e9de7b213a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.007 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquired lock "refresh_cache-682cad4b-4bab-4a36-9bd9-11e9de7b213a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.008 2 DEBUG nova.network.neutron [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 11 04:47:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 11 04:47:20 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.323 2 DEBUG nova.network.neutron [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.659 2 DEBUG nova.network.neutron [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.683 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Releasing lock "refresh_cache-682cad4b-4bab-4a36-9bd9-11e9de7b213a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.684 2 DEBUG nova.compute.manager [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.691 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.713 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.714 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance network_info: |[{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.714 2 DEBUG oslo_concurrency.lockutils [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.715 2 DEBUG nova.network.neutron [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.720 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start _get_guest_xml network_info=[{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.725 2 WARNING nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.733 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.734 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.745 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.746 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.747 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.747 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.748 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.749 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.749 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.750 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.750 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.751 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.751 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.752 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.752 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.753 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.758 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:20 np0005481065 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 11 04:47:20 np0005481065 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 4.815s CPU time.
Oct 11 04:47:20 np0005481065 systemd-machined[215705]: Machine qemu-16-instance-00000010 terminated.
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.913 2 INFO nova.virt.libvirt.driver [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance destroyed successfully.#033[00m
Oct 11 04:47:20 np0005481065 nova_compute[260935]: 2025-10-11 08:47:20.914 2 DEBUG nova.objects.instance [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lazy-loading 'resources' on Instance uuid 682cad4b-4bab-4a36-9bd9-11e9de7b213a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397164084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.274 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.305 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.310 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.430 2 INFO nova.virt.libvirt.driver [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deleting instance files /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a_del#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.432 2 INFO nova.virt.libvirt.driver [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deletion of /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a_del complete#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.516 2 INFO nova.compute.manager [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.517 2 DEBUG oslo.service.loopingcall [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.517 2 DEBUG nova.compute.manager [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.518 2 DEBUG nova.network.neutron [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.684 2 DEBUG nova.network.neutron [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.715 2 DEBUG nova.network.neutron [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.745 2 INFO nova.compute.manager [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 0.23 seconds to deallocate network for instance.#033[00m
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3178421801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.764 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.766 2 DEBUG nova.virt.libvirt.vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.766 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.767 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.769 2 DEBUG nova.objects.instance [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.1 MiB/s wr, 199 op/s
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.791 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <uuid>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</uuid>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <name>instance-00000011</name>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:47:20</nova:creationTime>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <entry name="serial">5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <entry name="uuid">5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:6b:9e:4e"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <target dev="tap713e6030-0d"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log" append="off"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:47:21 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:47:21 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:47:21 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:47:21 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.792 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Preparing to wait for external event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.792 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.793 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.793 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.794 2 DEBUG nova.virt.libvirt.vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.794 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.795 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.796 2 DEBUG os_vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.800 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.800 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap713e6030-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap713e6030-0d, col_values=(('external_ids', {'iface-id': '713e6030-0d3f-41ae-9f66-c4591e2498e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:9e:4e', 'vm-uuid': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:21 np0005481065 NetworkManager[44960]: <info>  [1760172441.8231] manager: (tap713e6030-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.834 2 INFO os_vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d')#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.910 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.911 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.912 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:6b:9e:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.912 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Using config drive#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.945 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.961 2 DEBUG nova.network.neutron [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updated VIF entry in instance network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.962 2 DEBUG nova.network.neutron [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:21 np0005481065 nova_compute[260935]: 2025-10-11 08:47:21.969 2 DEBUG oslo_concurrency.processutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.007 2 DEBUG oslo_concurrency.lockutils [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 11 04:47:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 11 04:47:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 11 04:47:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256956480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.492 2 DEBUG oslo_concurrency.processutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.499 2 DEBUG nova.compute.provider_tree [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.513 2 DEBUG nova.scheduler.client.report [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.531 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.564 2 INFO nova.scheduler.client.report [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Deleted allocations for instance 682cad4b-4bab-4a36-9bd9-11e9de7b213a#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.600 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Creating config drive at /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.605 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tslbnzu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.638 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.751 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tslbnzu" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.792 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.797 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.854 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "fa95251f-1ce7-4a45-8f53-ae932716a172" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.854 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.855 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "fa95251f-1ce7-4a45-8f53-ae932716a172-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.855 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.856 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.858 2 INFO nova.compute.manager [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Terminating instance#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.859 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "refresh_cache-fa95251f-1ce7-4a45-8f53-ae932716a172" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.860 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquired lock "refresh_cache-fa95251f-1ce7-4a45-8f53-ae932716a172" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.860 2 DEBUG nova.network.neutron [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.989 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:22 np0005481065 nova_compute[260935]: 2025-10-11 08:47:22.990 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deleting local config drive /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config because it was imported into RBD.#033[00m
Oct 11 04:47:23 np0005481065 NetworkManager[44960]: <info>  [1760172443.0568] manager: (tap713e6030-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct 11 04:47:23 np0005481065 kernel: tap713e6030-0d: entered promiscuous mode
Oct 11 04:47:23 np0005481065 systemd-udevd[286860]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:23Z|00044|binding|INFO|Claiming lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 for this chassis.
Oct 11 04:47:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:23Z|00045|binding|INFO|713e6030-0d3f-41ae-9f66-c4591e2498e4: Claiming fa:16:3e:6b:9e:4e 10.100.0.10
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 NetworkManager[44960]: <info>  [1760172443.0807] device (tap713e6030-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:47:23 np0005481065 NetworkManager[44960]: <info>  [1760172443.0819] device (tap713e6030-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.087 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9e:4e 10.100.0.10'], port_security=['fa:16:3e:6b:9e:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45992a1d-2e72-47ad-a788-d3f5230a5526', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=713e6030-0d3f-41ae-9f66-c4591e2498e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.089 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 713e6030-0d3f-41ae-9f66-c4591e2498e4 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.091 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:47:23 np0005481065 systemd-machined[215705]: New machine qemu-17-instance-00000011.
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d54c97aa-29e8-4596-b037-a748134df35d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.112 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff13396-b1 in ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.116 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff13396-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[706d03af-a7db-4432-adac-8be1de30906e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.118 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2999396-ea7b-487a-9852-47be262e2e6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.147 2 DEBUG nova.network.neutron [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.147 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3e21d1-f0f2-49dd-a42c-77c139d37a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 11 04:47:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 11 04:47:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.187 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82483b1d-5fc0-42da-b33d-0c55d6f88e21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:23Z|00046|binding|INFO|Setting lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 ovn-installed in OVS
Oct 11 04:47:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:23Z|00047|binding|INFO|Setting lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 up in Southbound
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.229 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f67a42bf-6d3c-4a3c-a9c7-b36bb076ad69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 NetworkManager[44960]: <info>  [1760172443.2388] manager: (tapfff13396-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe329b-00c0-42be-ad5a-e005f6cc668d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.288 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9050206a-94dc-4629-9063-8f8a294109d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.292 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc88f76-9781-483c-b7d5-ab2449ec661d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 NetworkManager[44960]: <info>  [1760172443.3236] device (tapfff13396-b0): carrier: link connected
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.331 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d499b188-6fc9-48da-baf5-8f0374620065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a43b3faf-462f-4cbd-9b3b-64b1e3d9aceb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287072, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.382 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a38839a5-fcca-4da7-9a6c-59777a85bd40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a42d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428802, 'tstamp': 428802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287073, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.411 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caf49178-515e-45cd-90ac-8d062988135c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287074, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.468 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d990724-c304-4b81-a9b5-802f5a7cd812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.568 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaecf90-8bd8-4066-bf81-b344d54a6afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.570 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.570 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.571 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 NetworkManager[44960]: <info>  [1760172443.5746] manager: (tapfff13396-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 11 04:47:23 np0005481065 kernel: tapfff13396-b0: entered promiscuous mode
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:23Z|00048|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.619 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.621 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fae8b4de-bbe8-424c-9ddf-b3def6dc2d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.622 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:47:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.623 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'env', 'PROCESS_TAG=haproxy-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff13396-b787-4c6e-9112-a1c2ef57b26d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.729 2 DEBUG nova.network.neutron [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.752 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Releasing lock "refresh_cache-fa95251f-1ce7-4a45-8f53-ae932716a172" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.752 2 DEBUG nova.compute.manager [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:47:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 275 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 KiB/s wr, 420 op/s
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.791 2 DEBUG nova.compute.manager [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.794 2 DEBUG oslo_concurrency.lockutils [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.795 2 DEBUG oslo_concurrency.lockutils [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.795 2 DEBUG oslo_concurrency.lockutils [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.795 2 DEBUG nova.compute.manager [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Processing event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:47:23 np0005481065 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 11 04:47:23 np0005481065 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 13.727s CPU time.
Oct 11 04:47:23 np0005481065 systemd-machined[215705]: Machine qemu-14-instance-0000000e terminated.
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.981 2 INFO nova.virt.libvirt.driver [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance destroyed successfully.#033[00m
Oct 11 04:47:23 np0005481065 nova_compute[260935]: 2025-10-11 08:47:23.981 2 DEBUG nova.objects.instance [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'resources' on Instance uuid fa95251f-1ce7-4a45-8f53-ae932716a172 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:24 np0005481065 podman[287169]: 2025-10-11 08:47:24.143118289 +0000 UTC m=+0.063712533 container create 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:47:24 np0005481065 systemd[1]: Started libpod-conmon-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a.scope.
Oct 11 04:47:24 np0005481065 podman[287169]: 2025-10-11 08:47:24.109898309 +0000 UTC m=+0.030492603 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:47:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:47:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829cba9cb35af42a78e9e37b72195be70588984fe3032b82c532e66c2e3b0e2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:47:24 np0005481065 podman[287169]: 2025-10-11 08:47:24.247673797 +0000 UTC m=+0.168268041 container init 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:47:24 np0005481065 podman[287169]: 2025-10-11 08:47:24.255406498 +0000 UTC m=+0.176000742 container start 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:47:24 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : New worker (287192) forked
Oct 11 04:47:24 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : Loading success.
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.313 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172444.3123507, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.313 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Started (Lifecycle Event)#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.317 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.329 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.335 2 INFO nova.virt.libvirt.driver [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance spawned successfully.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.335 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.339 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.343 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.361 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.361 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.362 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.363 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.363 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.364 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.372 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.372 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172444.3126302, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.373 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.411 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.421 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172444.32918, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.421 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.428 2 INFO nova.virt.libvirt.driver [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deleting instance files /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172_del#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.429 2 INFO nova.virt.libvirt.driver [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deletion of /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172_del complete#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.436 2 INFO nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 8.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.437 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.438 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.447 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.483 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.494 2 INFO nova.compute.manager [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.495 2 DEBUG oslo.service.loopingcall [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.496 2 DEBUG nova.compute.manager [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.496 2 DEBUG nova.network.neutron [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.510 2 INFO nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 10.25 seconds to build instance.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.528 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.665 2 DEBUG nova.network.neutron [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.683 2 DEBUG nova.network.neutron [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.701 2 INFO nova.compute.manager [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 0.21 seconds to deallocate network for instance.#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.746 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.747 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:24 np0005481065 nova_compute[260935]: 2025-10-11 08:47:24.876 2 DEBUG oslo_concurrency.processutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677872699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:25 np0005481065 nova_compute[260935]: 2025-10-11 08:47:25.368 2 DEBUG oslo_concurrency.processutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:25 np0005481065 nova_compute[260935]: 2025-10-11 08:47:25.374 2 DEBUG nova.compute.provider_tree [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:25 np0005481065 nova_compute[260935]: 2025-10-11 08:47:25.391 2 DEBUG nova.scheduler.client.report [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:25 np0005481065 nova_compute[260935]: 2025-10-11 08:47:25.409 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:25 np0005481065 nova_compute[260935]: 2025-10-11 08:47:25.463 2 INFO nova.scheduler.client.report [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Deleted allocations for instance fa95251f-1ce7-4a45-8f53-ae932716a172#033[00m
Oct 11 04:47:25 np0005481065 nova_compute[260935]: 2025-10-11 08:47:25.532 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 275 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 9.5 KiB/s wr, 301 op/s
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.014 2 DEBUG nova.compute.manager [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.015 2 DEBUG oslo_concurrency.lockutils [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.016 2 DEBUG oslo_concurrency.lockutils [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.016 2 DEBUG oslo_concurrency.lockutils [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.017 2 DEBUG nova.compute.manager [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.017 2 WARNING nova.compute.manager [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.301 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "318c32b0-9990-4579-8abb-fc79e7460d77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.302 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.302 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "318c32b0-9990-4579-8abb-fc79e7460d77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.303 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.303 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.305 2 INFO nova.compute.manager [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Terminating instance#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.306 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.307 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquired lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.307 2 DEBUG nova.network.neutron [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.485 2 DEBUG nova.network.neutron [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:26 np0005481065 podman[287225]: 2025-10-11 08:47:26.806170652 +0000 UTC m=+0.086515704 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.808 2 DEBUG nova.network.neutron [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.823 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Releasing lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:26 np0005481065 nova_compute[260935]: 2025-10-11 08:47:26.824 2 DEBUG nova.compute.manager [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:47:26 np0005481065 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 11 04:47:26 np0005481065 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 14.615s CPU time.
Oct 11 04:47:26 np0005481065 systemd-machined[215705]: Machine qemu-13-instance-0000000d terminated.
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.051 2 INFO nova.virt.libvirt.driver [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance destroyed successfully.#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.053 2 DEBUG nova.objects.instance [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'resources' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.486 2 INFO nova.virt.libvirt.driver [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deleting instance files /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77_del#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.488 2 INFO nova.virt.libvirt.driver [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deletion of /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77_del complete#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.534 2 INFO nova.compute.manager [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.536 2 DEBUG oslo.service.loopingcall [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.536 2 DEBUG nova.compute.manager [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.537 2 DEBUG nova.network.neutron [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:47:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 38 KiB/s wr, 474 op/s
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.880 2 DEBUG nova.network.neutron [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.895 2 DEBUG nova.network.neutron [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.918 2 INFO nova.compute.manager [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 0.38 seconds to deallocate network for instance.#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.967 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:27 np0005481065 nova_compute[260935]: 2025-10-11 08:47:27.968 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:28 np0005481065 NetworkManager[44960]: <info>  [1760172448.0161] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 11 04:47:28 np0005481065 NetworkManager[44960]: <info>  [1760172448.0178] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.049 2 DEBUG oslo_concurrency.processutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:28Z|00049|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 11 04:47:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 11 04:47:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.256 2 DEBUG nova.compute.manager [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.256 2 DEBUG nova.compute.manager [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing instance network info cache due to event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.257 2 DEBUG oslo_concurrency.lockutils [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.257 2 DEBUG oslo_concurrency.lockutils [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.257 2 DEBUG nova.network.neutron [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:47:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828696214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.564 2 DEBUG oslo_concurrency.processutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.573 2 DEBUG nova.compute.provider_tree [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.591 2 DEBUG nova.scheduler.client.report [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.618 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.643 2 INFO nova.scheduler.client.report [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Deleted allocations for instance 318c32b0-9990-4579-8abb-fc79e7460d77#033[00m
Oct 11 04:47:28 np0005481065 nova_compute[260935]: 2025-10-11 08:47:28.712 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.342 2 DEBUG nova.network.neutron [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updated VIF entry in instance network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.343 2 DEBUG nova.network.neutron [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.365 2 DEBUG oslo_concurrency.lockutils [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.706 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.708 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.733 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:47:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 30 KiB/s wr, 349 op/s
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.810 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.811 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.819 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.820 2 INFO nova.compute.claims [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:47:29 np0005481065 nova_compute[260935]: 2025-10-11 08:47:29.960 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 11 04:47:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 11 04:47:30 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 11 04:47:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425366987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.454 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.463 2 DEBUG nova.compute.provider_tree [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.483 2 DEBUG nova.scheduler.client.report [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.511 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.512 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.567 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.580 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.597 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.683 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.685 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.686 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating image(s)
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.720 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.759 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.794 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.799 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.889 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.891 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.892 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.893 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.928 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:30 np0005481065 nova_compute[260935]: 2025-10-11 08:47:30.934 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.200 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 11 04:47:31 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.297 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] resizing rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.446 2 DEBUG nova.objects.instance [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'migration_context' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.463 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.464 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ensure instance console log exists: /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.464 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.465 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.465 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.466 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.470 2 WARNING nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.474 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.475 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.477 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.477 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.480 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.480 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.480 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.483 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:47:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 240 op/s
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:47:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160667198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.929 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.963 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:31 np0005481065 nova_compute[260935]: 2025-10-11 08:47:31.971 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 11 04:47:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 11 04:47:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 11 04:47:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614962514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.427 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.430 2 DEBUG nova.objects.instance [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'pci_devices' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.455 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <uuid>66086b61-46ca-4a1b-a9f0-692678bcbf7a</uuid>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <name>instance-00000012</name>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdmin275Test-server-132893026</nova:name>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:47:31</nova:creationTime>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:user uuid="7cfe9716527d49f18102a38c7480e208">tempest-ServersAdmin275Test-1935053767-project-member</nova:user>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <nova:project uuid="7fdd898b69404913a643940b3869140b">tempest-ServersAdmin275Test-1935053767</nova:project>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <entry name="serial">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <entry name="uuid">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log" append="off"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:47:32 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:47:32 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:47:32 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:47:32 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.551 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.552 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.553 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Using config drive#033[00m
Oct 11 04:47:32 np0005481065 nova_compute[260935]: 2025-10-11 08:47:32.598 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:32 np0005481065 podman[287541]: 2025-10-11 08:47:32.610450625 +0000 UTC m=+0.097763076 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:47:32 np0005481065 podman[287542]: 2025-10-11 08:47:32.639434053 +0000 UTC m=+0.113197536 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 04:47:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 11 04:47:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 11 04:47:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 11 04:47:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 134 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 5.3 MiB/s wr, 272 op/s
Oct 11 04:47:33 np0005481065 nova_compute[260935]: 2025-10-11 08:47:33.846 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating config drive at /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config#033[00m
Oct 11 04:47:33 np0005481065 nova_compute[260935]: 2025-10-11 08:47:33.856 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnn6pm873 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:34 np0005481065 nova_compute[260935]: 2025-10-11 08:47:34.005 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnn6pm873" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:34 np0005481065 nova_compute[260935]: 2025-10-11 08:47:34.047 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:34 np0005481065 nova_compute[260935]: 2025-10-11 08:47:34.052 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:34 np0005481065 nova_compute[260935]: 2025-10-11 08:47:34.258 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:34 np0005481065 nova_compute[260935]: 2025-10-11 08:47:34.259 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting local config drive /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config because it was imported into RBD.#033[00m
Oct 11 04:47:34 np0005481065 systemd-machined[215705]: New machine qemu-18-instance-00000012.
Oct 11 04:47:34 np0005481065 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.622 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172455.6214416, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.624 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.629 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.631 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.637 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance spawned successfully.#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.638 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.650 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.660 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.669 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.670 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.671 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.672 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.673 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.674 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.684 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.685 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172455.6231475, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.686 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.719 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:35Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:9e:4e 10.100.0.10
Oct 11 04:47:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:35Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:9e:4e 10.100.0.10
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.725 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.734 2 INFO nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 5.05 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.734 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.745 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:47:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 134 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 3.8 MiB/s wr, 195 op/s
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.789 2 INFO nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 6.01 seconds to build instance.#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.805 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.911 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172440.9105935, 682cad4b-4bab-4a36-9bd9-11e9de7b213a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.912 2 INFO nova.compute.manager [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:47:35 np0005481065 nova_compute[260935]: 2025-10-11 08:47:35.933 2 DEBUG nova.compute.manager [None req-e0794b61-391d-4e91-9f24-fcfbba0ecccc - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:36 np0005481065 nova_compute[260935]: 2025-10-11 08:47:36.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:36 np0005481065 nova_compute[260935]: 2025-10-11 08:47:36.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:47:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440883750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:47:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:47:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440883750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:47:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.2 MiB/s wr, 436 op/s
Oct 11 04:47:37 np0005481065 nova_compute[260935]: 2025-10-11 08:47:37.991 2 INFO nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Rebuilding instance#033[00m
Oct 11 04:47:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 11 04:47:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 11 04:47:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.260 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.311 2 DEBUG nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.455 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'pci_requests' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.474 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'pci_devices' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.494 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'resources' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.527 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'migration_context' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.560 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.565 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.978 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172443.9770055, fa95251f-1ce7-4a45-8f53-ae932716a172 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:38 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.978 2 INFO nova.compute.manager [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:47:39 np0005481065 nova_compute[260935]: 2025-10-11 08:47:38.999 2 DEBUG nova.compute.manager [None req-6bf2ec43-1071-43bd-a20a-385d195fdc2c - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 327 op/s
Oct 11 04:47:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:40Z|00050|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:47:40 np0005481065 nova_compute[260935]: 2025-10-11 08:47:40.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Oct 11 04:47:41 np0005481065 nova_compute[260935]: 2025-10-11 08:47:41.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:41 np0005481065 nova_compute[260935]: 2025-10-11 08:47:41.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:42 np0005481065 nova_compute[260935]: 2025-10-11 08:47:42.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172447.0485742, 318c32b0-9990-4579-8abb-fc79e7460d77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:42 np0005481065 nova_compute[260935]: 2025-10-11 08:47:42.050 2 INFO nova.compute.manager [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:47:42 np0005481065 nova_compute[260935]: 2025-10-11 08:47:42.069 2 DEBUG nova.compute.manager [None req-9fa6736e-d1be-4f9b-98bf-8de5cf561552 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 11 04:47:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 11 04:47:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 11 04:47:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 221 op/s
Oct 11 04:47:45 np0005481065 nova_compute[260935]: 2025-10-11 08:47:45.365 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:45 np0005481065 nova_compute[260935]: 2025-10-11 08:47:45.366 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:45 np0005481065 nova_compute[260935]: 2025-10-11 08:47:45.366 2 DEBUG nova.objects.instance [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:47:46 np0005481065 nova_compute[260935]: 2025-10-11 08:47:46.229 2 DEBUG nova.objects.instance [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:46 np0005481065 nova_compute[260935]: 2025-10-11 08:47:46.244 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:47:46 np0005481065 nova_compute[260935]: 2025-10-11 08:47:46.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:46 np0005481065 nova_compute[260935]: 2025-10-11 08:47:46.471 2 DEBUG nova.policy [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:47:46 np0005481065 nova_compute[260935]: 2025-10-11 08:47:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:46 np0005481065 nova_compute[260935]: 2025-10-11 08:47:46.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:47 np0005481065 nova_compute[260935]: 2025-10-11 08:47:47.436 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully created port: 0ae5b718-3374-4544-8f79-2f11854381cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:47:47 np0005481065 nova_compute[260935]: 2025-10-11 08:47:47.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 374 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Oct 11 04:47:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.637 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.730 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully updated port: 0ae5b718-3374-4544-8f79-2f11854381cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.754 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.755 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.755 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.847 2 DEBUG nova.compute.manager [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-changed-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.848 2 DEBUG nova.compute.manager [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing instance network info cache due to event network-changed-0ae5b718-3374-4544-8f79-2f11854381cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:47:48 np0005481065 nova_compute[260935]: 2025-10-11 08:47:48.849 2 DEBUG oslo_concurrency.lockutils [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:49 np0005481065 nova_compute[260935]: 2025-10-11 08:47:49.170 2 WARNING nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it#033[00m
Oct 11 04:47:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Oct 11 04:47:50 np0005481065 podman[287698]: 2025-10-11 08:47:50.806699688 +0000 UTC m=+0.093584466 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:47:50 np0005481065 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 04:47:50 np0005481065 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 13.154s CPU time.
Oct 11 04:47:50 np0005481065 systemd-machined[215705]: Machine qemu-18-instance-00000012 terminated.
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.505 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.527 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.528 2 DEBUG oslo_concurrency.lockutils [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.528 2 DEBUG nova.network.neutron [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing network info cache for port 0ae5b718-3374-4544-8f79-2f11854381cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.533 2 DEBUG nova.virt.libvirt.vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.533 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.535 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.535 2 DEBUG os_vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ae5b718-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ae5b718-33, col_values=(('external_ids', {'iface-id': '0ae5b718-3374-4544-8f79-2f11854381cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:fe:cf', 'vm-uuid': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:51 np0005481065 NetworkManager[44960]: <info>  [1760172471.5464] manager: (tap0ae5b718-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.560 2 INFO os_vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33')#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.562 2 DEBUG nova.virt.libvirt.vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.562 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.563 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.567 2 DEBUG nova.virt.libvirt.guest [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:3e:fe:cf"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <target dev="tap0ae5b718-33"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:47:51 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 04:47:51 np0005481065 NetworkManager[44960]: <info>  [1760172471.5848] manager: (tap0ae5b718-33): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct 11 04:47:51 np0005481065 systemd-udevd[287714]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:47:51 np0005481065 kernel: tap0ae5b718-33: entered promiscuous mode
Oct 11 04:47:51 np0005481065 NetworkManager[44960]: <info>  [1760172471.6338] device (tap0ae5b718-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:47:51 np0005481065 NetworkManager[44960]: <info>  [1760172471.6370] device (tap0ae5b718-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:47:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:51Z|00051|binding|INFO|Claiming lport 0ae5b718-3374-4544-8f79-2f11854381cd for this chassis.
Oct 11 04:47:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:51Z|00052|binding|INFO|0ae5b718-3374-4544-8f79-2f11854381cd: Claiming fa:16:3e:3e:fe:cf 10.100.0.14
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.650 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:fe:cf 10.100.0.14'], port_security=['fa:16:3e:3e:fe:cf 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae5b718-3374-4544-8f79-2f11854381cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.652 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.654 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae5b718-3374-4544-8f79-2f11854381cd in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.658 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:47:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:51Z|00053|binding|INFO|Setting lport 0ae5b718-3374-4544-8f79-2f11854381cd ovn-installed in OVS
Oct 11 04:47:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:51Z|00054|binding|INFO|Setting lport 0ae5b718-3374-4544-8f79-2f11854381cd up in Southbound
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.677 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.687 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.686 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca77ca6b-3a37-480a-bc26-01b57923cc10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.719 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.719 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.719 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:6b:9e:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.720 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:3e:fe:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.732 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1b1cbe-204c-470f-8f1c-c3a07e436dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.737 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a22556f-52e7-49b9-bf26-755970131fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.757 2 DEBUG nova.virt.libvirt.guest [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:47:51</nova:creationTime>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 04:47:51 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    <nova:port uuid="0ae5b718-3374-4544-8f79-2f11854381cd">
Oct 11 04:47:51 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:51 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:47:51 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:47:51 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.778 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebfd00a-2f95-487d-a598-36fe395c485a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.791 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.811 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[163b7040-3427-4443-bf05-5e1efef0a10b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287748, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.839 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38d42948-5986-43c1-b623-7629e96b158d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428821, 'tstamp': 428821}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287749, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428826, 'tstamp': 428826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287749, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.842 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:51 np0005481065 nova_compute[260935]: 2025-10-11 08:47:51.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.849 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.849 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.850 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.850 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.097 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting instance files /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.098 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deletion of /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del complete#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.234 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.235 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating image(s)#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.262 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.293 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.322 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.325 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.326 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.654 2 DEBUG nova.virt.libvirt.imagebackend [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/95632eb9-5895-4e20-b760-0f149aadf400/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/95632eb9-5895-4e20-b760-0f149aadf400/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.904 2 DEBUG nova.network.neutron [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updated VIF entry in instance network info cache for port 0ae5b718-3374-4544-8f79-2f11854381cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.905 2 DEBUG nova.network.neutron [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:47:52 np0005481065 nova_compute[260935]: 2025-10-11 08:47:52.922 2 DEBUG oslo_concurrency.lockutils [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:47:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:53Z|00055|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.762 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 121 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.826 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.828 2 DEBUG nova.virt.images [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] 95632eb9-5895-4e20-b760-0f149aadf400 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.829 2 DEBUG nova.privsep.utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 11 04:47:53 np0005481065 nova_compute[260935]: 2025-10-11 08:47:53.830 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:53Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:fe:cf 10.100.0.14
Oct 11 04:47:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:53Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:fe:cf 10.100.0.14
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.009 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.017 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.110 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.114 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.147 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.152 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.463 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.556 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] resizing rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.688 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.689 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ensure instance console log exists: /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.690 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.691 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.691 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.695 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.701 2 WARNING nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.712 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.713 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.719 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.720 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.721 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.721 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.722 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.723 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.723 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.724 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.724 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.724 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.725 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.725 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.726 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.726 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.727 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:47:54 np0005481065 nova_compute[260935]: 2025-10-11 08:47:54.752 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:47:54
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'backups']
Oct 11 04:47:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.272 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.273 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059503799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.295 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.299 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.337 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.343 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.416 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.417 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.426 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.426 2 INFO nova.compute.claims [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.553 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:47:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 121 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Oct 11 04:47:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:47:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958598207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.818 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.821 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <uuid>66086b61-46ca-4a1b-a9f0-692678bcbf7a</uuid>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <name>instance-00000012</name>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdmin275Test-server-132893026</nova:name>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:47:54</nova:creationTime>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:user uuid="7cfe9716527d49f18102a38c7480e208">tempest-ServersAdmin275Test-1935053767-project-member</nova:user>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <nova:project uuid="7fdd898b69404913a643940b3869140b">tempest-ServersAdmin275Test-1935053767</nova:project>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <entry name="serial">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <entry name="uuid">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log" append="off"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:47:55 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:47:55 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:47:55 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:47:55 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.892 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.893 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.893 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Using config drive#033[00m
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.923 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:55 np0005481065 nova_compute[260935]: 2025-10-11 08:47:55.947 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.028 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'keypairs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:47:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579184716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.073 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.080 2 DEBUG nova.compute.provider_tree [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.100 2 DEBUG nova.scheduler.client.report [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.128 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.129 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.183 2 DEBUG nova.compute.manager [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.184 2 DEBUG oslo_concurrency.lockutils [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.184 2 DEBUG oslo_concurrency.lockutils [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.185 2 DEBUG oslo_concurrency.lockutils [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.185 2 DEBUG nova.compute.manager [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.186 2 WARNING nova.compute.manager [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with vm_state active and task_state None.#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.191 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.192 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.223 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.245 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.458 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.461 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.461 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Creating image(s)#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.490 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.516 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.547 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.552 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.646 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.646 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.647 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.648 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.675 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.679 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.712 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating config drive at /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.717 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8ivjij3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.767 2 DEBUG nova.policy [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16055681fed745bb89347149995b8486', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc753a4e96fc46008b6e6b1fd29b160d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.864 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8ivjij3" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:56.873 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:47:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:56.875 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.906 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.911 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:56 np0005481065 nova_compute[260935]: 2025-10-11 08:47:56.965 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.028 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] resizing rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.059 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.060 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting local config drive /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config because it was imported into RBD.#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.119 2 DEBUG nova.objects.instance [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lazy-loading 'migration_context' on Instance uuid f6e6ccd5-d393-4fa3-bf88-491311678dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:57 np0005481065 systemd-machined[215705]: New machine qemu-19-instance-00000012.
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.144 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.144 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Ensure instance console log exists: /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.145 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.146 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.146 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:57 np0005481065 systemd[1]: Started Virtual Machine qemu-19-instance-00000012.
Oct 11 04:47:57 np0005481065 podman[288245]: 2025-10-11 08:47:57.202076317 +0000 UTC m=+0.069740575 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.556 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Successfully created port: 128b1135-2e8f-4e78-8e09-e16b082e9225 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:57 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.999 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 66086b61-46ca-4a1b-a9f0-692678bcbf7a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:57.999 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172477.998533, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.002 2 DEBUG nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.002 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.005 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance spawned successfully.#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.005 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.026 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.029 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.040 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.040 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.040 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.041 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.041 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.042 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.067 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.068 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172478.0016768, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.068 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.072 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-0ae5b718-3374-4544-8f79-2f11854381cd" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.072 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-0ae5b718-3374-4544-8f79-2f11854381cd" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.099 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.100 2 DEBUG nova.objects.instance [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.103 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.111 2 DEBUG nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.137 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.140 2 DEBUG nova.virt.libvirt.vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.140 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.141 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.143 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.146 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.148 2 DEBUG nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tap0ae5b718-33 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.149 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:3e:fe:cf"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <target dev="tap0ae5b718-33"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.157 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.161 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface>not found in domain: <domain type='kvm' id='17'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <name>instance-00000011</name>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <uuid>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</uuid>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:47:51</nova:creationTime>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:port uuid="0ae5b718-3374-4544-8f79-2f11854381cd">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <resource>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </resource>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='serial'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='uuid'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk' index='2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config' index='1'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:6b:9e:4e'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='tap713e6030-0d'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:3e:fe:cf'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='tap0ae5b718-33'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='net1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c460,c984</label>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c460,c984</imagelabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.175 2 INFO nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tap0ae5b718-33 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the persistent domain config.#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.176 2 DEBUG nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tap0ae5b718-33 with device alias net1 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.176 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:3e:fe:cf"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <target dev="tap0ae5b718-33"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.183 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.184 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.184 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:47:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.231 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:58 np0005481065 kernel: tap0ae5b718-33 (unregistering): left promiscuous mode
Oct 11 04:47:58 np0005481065 NetworkManager[44960]: <info>  [1760172478.2460] device (tap0ae5b718-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:58Z|00056|binding|INFO|Releasing lport 0ae5b718-3374-4544-8f79-2f11854381cd from this chassis (sb_readonly=0)
Oct 11 04:47:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:58Z|00057|binding|INFO|Setting lport 0ae5b718-3374-4544-8f79-2f11854381cd down in Southbound
Oct 11 04:47:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:47:58Z|00058|binding|INFO|Removing iface tap0ae5b718-33 ovn-installed in OVS
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.262 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:fe:cf 10.100.0.14'], port_security=['fa:16:3e:3e:fe:cf 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae5b718-3374-4544-8f79-2f11854381cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.263 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae5b718-3374-4544-8f79-2f11854381cd in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.264 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.280 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79f4d222-67fc-4b87-80dc-61e0939abdb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.286 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172478.286079, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.288 2 DEBUG nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tap0ae5b718-33 with device alias net1 for instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.289 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.302 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface>not found in domain: <domain type='kvm' id='17'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <name>instance-00000011</name>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <uuid>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</uuid>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:47:51</nova:creationTime>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:port uuid="0ae5b718-3374-4544-8f79-2f11854381cd">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <resource>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </resource>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='serial'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='uuid'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk' index='2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config' index='1'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:6b:9e:4e'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target dev='tap713e6030-0d'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c460,c984</label>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c460,c984</imagelabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.310 2 INFO nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tap0ae5b718-33 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the live domain config.
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.311 2 DEBUG nova.virt.libvirt.vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.311 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.312 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.313 2 DEBUG os_vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ae5b718-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.320 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eed7dbc2-027d-4515-ab15-d282e4088e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.325 2 INFO os_vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33')#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.325 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:47:58</nova:creationTime>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 04:47:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:47:58 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:47:58 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.326 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5980e7ad-c531-4479-abc4-3a98c504bd75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.361 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f885cd05-33a7-4aef-9db1-f7501942bf3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0b09b9-f2d6-4a3c-b451-bdbdf7f72187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288326, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bedd155-8046-4d59-941a-79d2dc1ab0c7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428821, 'tstamp': 428821}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288327, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428826, 'tstamp': 428826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288327, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.416 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.422 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.422 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.423 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:47:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.424 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.715 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.716 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 04:47:58 np0005481065 nova_compute[260935]: 2025-10-11 08:47:58.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.336 2 DEBUG nova.compute.manager [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.336 2 DEBUG oslo_concurrency.lockutils [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.337 2 DEBUG oslo_concurrency.lockutils [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.337 2 DEBUG oslo_concurrency.lockutils [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.337 2 DEBUG nova.compute.manager [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.337 2 WARNING nova.compute.manager [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with vm_state active and task_state None.#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.344 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.344 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.344 2 DEBUG nova.network.neutron [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.387 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Successfully updated port: 128b1135-2e8f-4e78-8e09-e16b082e9225 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.408 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.408 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.408 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.495 2 DEBUG nova.compute.manager [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.496 2 DEBUG nova.compute.manager [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing instance network info cache due to event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.496 2 DEBUG oslo_concurrency.lockutils [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.586 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:47:59 np0005481065 nova_compute[260935]: 2025-10-11 08:47:59.723 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:47:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 11 04:47:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:47:59.878 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:00 np0005481065 nova_compute[260935]: 2025-10-11 08:48:00.416 2 INFO nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Rebuilding instance#033[00m
Oct 11 04:48:00 np0005481065 nova_compute[260935]: 2025-10-11 08:48:00.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:00 np0005481065 nova_compute[260935]: 2025-10-11 08:48:00.932 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:00 np0005481065 nova_compute[260935]: 2025-10-11 08:48:00.950 2 DEBUG nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:00 np0005481065 nova_compute[260935]: 2025-10-11 08:48:00.997 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'pci_requests' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.007 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.018 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'resources' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.030 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'migration_context' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.041 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.046 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.072 2 INFO nova.network.neutron [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Port 0ae5b718-3374-4544-8f79-2f11854381cd from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.073 2 DEBUG nova.network.neutron [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.101 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.104 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.126 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-0ae5b718-3374-4544-8f79-2f11854381cd" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.130 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.130 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance network_info: |[{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.131 2 DEBUG oslo_concurrency.lockutils [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.131 2 DEBUG nova.network.neutron [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.134 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start _get_guest_xml network_info=[{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.138 2 WARNING nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.142 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.143 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.146 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.146 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.146 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.147 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.147 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.147 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.152 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.214 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.215 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.215 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.216 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.216 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.217 2 INFO nova.compute.manager [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Terminating instance#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.218 2 DEBUG nova.compute.manager [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:48:01 np0005481065 kernel: tap713e6030-0d (unregistering): left promiscuous mode
Oct 11 04:48:01 np0005481065 NetworkManager[44960]: <info>  [1760172481.2846] device (tap713e6030-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:01Z|00059|binding|INFO|Releasing lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 from this chassis (sb_readonly=0)
Oct 11 04:48:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:01Z|00060|binding|INFO|Setting lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 down in Southbound
Oct 11 04:48:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:01Z|00061|binding|INFO|Removing iface tap713e6030-0d ovn-installed in OVS
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.310 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9e:4e 10.100.0.10'], port_security=['fa:16:3e:6b:9e:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45992a1d-2e72-47ad-a788-d3f5230a5526', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=713e6030-0d3f-41ae-9f66-c4591e2498e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.312 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 713e6030-0d3f-41ae-9f66-c4591e2498e4 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.314 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff13396-b787-4c6e-9112-a1c2ef57b26d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.315 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1435ab6b-14c7-4b2b-8105-32a77c79fd3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.316 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace which is not needed anymore#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 11 04:48:01 np0005481065 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.191s CPU time.
Oct 11 04:48:01 np0005481065 systemd-machined[215705]: Machine qemu-17-instance-00000011 terminated.
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.463 2 INFO nova.virt.libvirt.driver [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance destroyed successfully.#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.464 2 DEBUG nova.objects.instance [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.503 2 DEBUG nova.virt.libvirt.vif [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.504 2 DEBUG nova.network.os_vif_util [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.505 2 DEBUG nova.network.os_vif_util [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.505 2 DEBUG os_vif [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap713e6030-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.515 2 INFO os_vif [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d')#033[00m
Oct 11 04:48:01 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : haproxy version is 2.8.14-c23fe91
Oct 11 04:48:01 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : path to executable is /usr/sbin/haproxy
Oct 11 04:48:01 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [WARNING]  (287190) : Exiting Master process...
Oct 11 04:48:01 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [ALERT]    (287190) : Current worker (287192) exited with code 143 (Terminated)
Oct 11 04:48:01 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [WARNING]  (287190) : All workers exited. Exiting... (0)
Oct 11 04:48:01 np0005481065 systemd[1]: libpod-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a.scope: Deactivated successfully.
Oct 11 04:48:01 np0005481065 podman[288372]: 2025-10-11 08:48:01.539605424 +0000 UTC m=+0.075051306 container died 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.559 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-unplugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.560 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.561 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.561 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.561 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-unplugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-unplugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.563 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.563 2 WARNING nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.563 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-deleted-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a-userdata-shm.mount: Deactivated successfully.
Oct 11 04:48:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-829cba9cb35af42a78e9e37b72195be70588984fe3032b82c532e66c2e3b0e2c-merged.mount: Deactivated successfully.
Oct 11 04:48:01 np0005481065 podman[288372]: 2025-10-11 08:48:01.61362505 +0000 UTC m=+0.149070972 container cleanup 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 04:48:01 np0005481065 systemd[1]: libpod-conmon-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a.scope: Deactivated successfully.
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.707 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.707 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 04:48:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925108110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:01 np0005481065 podman[288428]: 2025-10-11 08:48:01.729938615 +0000 UTC m=+0.069084116 container remove 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.736 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.747 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8031769-5b89-416c-b293-5f003bba4196]: (4, ('Sat Oct 11 08:48:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a)\n779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a\nSat Oct 11 08:48:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a)\n779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cec30304-8503-4206-b84c-e5f0011782d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.757 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:01 np0005481065 kernel: tapfff13396-b0: left promiscuous mode
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.785 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be660f03-31a0-4a63-9961-8079e4ea2f09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.788 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.804 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[689fe06b-c3b6-4fd6-bc16-d237bac9f35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.805 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3550bd22-c0cd-4c19-8dd6-8087bafd167a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30c1cd57-bdec-4050-a7b3-03e47f6da66d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428792, 'reachable_time': 16857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288462, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfff13396\x2db787\x2d4c6e\x2d9112\x2da1c2ef57b26d.mount: Deactivated successfully.
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.834 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:48:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.835 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe7eaab-b9e7-49d2-8972-7dfb3eadbedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:01 np0005481065 nova_compute[260935]: 2025-10-11 08:48:01.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.104 2 INFO nova.virt.libvirt.driver [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deleting instance files /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_del#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.105 2 INFO nova.virt.libvirt.driver [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deletion of /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_del complete#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.155 2 INFO nova.compute.manager [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.155 2 DEBUG oslo.service.loopingcall [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.156 2 DEBUG nova.compute.manager [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.156 2 DEBUG nova.network.neutron [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:48:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227162184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.363 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.365 2 DEBUG nova.virt.libvirt.vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-212756275',id=19,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc753a4e96fc46008b6e6b1fd29b160d',ramdisk_id='',reservation_id='r-wv642fxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:56Z,user_data=None,user_id='16055681fed745bb89347149995b8486',uuid=f6e6ccd5-d393-4fa3-bf88-491311678dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.365 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converting VIF {"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.366 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.367 2 DEBUG nova.objects.instance [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lazy-loading 'pci_devices' on Instance uuid f6e6ccd5-d393-4fa3-bf88-491311678dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.380 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <uuid>f6e6ccd5-d393-4fa3-bf88-491311678dd1</uuid>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <name>instance-00000013</name>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755</nova:name>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:48:01</nova:creationTime>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:user uuid="16055681fed745bb89347149995b8486">tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member</nova:user>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:project uuid="dc753a4e96fc46008b6e6b1fd29b160d">tempest-FloatingIPsAssociationNegativeTestJSON-713335751</nova:project>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <nova:port uuid="128b1135-2e8f-4e78-8e09-e16b082e9225">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <entry name="serial">f6e6ccd5-d393-4fa3-bf88-491311678dd1</entry>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <entry name="uuid">f6e6ccd5-d393-4fa3-bf88-491311678dd1</entry>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ac:95:81"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <target dev="tap128b1135-2e"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/console.log" append="off"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:48:02 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:48:02 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:48:02 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:48:02 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.380 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Preparing to wait for external event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.381 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.381 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.381 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.382 2 DEBUG nova.virt.libvirt.vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-212756275',id=19,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc753a4e96fc46008b6e6b1fd29b160d',ramdisk_id='',reservation_id='r-wv642fxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:56Z,user_data=None,user_id='16055681fed745bb89347149995b8486',uuid=f6e6ccd5-d393-4fa3-bf88-491311678dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.382 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converting VIF {"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.382 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.383 2 DEBUG os_vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap128b1135-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.388 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap128b1135-2e, col_values=(('external_ids', {'iface-id': '128b1135-2e8f-4e78-8e09-e16b082e9225', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:95:81', 'vm-uuid': 'f6e6ccd5-d393-4fa3-bf88-491311678dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:02 np0005481065 NetworkManager[44960]: <info>  [1760172482.3917] manager: (tap128b1135-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.399 2 INFO os_vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e')#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.450 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.450 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.450 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] No VIF found with MAC fa:16:3e:ac:95:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.451 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Using config drive#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.475 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.822 2 DEBUG nova.network.neutron [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updated VIF entry in instance network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.824 2 DEBUG nova.network.neutron [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:02 np0005481065 podman[288506]: 2025-10-11 08:48:02.830313418 +0000 UTC m=+0.115861232 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.841 2 DEBUG oslo_concurrency.lockutils [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:02 np0005481065 podman[288507]: 2025-10-11 08:48:02.919043464 +0000 UTC m=+0.203517367 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.936 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Creating config drive at /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config#033[00m
Oct 11 04:48:02 np0005481065 nova_compute[260935]: 2025-10-11 08:48:02.944 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppk9f8g6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.079 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppk9f8g6j" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.116 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.121 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.161 2 DEBUG nova.network.neutron [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.169 2 DEBUG nova.compute.manager [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-deleted-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.170 2 INFO nova.compute.manager [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Neutron deleted interface 713e6030-0d3f-41ae-9f66-c4591e2498e4; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.171 2 DEBUG nova.network.neutron [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.178 2 INFO nova.compute.manager [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.191 2 DEBUG nova.compute.manager [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Detach interface failed, port_id=713e6030-0d3f-41ae-9f66-c4591e2498e4, reason: Instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 04:48:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.237 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.238 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.317 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.318 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deleting local config drive /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config because it was imported into RBD.#033[00m
Oct 11 04:48:03 np0005481065 kernel: tap128b1135-2e: entered promiscuous mode
Oct 11 04:48:03 np0005481065 NetworkManager[44960]: <info>  [1760172483.3950] manager: (tap128b1135-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:03 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:03Z|00062|binding|INFO|Claiming lport 128b1135-2e8f-4e78-8e09-e16b082e9225 for this chassis.
Oct 11 04:48:03 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:03Z|00063|binding|INFO|128b1135-2e8f-4e78-8e09-e16b082e9225: Claiming fa:16:3e:ac:95:81 10.100.0.4
Oct 11 04:48:03 np0005481065 systemd-udevd[288335]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.403 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:95:81 10.100.0.4'], port_security=['fa:16:3e:ac:95:81 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f6e6ccd5-d393-4fa3-bf88-491311678dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc753a4e96fc46008b6e6b1fd29b160d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd907eed5-bfee-42df-bbfe-0d6a84057302', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8656b2b4-2d06-42a4-a59e-112920fcccd9, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=128b1135-2e8f-4e78-8e09-e16b082e9225) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.405 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 128b1135-2e8f-4e78-8e09-e16b082e9225 in datapath 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b bound to our chassis#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.407 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.428 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed20d4d-c4c2-446c-b5ff-3eae2396904a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.430 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap678d17d5-51 in ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.426 2 DEBUG oslo_concurrency.processutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:03 np0005481065 NetworkManager[44960]: <info>  [1760172483.4341] device (tap128b1135-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:48:03 np0005481065 NetworkManager[44960]: <info>  [1760172483.4356] device (tap128b1135-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.435 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap678d17d5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.435 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c44d3566-03ac-409d-959a-15bb0ef5dee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.438 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[638ec3c9-04aa-41c9-a1d7-e5715d09dd5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:03Z|00064|binding|INFO|Setting lport 128b1135-2e8f-4e78-8e09-e16b082e9225 ovn-installed in OVS
Oct 11 04:48:03 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:03Z|00065|binding|INFO|Setting lport 128b1135-2e8f-4e78-8e09-e16b082e9225 up in Southbound
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.456 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b08cc99a-9b2e-48f1-b3ec-9be696f3238c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 systemd-machined[215705]: New machine qemu-20-instance-00000013.
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:03 np0005481065 systemd[1]: Started Virtual Machine qemu-20-instance-00000013.
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.488 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7a953c-0e68-4b8c-a7ac-9eba5026a242]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.532 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[01f624de-a3b9-4bc3-9b38-441454621d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 NetworkManager[44960]: <info>  [1760172483.5450] manager: (tap678d17d5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.549 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a18cdf1a-eeb4-4cc5-846b-885f20d71ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.598 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fbacd693-51e9-414a-8540-f82406511a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.601 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[31a3b2c5-1472-4586-829f-f777d6fdd3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.606 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-unplugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.607 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.607 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.608 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.609 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-unplugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.610 2 WARNING nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-unplugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.610 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.611 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.612 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.613 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.613 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.614 2 WARNING nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:48:03 np0005481065 NetworkManager[44960]: <info>  [1760172483.6310] device (tap678d17d5-50): carrier: link connected
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.636 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa583c0a-1691-4fa3-8f58-4e06c841cf72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.661 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b0c30-ceac-4bd4-8b13-4f2695ba6633]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap678d17d5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:85:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432833, 'reachable_time': 22326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288655, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe90ea5-ad5c-47a3-9f12-efa5fe470041]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:85ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432833, 'tstamp': 432833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288656, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.706 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9564193-f5f9-474a-8b0f-8f9bb4cd0dad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap678d17d5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:85:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432833, 'reachable_time': 22326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288657, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.726 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[77c21a5f-4187-4636-b46d-64ecc617efb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba24c921-b670-4559-97dd-128bcd8202fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.845 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap678d17d5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.846 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.846 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap678d17d5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:03 np0005481065 kernel: tap678d17d5-50: entered promiscuous mode
Oct 11 04:48:03 np0005481065 NetworkManager[44960]: <info>  [1760172483.8499] manager: (tap678d17d5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.856 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap678d17d5-50, col_values=(('external_ids', {'iface-id': '728422f2-62be-41c7-90af-4ff751731213'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:03 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:03Z|00066|binding|INFO|Releasing lport 728422f2-62be-41c7-90af-4ff751731213 from this chassis (sb_readonly=0)
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.860 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.861 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2c8db-2181-442a-9b25-6b2bb41f6117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.862 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.pid.haproxy
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:48:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.863 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'env', 'PROCESS_TAG=haproxy-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469220891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.947 2 DEBUG oslo_concurrency.processutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.957 2 DEBUG nova.compute.provider_tree [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:48:03 np0005481065 nova_compute[260935]: 2025-10-11 08:48:03.977 2 DEBUG nova.scheduler.client.report [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.014 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.019 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.019 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.020 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.021 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.098 2 INFO nova.scheduler.client.report [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.163 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:04 np0005481065 podman[288753]: 2025-10-11 08:48:04.372733418 +0000 UTC m=+0.093243537 container create e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:48:04 np0005481065 podman[288753]: 2025-10-11 08:48:04.33013208 +0000 UTC m=+0.050642229 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:48:04 np0005481065 systemd[1]: Started libpod-conmon-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec.scope.
Oct 11 04:48:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697bbf5f67b57c75a7813d378343f8ad0bc5c688674a61d7b6592052ffd17eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092725551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:04 np0005481065 podman[288753]: 2025-10-11 08:48:04.493542551 +0000 UTC m=+0.214052730 container init e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:48:04 np0005481065 podman[288753]: 2025-10-11 08:48:04.499043328 +0000 UTC m=+0.219553447 container start e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.500 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:04 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : New worker (288777) forked
Oct 11 04:48:04 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : Loading success.
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.530 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172484.5280967, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.531 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Started (Lifecycle Event)#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.550 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.554 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172484.5282233, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.555 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000694346938692453 of space, bias 1.0, pg target 0.2083040816077359 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:48:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.575 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.587 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.587 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.588 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.594 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.595 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.617 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.829 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.831 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4313MB free_disk=59.94662857055664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.831 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.832 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.934 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 66086b61-46ca-4a1b-a9f0-692678bcbf7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.934 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance f6e6ccd5-d393-4fa3-bf88-491311678dd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.935 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:48:04 np0005481065 nova_compute[260935]: 2025-10-11 08:48:04.935 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.023 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778709954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.498 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.505 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.526 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.550 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.550 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.762 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.763 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.763 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.764 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.764 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Processing event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.764 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.765 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.765 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.765 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.766 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] No waiting events found dispatching network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.766 2 WARNING nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received unexpected event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 for instance with vm_state building and task_state spawning.
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.767 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.776 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172485.7758877, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.776 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Resumed (Lifecycle Event)
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.778 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.783 2 INFO nova.virt.libvirt.driver [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance spawned successfully.
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.783 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:48:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.796 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.801 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.809 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.810 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.810 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.811 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.811 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.812 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.837 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.873 2 INFO nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 9.41 seconds to spawn the instance on the hypervisor.
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.874 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.971 2 INFO nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 10.59 seconds to build instance.
Oct 11 04:48:05 np0005481065 nova_compute[260935]: 2025-10-11 08:48:05.991 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:06 np0005481065 nova_compute[260935]: 2025-10-11 08:48:06.527 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:06 np0005481065 nova_compute[260935]: 2025-10-11 08:48:06.528 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:48:06 np0005481065 nova_compute[260935]: 2025-10-11 08:48:06.552 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 04:48:06 np0005481065 nova_compute[260935]: 2025-10-11 08:48:06.552 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:48:06 np0005481065 nova_compute[260935]: 2025-10-11 08:48:06.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:07 np0005481065 nova_compute[260935]: 2025-10-11 08:48:07.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Oct 11 04:48:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 170 op/s
Oct 11 04:48:10 np0005481065 nova_compute[260935]: 2025-10-11 08:48:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.102 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.594 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.596 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.627 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.736 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.737 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.745 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.745 2 INFO nova.compute.claims [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:48:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 170 op/s
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:11 np0005481065 nova_compute[260935]: 2025-10-11 08:48:11.910 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238313076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.425 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.431 2 DEBUG nova.compute.provider_tree [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.450 2 DEBUG nova.scheduler.client.report [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.476 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.477 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.535 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.536 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.562 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.584 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.696 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.698 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.699 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Creating image(s)
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.729 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.760 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.792 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.796 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.861 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.863 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.863 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.864 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.886 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:12 np0005481065 nova_compute[260935]: 2025-10-11 08:48:12.889 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.165 2 DEBUG nova.policy [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.192 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.276 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.378 2 DEBUG nova.objects.instance [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:48:13 np0005481065 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 04:48:13 np0005481065 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000012.scope: Consumed 12.737s CPU time.
Oct 11 04:48:13 np0005481065 systemd-machined[215705]: Machine qemu-19-instance-00000012 terminated.
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.405 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.405 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Ensure instance console log exists: /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.406 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.406 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.407 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.957 2 DEBUG nova.compute.manager [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.957 2 DEBUG nova.compute.manager [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing instance network info cache due to event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.958 2 DEBUG oslo_concurrency.lockutils [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.958 2 DEBUG oslo_concurrency.lockutils [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:48:13 np0005481065 nova_compute[260935]: 2025-10-11 08:48:13.959 2 DEBUG nova.network.neutron [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.122 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance shutdown successfully after 13 seconds.
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.131 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.137 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 04:48:14 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:14Z|00067|binding|INFO|Releasing lport 728422f2-62be-41c7-90af-4ff751731213 from this chassis (sb_readonly=0)
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.569 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting instance files /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.571 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deletion of /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del complete
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.755 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.756 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating image(s)
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.797 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.835 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.868 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.873 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.909 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully created port: db31f1b4-b009-40dc-a028-b72fe0b1eb45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.967 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.968 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.969 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:14 np0005481065 nova_compute[260935]: 2025-10-11 08:48:14.969 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.002 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.007 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:15.180 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:15.182 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:15.183 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.338 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.428 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] resizing rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.538 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.539 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ensure instance console log exists: /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.540 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.540 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.540 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.542 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.550 2 WARNING nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.602 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.603 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.608 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.608 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.609 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.609 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.612 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.612 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.646 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.801 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: db31f1b4-b009-40dc-a028-b72fe0b1eb45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.920 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.921 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.922 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.978 2 DEBUG nova.network.neutron [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updated VIF entry in instance network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:48:15 np0005481065 nova_compute[260935]: 2025-10-11 08:48:15.980 2 DEBUG nova.network.neutron [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.000 2 DEBUG oslo_concurrency.lockutils [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:48:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245679798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.146 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.179 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.186 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.230 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.244 2 DEBUG nova.compute.manager [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.245 2 DEBUG nova.compute.manager [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.245 2 DEBUG oslo_concurrency.lockutils [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.462 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172481.4613087, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.463 2 INFO nova.compute.manager [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Stopped (Lifecycle Event)
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.490 2 DEBUG nova.compute.manager [None req-28cbac28-a941-4502-836a-f22c1c7ae50b - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:48:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078272704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.767 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.772 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <uuid>66086b61-46ca-4a1b-a9f0-692678bcbf7a</uuid>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <name>instance-00000012</name>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdmin275Test-server-132893026</nova:name>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:48:15</nova:creationTime>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:user uuid="7cfe9716527d49f18102a38c7480e208">tempest-ServersAdmin275Test-1935053767-project-member</nova:user>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <nova:project uuid="7fdd898b69404913a643940b3869140b">tempest-ServersAdmin275Test-1935053767</nova:project>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <entry name="serial">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <entry name="uuid">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log" append="off"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:48:16 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:48:16 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:48:16 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:48:16 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.849 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.850 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.851 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Using config drive
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.881 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.926 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:48:16 np0005481065 nova_compute[260935]: 2025-10-11 08:48:16.961 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'keypairs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.180 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating config drive at /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.191 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_ex2f3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.350 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_ex2f3x" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.389 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.397 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.598 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.616 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.616 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting local config drive /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config because it was imported into RBD.
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.623 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.624 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance network_info: |[{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.631 2 DEBUG oslo_concurrency.lockutils [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.633 2 DEBUG nova.network.neutron [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.640 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start _get_guest_xml network_info=[{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.654 2 WARNING nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.695 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.697 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.705 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.706 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.707 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.707 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.708 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.709 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.710 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.711 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.711 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.712 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.713 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.713 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.714 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.714 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:48:17 np0005481065 nova_compute[260935]: 2025-10-11 08:48:17.720 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:17 np0005481065 systemd-machined[215705]: New machine qemu-21-instance-00000012.
Oct 11 04:48:17 np0005481065 systemd[1]: Started Virtual Machine qemu-21-instance-00000012.
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:48:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 271 op/s
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:48:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 25572a8e-09a8-4b46-a0e0-cbc8b0a576f7 does not exist
Oct 11 04:48:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2a5bbb53-ee5b-4096-a14d-a1e5d92de080 does not exist
Oct 11 04:48:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bfb84436-79f9-4ace-aac8-5a5aa55729f9 does not exist
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:48:17 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:17Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:95:81 10.100.0.4
Oct 11 04:48:17 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:17Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:95:81 10.100.0.4
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:48:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:48:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374156802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.215 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.249 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.255 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.617984873 +0000 UTC m=+0.074781669 container create 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:48:18 np0005481065 systemd[1]: Started libpod-conmon-8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc.scope.
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.586214735 +0000 UTC m=+0.043011591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458321110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.730118968 +0000 UTC m=+0.186915824 container init 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.739 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.745 2 DEBUG nova.virt.libvirt.vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.746 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.746408554 +0000 UTC m=+0.203205360 container start 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.747 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.750451469 +0000 UTC m=+0.207248345 container attach 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:48:18 np0005481065 busy_cerf[289686]: 167 167
Oct 11 04:48:18 np0005481065 systemd[1]: libpod-8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc.scope: Deactivated successfully.
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.753444695 +0000 UTC m=+0.210241501 container died 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.750 2 DEBUG nova.objects.instance [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.778 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <name>instance-00000014</name>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:48:17</nova:creationTime>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <entry name="serial">057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <entry name="uuid">057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:97:65:f3"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <target dev="tapdb31f1b4-b0"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log" append="off"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:48:18 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:48:18 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:48:18 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:48:18 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.781 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Preparing to wait for external event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.782 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.782 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.782 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.783 2 DEBUG nova.virt.libvirt.vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.784 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.785 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.785 2 DEBUG os_vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-22c837d62ae85d52e23cc2a4e38ae48c4ec09aca6d56c5dc27694159d24c0833-merged.mount: Deactivated successfully.
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb31f1b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb31f1b4-b0, col_values=(('external_ids', {'iface-id': 'db31f1b4-b009-40dc-a028-b72fe0b1eb45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:65:f3', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:18 np0005481065 NetworkManager[44960]: <info>  [1760172498.7994] manager: (tapdb31f1b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:48:18 np0005481065 podman[289651]: 2025-10-11 08:48:18.806751859 +0000 UTC m=+0.263548635 container remove 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.809 2 INFO os_vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0')#033[00m
Oct 11 04:48:18 np0005481065 systemd[1]: libpod-conmon-8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc.scope: Deactivated successfully.
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.888 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.888 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.889 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.889 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Using config drive#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.914 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.949 2 DEBUG nova.network.neutron [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.949 2 DEBUG nova.network.neutron [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:18 np0005481065 nova_compute[260935]: 2025-10-11 08:48:18.966 2 DEBUG oslo_concurrency.lockutils [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:19 np0005481065 podman[289757]: 2025-10-11 08:48:19.052443502 +0000 UTC m=+0.077429845 container create 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:48:19 np0005481065 podman[289757]: 2025-10-11 08:48:19.020712595 +0000 UTC m=+0.045698978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:19 np0005481065 systemd[1]: Started libpod-conmon-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope.
Oct 11 04:48:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:19 np0005481065 podman[289757]: 2025-10-11 08:48:19.177387123 +0000 UTC m=+0.202373516 container init 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 11 04:48:19 np0005481065 podman[289757]: 2025-10-11 08:48:19.199047912 +0000 UTC m=+0.224034245 container start 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:19 np0005481065 podman[289757]: 2025-10-11 08:48:19.203713326 +0000 UTC m=+0.228699659 container attach 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.328 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 66086b61-46ca-4a1b-a9f0-692678bcbf7a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.331 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172499.3277628, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.331 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.334 2 DEBUG nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.335 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.340 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance spawned successfully.#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.340 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.345 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Creating config drive at /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.353 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4fy_tou execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.390 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.402 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.417 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.418 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.419 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.419 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.420 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.420 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.444 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.444 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172499.330621, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.445 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.497 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.502 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.506 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4fy_tou" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.548 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.553 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.597 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.600 2 DEBUG nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.678 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.679 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.679 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.753 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.759 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.760 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deleting local config drive /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config because it was imported into RBD.#033[00m
Oct 11 04:48:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 666 KiB/s rd, 7.7 MiB/s wr, 197 op/s
Oct 11 04:48:19 np0005481065 kernel: tapdb31f1b4-b0: entered promiscuous mode
Oct 11 04:48:19 np0005481065 NetworkManager[44960]: <info>  [1760172499.8774] manager: (tapdb31f1b4-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 11 04:48:19 np0005481065 systemd-udevd[289717]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:48:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:19Z|00068|binding|INFO|Claiming lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 for this chassis.
Oct 11 04:48:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:19Z|00069|binding|INFO|db31f1b4-b009-40dc-a028-b72fe0b1eb45: Claiming fa:16:3e:97:65:f3 10.100.0.8
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.891 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:65:f3 10.100.0.8'], port_security=['fa:16:3e:97:65:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6953e178-7635-4f97-a5ef-5126f17f4f48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db31f1b4-b009-40dc-a028-b72fe0b1eb45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.892 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db31f1b4-b009-40dc-a028-b72fe0b1eb45 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.894 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:48:19 np0005481065 NetworkManager[44960]: <info>  [1760172499.9024] device (tapdb31f1b4-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:48:19 np0005481065 NetworkManager[44960]: <info>  [1760172499.9039] device (tapdb31f1b4-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:48:19 np0005481065 systemd-machined[215705]: New machine qemu-22-instance-00000014.
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.908 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c67d40-bbe0-485a-b929-f453bc37a5a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.911 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff13396-b1 in ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.916 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff13396-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.917 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74fd5bb4-ae40-4fc3-aa2d-3575ce34cfd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.918 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee76d12-6b01-4150-b54e-e14a2ff2019b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:19 np0005481065 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Oct 11 04:48:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:19Z|00070|binding|INFO|Setting lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 ovn-installed in OVS
Oct 11 04:48:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:19Z|00071|binding|INFO|Setting lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 up in Southbound
Oct 11 04:48:19 np0005481065 nova_compute[260935]: 2025-10-11 08:48:19.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.933 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[daab86c3-a30d-4206-8233-b0a370690985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b30689-f64d-46f8-89ef-93ef1103e6b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f687f7-78cc-4a62-8a11-68c8171401e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 NetworkManager[44960]: <info>  [1760172500.0121] manager: (tapfff13396-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.013 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61848cee-1f2b-4b74-a11c-5ba49435750b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 systemd-udevd[289833]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.053 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c559582c-ba4c-4732-b62d-ba325eb3e1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.058 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[477c2039-e2bb-40fa-a1e7-2c0550131614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 NetworkManager[44960]: <info>  [1760172500.0961] device (tapfff13396-b0): carrier: link connected
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.105 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c83a25a7-0433-4939-a958-231f5f73b875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.125 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[095ac779-794b-42e0-925e-6dd16481b2b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289874, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.147 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9519239-5e3a-4807-bae6-43c3cd330239]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a42d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434479, 'tstamp': 434479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289875, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cae60820-d255-42de-a48c-0c4e14a8ac68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289878, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a85e86e-3f49-46f3-a0da-8eda06cd3eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.329 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b9a150-55e8-4874-8acd-6e76cb69a51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.332 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.333 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.334 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:20 np0005481065 NetworkManager[44960]: <info>  [1760172500.3373] manager: (tapfff13396-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 11 04:48:20 np0005481065 kernel: tapfff13396-b0: entered promiscuous mode
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.344 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:20Z|00072|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.384 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[793e0174-7ed9-44a6-9f3c-b396c4c87ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.387 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:48:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.388 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'env', 'PROCESS_TAG=haproxy-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff13396-b787-4c6e-9112-a1c2ef57b26d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:48:20 np0005481065 youthful_cori[289773]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:48:20 np0005481065 youthful_cori[289773]: --> relative data size: 1.0
Oct 11 04:48:20 np0005481065 youthful_cori[289773]: --> All data devices are unavailable
Oct 11 04:48:20 np0005481065 systemd[1]: libpod-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope: Deactivated successfully.
Oct 11 04:48:20 np0005481065 systemd[1]: libpod-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope: Consumed 1.151s CPU time.
Oct 11 04:48:20 np0005481065 podman[289757]: 2025-10-11 08:48:20.530414879 +0000 UTC m=+1.555401192 container died 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:48:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570-merged.mount: Deactivated successfully.
Oct 11 04:48:20 np0005481065 podman[289757]: 2025-10-11 08:48:20.590805796 +0000 UTC m=+1.615792099 container remove 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 04:48:20 np0005481065 systemd[1]: libpod-conmon-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope: Deactivated successfully.
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.736 2 DEBUG nova.compute.manager [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG oslo_concurrency.lockutils [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG oslo_concurrency.lockutils [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG oslo_concurrency.lockutils [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:20 np0005481065 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG nova.compute.manager [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Processing event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:48:20 np0005481065 podman[290025]: 2025-10-11 08:48:20.795371732 +0000 UTC m=+0.055106036 container create 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:48:20 np0005481065 systemd[1]: Started libpod-conmon-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a.scope.
Oct 11 04:48:20 np0005481065 podman[290025]: 2025-10-11 08:48:20.762577735 +0000 UTC m=+0.022312059 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:48:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdab5ef0ac4476b9090df9e6eb05fca169470796c58d4abb5cf34784f91353f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:20 np0005481065 podman[290025]: 2025-10-11 08:48:20.891438458 +0000 UTC m=+0.151172762 container init 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:48:20 np0005481065 podman[290025]: 2025-10-11 08:48:20.897911843 +0000 UTC m=+0.157646147 container start 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 04:48:20 np0005481065 podman[290068]: 2025-10-11 08:48:20.902797073 +0000 UTC m=+0.056846976 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 04:48:20 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : New worker (290115) forked
Oct 11 04:48:20 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : Loading success.
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.028 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172501.0277345, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.029 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Started (Lifecycle Event)#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.032 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.054 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.058 2 INFO nova.virt.libvirt.driver [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance spawned successfully.#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.058 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.097 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.102 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.103 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.104 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.104 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.104 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.106 2 INFO nova.compute.manager [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Terminating instance#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.108 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "refresh_cache-66086b61-46ca-4a1b-a9f0-692678bcbf7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.108 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquired lock "refresh_cache-66086b61-46ca-4a1b-a9f0-692678bcbf7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.109 2 DEBUG nova.network.neutron [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.117 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.120 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.121 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.121 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.122 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.123 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.123 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.180 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.181 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172501.0315447, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.181 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.210 2 INFO nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 8.51 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.210 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.215 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172501.0392647, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.215 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.242 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.289 2 INFO nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 9.60 seconds to build instance.#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.293 2 DEBUG nova.network.neutron [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.317 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.454171404 +0000 UTC m=+0.096937522 container create 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:21 np0005481065 systemd[1]: Started libpod-conmon-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope.
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.418062622 +0000 UTC m=+0.060828820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.511 2 DEBUG nova.network.neutron [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.529 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Releasing lock "refresh_cache-66086b61-46ca-4a1b-a9f0-692678bcbf7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.530 2 DEBUG nova.compute.manager [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:48:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.557408725 +0000 UTC m=+0.200174913 container init 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.568459361 +0000 UTC m=+0.211225469 container start 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.572328211 +0000 UTC m=+0.215094409 container attach 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:48:21 np0005481065 crazy_gagarin[290179]: 167 167
Oct 11 04:48:21 np0005481065 systemd[1]: libpod-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope: Deactivated successfully.
Oct 11 04:48:21 np0005481065 conmon[290179]: conmon 39759e930e785ce5ba32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope/container/memory.events
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.583203842 +0000 UTC m=+0.225969950 container died 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-33d67d755aa81b2755f8d66a3a48bbad131c244da0720a4006c77618fafbc719-merged.mount: Deactivated successfully.
Oct 11 04:48:21 np0005481065 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 04:48:21 np0005481065 podman[290165]: 2025-10-11 08:48:21.622958929 +0000 UTC m=+0.265725037 container remove 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:21 np0005481065 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Consumed 3.655s CPU time.
Oct 11 04:48:21 np0005481065 systemd-machined[215705]: Machine qemu-21-instance-00000012 terminated.
Oct 11 04:48:21 np0005481065 systemd[1]: libpod-conmon-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope: Deactivated successfully.
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.762 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.#033[00m
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.762 2 DEBUG nova.objects.instance [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'resources' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 666 KiB/s rd, 7.7 MiB/s wr, 197 op/s
Oct 11 04:48:21 np0005481065 nova_compute[260935]: 2025-10-11 08:48:21.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:21 np0005481065 podman[290219]: 2025-10-11 08:48:21.888276333 +0000 UTC m=+0.063857836 container create fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 04:48:21 np0005481065 systemd[1]: Started libpod-conmon-fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5.scope.
Oct 11 04:48:21 np0005481065 podman[290219]: 2025-10-11 08:48:21.865172462 +0000 UTC m=+0.040753975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:22 np0005481065 podman[290219]: 2025-10-11 08:48:22.016897429 +0000 UTC m=+0.192478962 container init fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:48:22 np0005481065 podman[290219]: 2025-10-11 08:48:22.033755831 +0000 UTC m=+0.209337384 container start fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:48:22 np0005481065 podman[290219]: 2025-10-11 08:48:22.037955271 +0000 UTC m=+0.213536804 container attach fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.227 2 INFO nova.virt.libvirt.driver [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting instance files /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.231 2 INFO nova.virt.libvirt.driver [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deletion of /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del complete
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.303 2 INFO nova.compute.manager [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.304 2 DEBUG oslo.service.loopingcall [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.305 2 DEBUG nova.compute.manager [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.305 2 DEBUG nova.network.neutron [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.676 2 DEBUG nova.network.neutron [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.703 2 DEBUG nova.network.neutron [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.722 2 INFO nova.compute.manager [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 0.42 seconds to deallocate network for instance.
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.767 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.769 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]: {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:    "0": [
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:        {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "devices": [
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "/dev/loop3"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            ],
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_name": "ceph_lv0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_size": "21470642176",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "name": "ceph_lv0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "tags": {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cluster_name": "ceph",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.crush_device_class": "",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.encrypted": "0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osd_id": "0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.type": "block",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.vdo": "0"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            },
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "type": "block",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "vg_name": "ceph_vg0"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:        }
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:    ],
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:    "1": [
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:        {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "devices": [
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "/dev/loop4"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            ],
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_name": "ceph_lv1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_size": "21470642176",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "name": "ceph_lv1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "tags": {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cluster_name": "ceph",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.crush_device_class": "",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.encrypted": "0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osd_id": "1",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.type": "block",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.vdo": "0"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            },
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "type": "block",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "vg_name": "ceph_vg1"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:        }
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:    ],
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:    "2": [
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:        {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "devices": [
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "/dev/loop5"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            ],
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_name": "ceph_lv2",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_size": "21470642176",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "name": "ceph_lv2",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "tags": {
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.cluster_name": "ceph",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.crush_device_class": "",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.encrypted": "0",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osd_id": "2",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.type": "block",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:                "ceph.vdo": "0"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            },
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "type": "block",
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:            "vg_name": "ceph_vg2"
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:        }
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]:    ]
Oct 11 04:48:22 np0005481065 ecstatic_hopper[290238]: }
Oct 11 04:48:22 np0005481065 systemd[1]: libpod-fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5.scope: Deactivated successfully.
Oct 11 04:48:22 np0005481065 podman[290219]: 2025-10-11 08:48:22.903719349 +0000 UTC m=+1.079300842 container died fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:48:22 np0005481065 nova_compute[260935]: 2025-10-11 08:48:22.939 2 DEBUG oslo_concurrency.processutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568-merged.mount: Deactivated successfully.
Oct 11 04:48:22 np0005481065 podman[290219]: 2025-10-11 08:48:22.985442155 +0000 UTC m=+1.161023668 container remove fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:48:23 np0005481065 systemd[1]: libpod-conmon-fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5.scope: Deactivated successfully.
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.121 2 DEBUG nova.compute.manager [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.121 2 DEBUG oslo_concurrency.lockutils [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.122 2 DEBUG oslo_concurrency.lockutils [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.122 2 DEBUG oslo_concurrency.lockutils [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.122 2 DEBUG nova.compute.manager [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.123 2 WARNING nova.compute.manager [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 for instance with vm_state active and task_state None.
Oct 11 04:48:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175882175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.414 2 DEBUG oslo_concurrency.processutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.421 2 DEBUG nova.compute.provider_tree [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.445 2 DEBUG nova.scheduler.client.report [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.477 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.518 2 INFO nova.scheduler.client.report [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Deleted allocations for instance 66086b61-46ca-4a1b-a9f0-692678bcbf7a
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.627 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.7842719 +0000 UTC m=+0.051740740 container create 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:48:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 365 op/s
Oct 11 04:48:23 np0005481065 nova_compute[260935]: 2025-10-11 08:48:23.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:23 np0005481065 systemd[1]: Started libpod-conmon-3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165.scope.
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.762166228 +0000 UTC m=+0.029635078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.899501174 +0000 UTC m=+0.166970024 container init 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.909261183 +0000 UTC m=+0.176729993 container start 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:48:23 np0005481065 nervous_mahavira[290441]: 167 167
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.914541584 +0000 UTC m=+0.182010424 container attach 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:48:23 np0005481065 systemd[1]: libpod-3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165.scope: Deactivated successfully.
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.916617953 +0000 UTC m=+0.184086763 container died 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 04:48:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0317ad5535e17dbc794cebef8b886128df931c185acab25f1b978b1f3bdbb752-merged.mount: Deactivated successfully.
Oct 11 04:48:23 np0005481065 podman[290424]: 2025-10-11 08:48:23.968769384 +0000 UTC m=+0.236238224 container remove 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:48:23 np0005481065 systemd[1]: libpod-conmon-3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165.scope: Deactivated successfully.
Oct 11 04:48:24 np0005481065 nova_compute[260935]: 2025-10-11 08:48:24.086 2 DEBUG nova.compute.manager [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:24 np0005481065 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG nova.compute.manager [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:48:24 np0005481065 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG oslo_concurrency.lockutils [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:24 np0005481065 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG oslo_concurrency.lockutils [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:24 np0005481065 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG nova.network.neutron [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:24 np0005481065 podman[290465]: 2025-10-11 08:48:24.247119849 +0000 UTC m=+0.078857504 container create 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:48:24 np0005481065 podman[290465]: 2025-10-11 08:48:24.210456271 +0000 UTC m=+0.042193986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:48:24 np0005481065 systemd[1]: Started libpod-conmon-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope.
Oct 11 04:48:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:24 np0005481065 podman[290465]: 2025-10-11 08:48:24.398196738 +0000 UTC m=+0.229934403 container init 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:48:24 np0005481065 podman[290465]: 2025-10-11 08:48:24.412333002 +0000 UTC m=+0.244070637 container start 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 04:48:24 np0005481065 podman[290465]: 2025-10-11 08:48:24.416491721 +0000 UTC m=+0.248229376 container attach 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.226 2 DEBUG nova.compute.manager [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.227 2 DEBUG nova.compute.manager [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing instance network info cache due to event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.227 2 DEBUG oslo_concurrency.lockutils [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.228 2 DEBUG oslo_concurrency.lockutils [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.228 2 DEBUG nova.network.neutron [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]: {
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "osd_id": 2,
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "type": "bluestore"
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:    },
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "osd_id": 0,
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "type": "bluestore"
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:    },
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "osd_id": 1,
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:        "type": "bluestore"
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]:    }
Oct 11 04:48:25 np0005481065 beautiful_swanson[290481]: }
Oct 11 04:48:25 np0005481065 systemd[1]: libpod-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope: Deactivated successfully.
Oct 11 04:48:25 np0005481065 podman[290465]: 2025-10-11 08:48:25.506297023 +0000 UTC m=+1.338034658 container died 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:48:25 np0005481065 systemd[1]: libpod-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope: Consumed 1.051s CPU time.
Oct 11 04:48:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b-merged.mount: Deactivated successfully.
Oct 11 04:48:25 np0005481065 podman[290465]: 2025-10-11 08:48:25.586431214 +0000 UTC m=+1.418168839 container remove 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:48:25 np0005481065 systemd[1]: libpod-conmon-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope: Deactivated successfully.
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:48:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:48:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:48:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:48:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b663000c-12b7-4739-9575-03434d4f765a does not exist
Oct 11 04:48:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3c5bc784-1efe-488a-9faa-f70e2fffcf6c does not exist
Oct 11 04:48:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.981 2 DEBUG nova.network.neutron [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:25 np0005481065 nova_compute[260935]: 2025-10-11 08:48:25.981 2 DEBUG nova.network.neutron [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:26 np0005481065 nova_compute[260935]: 2025-10-11 08:48:26.006 2 DEBUG oslo_concurrency.lockutils [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:48:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:48:26 np0005481065 nova_compute[260935]: 2025-10-11 08:48:26.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:27 np0005481065 nova_compute[260935]: 2025-10-11 08:48:27.278 2 DEBUG nova.network.neutron [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updated VIF entry in instance network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:27 np0005481065 nova_compute[260935]: 2025-10-11 08:48:27.279 2 DEBUG nova.network.neutron [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:27 np0005481065 nova_compute[260935]: 2025-10-11 08:48:27.317 2 DEBUG oslo_concurrency.lockutils [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:27 np0005481065 podman[290578]: 2025-10-11 08:48:27.79415072 +0000 UTC m=+0.088626823 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:48:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 319 op/s
Oct 11 04:48:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.271 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.272 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.272 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.272 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.273 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.274 2 INFO nova.compute.manager [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Terminating instance#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.276 2 DEBUG nova.compute.manager [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:48:28 np0005481065 kernel: tap128b1135-2e (unregistering): left promiscuous mode
Oct 11 04:48:28 np0005481065 NetworkManager[44960]: <info>  [1760172508.3350] device (tap128b1135-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:28Z|00073|binding|INFO|Releasing lport 128b1135-2e8f-4e78-8e09-e16b082e9225 from this chassis (sb_readonly=0)
Oct 11 04:48:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:28Z|00074|binding|INFO|Setting lport 128b1135-2e8f-4e78-8e09-e16b082e9225 down in Southbound
Oct 11 04:48:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:28Z|00075|binding|INFO|Removing iface tap128b1135-2e ovn-installed in OVS
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.359 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:95:81 10.100.0.4'], port_security=['fa:16:3e:ac:95:81 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f6e6ccd5-d393-4fa3-bf88-491311678dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc753a4e96fc46008b6e6b1fd29b160d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd907eed5-bfee-42df-bbfe-0d6a84057302', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8656b2b4-2d06-42a4-a59e-112920fcccd9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=128b1135-2e8f-4e78-8e09-e16b082e9225) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.361 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 128b1135-2e8f-4e78-8e09-e16b082e9225 in datapath 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b unbound from our chassis#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.362 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.363 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a7f822-b978-4341-ae36-3928d6b4229d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.363 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b namespace which is not needed anymore#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 11 04:48:28 np0005481065 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Consumed 12.639s CPU time.
Oct 11 04:48:28 np0005481065 systemd-machined[215705]: Machine qemu-20-instance-00000013 terminated.
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.522 2 INFO nova.virt.libvirt.driver [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance destroyed successfully.#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.523 2 DEBUG nova.objects.instance [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lazy-loading 'resources' on Instance uuid f6e6ccd5-d393-4fa3-bf88-491311678dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:28 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : haproxy version is 2.8.14-c23fe91
Oct 11 04:48:28 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : path to executable is /usr/sbin/haproxy
Oct 11 04:48:28 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [WARNING]  (288775) : Exiting Master process...
Oct 11 04:48:28 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [WARNING]  (288775) : Exiting Master process...
Oct 11 04:48:28 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [ALERT]    (288775) : Current worker (288777) exited with code 143 (Terminated)
Oct 11 04:48:28 np0005481065 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [WARNING]  (288775) : All workers exited. Exiting... (0)
Oct 11 04:48:28 np0005481065 systemd[1]: libpod-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec.scope: Deactivated successfully.
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.550 2 DEBUG nova.virt.libvirt.vif [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-212756275',id=19,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc753a4e96fc46008b6e6b1fd29b160d',ramdisk_id='',reservation_id='r-wv642fxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_mode
l='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:05Z,user_data=None,user_id='16055681fed745bb89347149995b8486',uuid=f6e6ccd5-d393-4fa3-bf88-491311678dd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.551 2 DEBUG nova.network.os_vif_util [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converting VIF {"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.552 2 DEBUG nova.network.os_vif_util [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.552 2 DEBUG os_vif [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:48:28 np0005481065 podman[290622]: 2025-10-11 08:48:28.552672712 +0000 UTC m=+0.065488713 container died e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap128b1135-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.563 2 DEBUG nova.compute.manager [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-unplugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.564 2 DEBUG oslo_concurrency.lockutils [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.564 2 DEBUG oslo_concurrency.lockutils [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.565 2 DEBUG oslo_concurrency.lockutils [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.565 2 DEBUG nova.compute.manager [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] No waiting events found dispatching network-vif-unplugged-128b1135-2e8f-4e78-8e09-e16b082e9225 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.566 2 DEBUG nova.compute.manager [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-unplugged-128b1135-2e8f-4e78-8e09-e16b082e9225 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.572 2 INFO os_vif [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e')#033[00m
Oct 11 04:48:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec-userdata-shm.mount: Deactivated successfully.
Oct 11 04:48:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1697bbf5f67b57c75a7813d378343f8ad0bc5c688674a61d7b6592052ffd17eb-merged.mount: Deactivated successfully.
Oct 11 04:48:28 np0005481065 podman[290622]: 2025-10-11 08:48:28.603554697 +0000 UTC m=+0.116370698 container cleanup e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:48:28 np0005481065 systemd[1]: libpod-conmon-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec.scope: Deactivated successfully.
Oct 11 04:48:28 np0005481065 podman[290674]: 2025-10-11 08:48:28.697373769 +0000 UTC m=+0.057634489 container remove e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.708 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eaab65af-5549-4b77-956f-c87644057478]: (4, ('Sat Oct 11 08:48:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b (e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec)\ne7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec\nSat Oct 11 08:48:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b (e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec)\ne7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.710 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d977d233-fb3d-4a0f-b7db-e71f063c2da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.713 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap678d17d5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 kernel: tap678d17d5-50: left promiscuous mode
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.722 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7047b6ea-e4f5-4602-adf5-f68eeb2d8a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.748 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a452bb22-01b9-4d18-8a18-9f245193a260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7768325c-eace-4999-8565-36f64774e047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90e7329e-c6c8-4cf4-9401-b88743a1fc96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432822, 'reachable_time': 32842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290692, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 systemd[1]: run-netns-ovnmeta\x2d678d17d5\x2d515c\x2d4e7c\x2da42a\x2d5bd4db3dbb7b.mount: Deactivated successfully.
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.779 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:48:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.780 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[af238738-6e07-429e-b060-4c5bf72d7ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.993 2 INFO nova.virt.libvirt.driver [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deleting instance files /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1_del#033[00m
Oct 11 04:48:28 np0005481065 nova_compute[260935]: 2025-10-11 08:48:28.994 2 INFO nova.virt.libvirt.driver [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deletion of /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1_del complete#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.056 2 INFO nova.compute.manager [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.057 2 DEBUG oslo.service.loopingcall [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.057 2 DEBUG nova.compute.manager [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.057 2 DEBUG nova.network.neutron [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:48:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 112 KiB/s wr, 186 op/s
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.876 2 DEBUG nova.network.neutron [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.901 2 INFO nova.compute.manager [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 0.84 seconds to deallocate network for instance.#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.941 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:29 np0005481065 nova_compute[260935]: 2025-10-11 08:48:29.942 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.030 2 DEBUG oslo_concurrency.processutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270255684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.545 2 DEBUG oslo_concurrency.processutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.553 2 DEBUG nova.compute.provider_tree [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.569 2 DEBUG nova.scheduler.client.report [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.594 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.618 2 INFO nova.scheduler.client.report [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Deleted allocations for instance f6e6ccd5-d393-4fa3-bf88-491311678dd1
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.638 2 DEBUG nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.639 2 DEBUG oslo_concurrency.lockutils [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.639 2 DEBUG oslo_concurrency.lockutils [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.639 2 DEBUG oslo_concurrency.lockutils [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.640 2 DEBUG nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] No waiting events found dispatching network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.640 2 WARNING nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received unexpected event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 for instance with vm_state deleted and task_state None.
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.641 2 DEBUG nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-deleted-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:30 np0005481065 nova_compute[260935]: 2025-10-11 08:48:30.697 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.005 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.005 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.024 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.098 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.099 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.107 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.107 2 INFO nova.compute.claims [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.247 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999242782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.761 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.769 2 DEBUG nova.compute.provider_tree [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.788 2 DEBUG nova.scheduler.client.report [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:48:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 112 KiB/s wr, 186 op/s
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.819 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.821 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.876 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.876 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.898 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:48:31 np0005481065 nova_compute[260935]: 2025-10-11 08:48:31.917 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.027 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.029 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.030 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Creating image(s)
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.068 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.117 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.154 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.163 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.241 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.242 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.242 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.243 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.264 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.268 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.291 2 DEBUG nova.policy [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b08b83b77b84cd894e155d2a06682a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d6c7a0d842f4dcb95421a3f47580c49', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.561 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.629 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] resizing rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.729 2 DEBUG nova.objects.instance [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lazy-loading 'migration_context' on Instance uuid e10cd028-76c1-4eb5-be43-f51e4da8abc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.744 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Ensure instance console log exists: /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:48:32 np0005481065 nova_compute[260935]: 2025-10-11 08:48:32.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:33 np0005481065 nova_compute[260935]: 2025-10-11 08:48:33.538 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Successfully created port: b5bae935-7639-4a76-988c-e09d0c6f5fb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:48:33 np0005481065 nova_compute[260935]: 2025-10-11 08:48:33.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:33Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:65:f3 10.100.0.8
Oct 11 04:48:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:33Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:65:f3 10.100.0.8
Oct 11 04:48:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.5 MiB/s wr, 255 op/s
Oct 11 04:48:33 np0005481065 podman[290905]: 2025-10-11 08:48:33.848034599 +0000 UTC m=+0.131060157 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:48:33 np0005481065 podman[290906]: 2025-10-11 08:48:33.867796034 +0000 UTC m=+0.149566056 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.014 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Successfully updated port: b5bae935-7639-4a76-988c-e09d0c6f5fb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.032 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.033 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquired lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.033 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.138 2 DEBUG nova.compute.manager [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.139 2 DEBUG nova.compute.manager [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing instance network info cache due to event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.139 2 DEBUG oslo_concurrency.lockutils [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:48:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:35Z|00076|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.237 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:48:35 np0005481065 nova_compute[260935]: 2025-10-11 08:48:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 643 KiB/s rd, 2.4 MiB/s wr, 87 op/s
Oct 11 04:48:36 np0005481065 nova_compute[260935]: 2025-10-11 08:48:36.760 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172501.758883, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:48:36 np0005481065 nova_compute[260935]: 2025-10-11 08:48:36.761 2 INFO nova.compute.manager [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Stopped (Lifecycle Event)
Oct 11 04:48:36 np0005481065 nova_compute[260935]: 2025-10-11 08:48:36.786 2 DEBUG nova.compute.manager [None req-9bb50622-f646-49a2-9933-b24878935f6a - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:48:36 np0005481065 nova_compute[260935]: 2025-10-11 08:48:36.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:48:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:48:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3818684593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:48:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:48:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3818684593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:48:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 844 KiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 11 04:48:37 np0005481065 nova_compute[260935]: 2025-10-11 08:48:37.984 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.016 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Releasing lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.016 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance network_info: |[{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.017 2 DEBUG oslo_concurrency.lockutils [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.017 2 DEBUG nova.network.neutron [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.022 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start _get_guest_xml network_info=[{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.028 2 WARNING nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.039 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.040 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.043 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.044 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.044 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.044 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.045 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.045 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.046 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.046 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.046 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.047 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.047 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.047 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.048 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.048 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.052 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102150919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.545 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.579 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.584 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:38 np0005481065 nova_compute[260935]: 2025-10-11 08:48:38.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:48:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600308633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.116 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.119 2 DEBUG nova.virt.libvirt.vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-170168590',display_name='tempest-ServersTestJSON-server-170168590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-170168590',id=21,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBLmmDX7+o3xlBXsWFtGAAW6QN89FexPyRikLBBoMkqYEgmzcpeem7mJuwXNPqh7hh6YHBKO8aG3FnT45N5dmtZiE21YMODPbWwRwlsUeKoenY7euJ0iBxGg5aRfD6zgQ==',key_name='tempest-keypair-1869462671',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d6c7a0d842f4dcb95421a3f47580c49',ramdisk_id='',reservation_id='r-ijkk9nxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-381670503',owner_user_name='tempest-ServersTestJSON-381670503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6b08b83b77b84cd894e155d2a06682a4',uuid=e10cd028-76c1-4eb5-be43-f51e4da8abc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.120 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converting VIF {"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.121 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.123 2 DEBUG nova.objects.instance [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lazy-loading 'pci_devices' on Instance uuid e10cd028-76c1-4eb5-be43-f51e4da8abc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.157 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <uuid>e10cd028-76c1-4eb5-be43-f51e4da8abc1</uuid>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <name>instance-00000015</name>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersTestJSON-server-170168590</nova:name>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:48:38</nova:creationTime>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:user uuid="6b08b83b77b84cd894e155d2a06682a4">tempest-ServersTestJSON-381670503-project-member</nova:user>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:project uuid="8d6c7a0d842f4dcb95421a3f47580c49">tempest-ServersTestJSON-381670503</nova:project>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <nova:port uuid="b5bae935-7639-4a76-988c-e09d0c6f5fb1">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <entry name="serial">e10cd028-76c1-4eb5-be43-f51e4da8abc1</entry>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <entry name="uuid">e10cd028-76c1-4eb5-be43-f51e4da8abc1</entry>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ad:83:55"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <target dev="tapb5bae935-76"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/console.log" append="off"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:48:39 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:48:39 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:48:39 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:48:39 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.160 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Preparing to wait for external event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.160 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.161 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.161 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.163 2 DEBUG nova.virt.libvirt.vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-170168590',display_name='tempest-ServersTestJSON-server-170168590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-170168590',id=21,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBLmmDX7+o3xlBXsWFtGAAW6QN89FexPyRikLBBoMkqYEgmzcpeem7mJuwXNPqh7hh6YHBKO8aG3FnT45N5dmtZiE21YMODPbWwRwlsUeKoenY7euJ0iBxGg5aRfD6zgQ==',key_name='tempest-keypair-1869462671',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d6c7a0d842f4dcb95421a3f47580c49',ramdisk_id='',reservation_id='r-ijkk9nxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-381670503',owner_user_name='tempest-ServersTestJSON-381670503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6b08b83b77b84cd894e155d2a06682a4',uuid=e10cd028-76c1-4eb5-be43-f51e4da8abc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.163 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converting VIF {"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.164 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.165 2 DEBUG os_vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5bae935-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5bae935-76, col_values=(('external_ids', {'iface-id': 'b5bae935-7639-4a76-988c-e09d0c6f5fb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:83:55', 'vm-uuid': 'e10cd028-76c1-4eb5-be43-f51e4da8abc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:39 np0005481065 NetworkManager[44960]: <info>  [1760172519.2059] manager: (tapb5bae935-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.217 2 INFO os_vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76')#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.286 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.287 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.288 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] No VIF found with MAC fa:16:3e:ad:83:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.289 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Using config drive#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.324 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.746 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Creating config drive at /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.759 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuwr7xpyo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.914 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuwr7xpyo" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.957 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:39 np0005481065 nova_compute[260935]: 2025-10-11 08:48:39.963 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.165 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.166 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deleting local config drive /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config because it was imported into RBD.#033[00m
Oct 11 04:48:40 np0005481065 kernel: tapb5bae935-76: entered promiscuous mode
Oct 11 04:48:40 np0005481065 NetworkManager[44960]: <info>  [1760172520.2264] manager: (tapb5bae935-76): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 11 04:48:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:40Z|00077|binding|INFO|Claiming lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 for this chassis.
Oct 11 04:48:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:40Z|00078|binding|INFO|b5bae935-7639-4a76-988c-e09d0c6f5fb1: Claiming fa:16:3e:ad:83:55 10.100.0.11
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.285 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:83:55 10.100.0.11'], port_security=['fa:16:3e:ad:83:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e10cd028-76c1-4eb5-be43-f51e4da8abc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d6c7a0d842f4dcb95421a3f47580c49', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7275978f-63aa-48f1-b8b7-cd0104b14473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c8074e3-09b3-477f-801a-39440484f747, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b5bae935-7639-4a76-988c-e09d0c6f5fb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.288 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b5bae935-7639-4a76-988c-e09d0c6f5fb1 in datapath 2880dd81-df95-47c3-aa3d-53c3f2548f15 bound to our chassis#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.292 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2880dd81-df95-47c3-aa3d-53c3f2548f15#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:40Z|00079|binding|INFO|Setting lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 ovn-installed in OVS
Oct 11 04:48:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:40Z|00080|binding|INFO|Setting lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 up in Southbound
Oct 11 04:48:40 np0005481065 systemd-machined[215705]: New machine qemu-23-instance-00000015.
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.312 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b1edf0e3-372c-47f6-97d6-80e69d1539e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.316 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2880dd81-d1 in ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.317 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2880dd81-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fb791b-02b0-46b2-bcbc-3537ba2b6dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33366c6d-7041-4d28-9c2f-c00f58fa4b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.337 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4e9e37-63cf-4763-8724-79265a79931e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 systemd-udevd[291091]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.359 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a495240c-5413-413e-ba28-622c81455edf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 NetworkManager[44960]: <info>  [1760172520.3708] device (tapb5bae935-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:48:40 np0005481065 NetworkManager[44960]: <info>  [1760172520.3726] device (tapb5bae935-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.407 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[11816466-dbe6-462e-aec2-3e228102818b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a36308ce-89be-401a-92b0-4e3a47f65fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 NetworkManager[44960]: <info>  [1760172520.4156] manager: (tap2880dd81-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.468 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[26d5a6df-386f-4d98-87d5-160cd6962862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.472 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f36b1eb-5752-4966-be0c-fb145efeb09f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 NetworkManager[44960]: <info>  [1760172520.5133] device (tap2880dd81-d0): carrier: link connected
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.522 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[46503e41-5d43-45b7-88a3-3d5b37b3678d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.550 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09469b34-83ad-41b9-ba9d-4b62ea82a1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2880dd81-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:9e:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436521, 'reachable_time': 44999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291122, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.551 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.552 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.553 2 DEBUG nova.objects.instance [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.576 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73d03c32-de99-4dee-84d4-7c8490e1b55a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:9ef3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436521, 'tstamp': 436521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291123, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.587 2 DEBUG nova.objects.instance [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.603 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.606 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c9add7ba-218a-4693-b935-1efbcceefa31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2880dd81-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:9e:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436521, 'reachable_time': 44999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291124, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52dd990c-6006-43c6-8dcc-05f42e7ef197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.718 2 DEBUG nova.compute.manager [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.718 2 DEBUG oslo_concurrency.lockutils [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.718 2 DEBUG oslo_concurrency.lockutils [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.719 2 DEBUG oslo_concurrency.lockutils [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.719 2 DEBUG nova.compute.manager [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Processing event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.760 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e49f3567-ea33-48bd-982c-2e03e473e7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.764 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2880dd81-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.765 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.765 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2880dd81-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:40 np0005481065 NetworkManager[44960]: <info>  [1760172520.7680] manager: (tap2880dd81-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:40 np0005481065 kernel: tap2880dd81-d0: entered promiscuous mode
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2880dd81-d0, col_values=(('external_ids', {'iface-id': '0d142335-06a1-47f2-b37a-cf32717864bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:40Z|00081|binding|INFO|Releasing lport 0d142335-06a1-47f2-b37a-cf32717864bc from this chassis (sb_readonly=0)
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.791 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2880dd81-df95-47c3-aa3d-53c3f2548f15.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2880dd81-df95-47c3-aa3d-53c3f2548f15.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.792 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50414a75-9876-438c-b524-f8393db696d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.793 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-2880dd81-df95-47c3-aa3d-53c3f2548f15
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/2880dd81-df95-47c3-aa3d-53c3f2548f15.pid.haproxy
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 2880dd81-df95-47c3-aa3d-53c3f2548f15
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:48:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.794 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'env', 'PROCESS_TAG=haproxy-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2880dd81-df95-47c3-aa3d-53c3f2548f15.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.917 2 DEBUG nova.network.neutron [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updated VIF entry in instance network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.917 2 DEBUG nova.network.neutron [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:40 np0005481065 nova_compute[260935]: 2025-10-11 08:48:40.936 2 DEBUG oslo_concurrency.lockutils [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.180 2 DEBUG nova.policy [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:48:41 np0005481065 podman[291198]: 2025-10-11 08:48:41.248111358 +0000 UTC m=+0.057085853 container create 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.271 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172521.2711942, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.272 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Started (Lifecycle Event)#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.276 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.280 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.285 2 INFO nova.virt.libvirt.driver [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance spawned successfully.#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.286 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:48:41 np0005481065 systemd[1]: Started libpod-conmon-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1.scope.
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.307 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:41 np0005481065 podman[291198]: 2025-10-11 08:48:41.219525861 +0000 UTC m=+0.028500336 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.318 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.325 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.326 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.327 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.328 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.329 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.330 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:48:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:48:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ad88f1ec264b7e39d2fba8c4def1e28c3e05feed22a0602c12c8a9a6991b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.358 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.358 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172521.2713978, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.359 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:48:41 np0005481065 podman[291198]: 2025-10-11 08:48:41.373492012 +0000 UTC m=+0.182466557 container init 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.384 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:41 np0005481065 podman[291198]: 2025-10-11 08:48:41.38567958 +0000 UTC m=+0.194654065 container start 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.390 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172521.2785738, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.390 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.400 2 INFO nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 9.37 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.400 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.412 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.417 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:48:41 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : New worker (291220) forked
Oct 11 04:48:41 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : Loading success.
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.447 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.471 2 INFO nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 10.40 seconds to build instance.#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.490 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:41 np0005481065 nova_compute[260935]: 2025-10-11 08:48:41.902 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully created port: b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:48:42 np0005481065 nova_compute[260935]: 2025-10-11 08:48:42.918 2 DEBUG nova.compute.manager [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:42 np0005481065 nova_compute[260935]: 2025-10-11 08:48:42.919 2 DEBUG oslo_concurrency.lockutils [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:42 np0005481065 nova_compute[260935]: 2025-10-11 08:48:42.920 2 DEBUG oslo_concurrency.lockutils [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:42 np0005481065 nova_compute[260935]: 2025-10-11 08:48:42.920 2 DEBUG oslo_concurrency.lockutils [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:42 np0005481065 nova_compute[260935]: 2025-10-11 08:48:42.920 2 DEBUG nova.compute.manager [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] No waiting events found dispatching network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:42 np0005481065 nova_compute[260935]: 2025-10-11 08:48:42.921 2 WARNING nova.compute.manager [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received unexpected event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.000 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.014 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.015 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.015 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:48:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.215 2 WARNING nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.517 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172508.5161822, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.517 2 INFO nova.compute.manager [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:48:43 np0005481065 nova_compute[260935]: 2025-10-11 08:48:43.549 2 DEBUG nova.compute.manager [None req-cfce80e5-6ef8-4dfe-b05e-708fc1b8b606 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:48:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 11 04:48:44 np0005481065 nova_compute[260935]: 2025-10-11 08:48:44.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:44 np0005481065 nova_compute[260935]: 2025-10-11 08:48:44.585 2 DEBUG nova.compute.manager [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:44 np0005481065 nova_compute[260935]: 2025-10-11 08:48:44.585 2 DEBUG nova.compute.manager [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:48:44 np0005481065 nova_compute[260935]: 2025-10-11 08:48:44.586 2 DEBUG oslo_concurrency.lockutils [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 101 op/s
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.338 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.364 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.365 2 DEBUG oslo_concurrency.lockutils [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.365 2 DEBUG nova.network.neutron [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.369 2 DEBUG nova.virt.libvirt.vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.370 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.371 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.371 2 DEBUG os_vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e1a780-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e1a780-92, col_values=(('external_ids', {'iface-id': 'b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:dc:ce', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:46 np0005481065 NetworkManager[44960]: <info>  [1760172526.3819] manager: (tapb3e1a780-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.403 2 INFO os_vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.404 2 DEBUG nova.virt.libvirt.vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.404 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.405 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.409 2 DEBUG nova.virt.libvirt.guest [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:f2:dc:ce"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <target dev="tapb3e1a780-92"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:48:46 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 04:48:46 np0005481065 kernel: tapb3e1a780-92: entered promiscuous mode
Oct 11 04:48:46 np0005481065 NetworkManager[44960]: <info>  [1760172526.4258] manager: (tapb3e1a780-92): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 11 04:48:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:46Z|00082|binding|INFO|Claiming lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for this chassis.
Oct 11 04:48:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:46Z|00083|binding|INFO|b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e: Claiming fa:16:3e:f2:dc:ce 10.100.0.12
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.439 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:dc:ce 10.100.0.12'], port_security=['fa:16:3e:f2:dc:ce 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.445 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.450 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:48:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:46Z|00084|binding|INFO|Setting lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e ovn-installed in OVS
Oct 11 04:48:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:46Z|00085|binding|INFO|Setting lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e up in Southbound
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 systemd-udevd[291236]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.484 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[848f7625-1660-4948-8e66-a854d4250076]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:46 np0005481065 NetworkManager[44960]: <info>  [1760172526.5141] device (tapb3e1a780-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:48:46 np0005481065 NetworkManager[44960]: <info>  [1760172526.5166] device (tapb3e1a780-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.545 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.547 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.547 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.548 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:f2:dc:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.550 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5938400f-164c-4243-b8ab-4e93f90b1319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.554 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[156b792f-5e24-4a0e-81e1-71896fb93098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.577 2 DEBUG nova.virt.libvirt.guest [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:48:46</nova:creationTime>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:48:46 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 04:48:46 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:48:46 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:48:46 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:48:46 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.604 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[820de533-482e-4973-986d-cde0a597f617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.626 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.630 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ade1f2-e68b-430c-9334-ce6dae11dfce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291243, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.649 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3f683d-96a1-443f-b65c-1011b3e90b2f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291244, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291244, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.651 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.990 2 DEBUG nova.compute.manager [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.991 2 DEBUG oslo_concurrency.lockutils [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.992 2 DEBUG oslo_concurrency.lockutils [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.993 2 DEBUG oslo_concurrency.lockutils [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.993 2 DEBUG nova.compute.manager [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:46 np0005481065 nova_compute[260935]: 2025-10-11 08:48:46.994 2 WARNING nova.compute.manager [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state None.#033[00m
Oct 11 04:48:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:47Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:dc:ce 10.100.0.12
Oct 11 04:48:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:47Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:dc:ce 10.100.0.12
Oct 11 04:48:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 121 op/s
Oct 11 04:48:47 np0005481065 nova_compute[260935]: 2025-10-11 08:48:47.925 2 DEBUG nova.network.neutron [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:47 np0005481065 nova_compute[260935]: 2025-10-11 08:48:47.926 2 DEBUG nova.network.neutron [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:47 np0005481065 nova_compute[260935]: 2025-10-11 08:48:47.947 2 DEBUG oslo_concurrency.lockutils [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:48 np0005481065 nova_compute[260935]: 2025-10-11 08:48:48.034 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:48 np0005481065 nova_compute[260935]: 2025-10-11 08:48:48.035 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:48 np0005481065 nova_compute[260935]: 2025-10-11 08:48:48.036 2 DEBUG nova.objects.instance [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:48 np0005481065 nova_compute[260935]: 2025-10-11 08:48:48.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:48 np0005481065 nova_compute[260935]: 2025-10-11 08:48:48.876 2 DEBUG nova.objects.instance [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:48:48 np0005481065 nova_compute[260935]: 2025-10-11 08:48:48.897 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.228 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.229 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing instance network info cache due to event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.230 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.230 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.231 2 DEBUG nova.network.neutron [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.294 2 DEBUG nova.policy [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:48:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Oct 11 04:48:49 np0005481065 nova_compute[260935]: 2025-10-11 08:48:49.926 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully created port: 723ff3bf-882c-4198-afc8-a31026a4ccfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.642 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: 723ff3bf-882c-4198-afc8-a31026a4ccfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.671 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.672 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.672 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.754 2 DEBUG nova.compute.manager [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.755 2 DEBUG nova.compute.manager [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-723ff3bf-882c-4198-afc8-a31026a4ccfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.756 2 DEBUG oslo_concurrency.lockutils [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.854 2 WARNING nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it#033[00m
Oct 11 04:48:50 np0005481065 nova_compute[260935]: 2025-10-11 08:48:50.855 2 WARNING nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.306 2 DEBUG nova.network.neutron [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updated VIF entry in instance network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.308 2 DEBUG nova.network.neutron [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.339 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.340 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.341 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.341 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.341 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.342 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.342 2 WARNING nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state None.#033[00m
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:51 np0005481065 podman[291245]: 2025-10-11 08:48:51.789006146 +0000 UTC m=+0.082622973 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:48:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Oct 11 04:48:51 np0005481065 nova_compute[260935]: 2025-10-11 08:48:51.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:52 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:52Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:83:55 10.100.0.11
Oct 11 04:48:52 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:52Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:83:55 10.100.0.11
Oct 11 04:48:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:53 np0005481065 nova_compute[260935]: 2025-10-11 08:48:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 189 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.258 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.292 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.293 2 DEBUG oslo_concurrency.lockutils [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.293 2 DEBUG nova.network.neutron [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port 723ff3bf-882c-4198-afc8-a31026a4ccfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.296 2 DEBUG nova.virt.libvirt.vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.296 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.297 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.297 2 DEBUG os_vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap723ff3bf-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap723ff3bf-88, col_values=(('external_ids', {'iface-id': '723ff3bf-882c-4198-afc8-a31026a4ccfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:fd:25', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:54 np0005481065 NetworkManager[44960]: <info>  [1760172534.3064] manager: (tap723ff3bf-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.336 2 INFO os_vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88')#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.337 2 DEBUG nova.virt.libvirt.vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.338 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.339 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.342 2 DEBUG nova.virt.libvirt.guest [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:90:fd:25"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <target dev="tap723ff3bf-88"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:48:54 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 04:48:54 np0005481065 kernel: tap723ff3bf-88: entered promiscuous mode
Oct 11 04:48:54 np0005481065 NetworkManager[44960]: <info>  [1760172534.3581] manager: (tap723ff3bf-88): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:54Z|00086|binding|INFO|Claiming lport 723ff3bf-882c-4198-afc8-a31026a4ccfc for this chassis.
Oct 11 04:48:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:54Z|00087|binding|INFO|723ff3bf-882c-4198-afc8-a31026a4ccfc: Claiming fa:16:3e:90:fd:25 10.100.0.6
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.372 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:fd:25 10.100.0.6'], port_security=['fa:16:3e:90:fd:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=723ff3bf-882c-4198-afc8-a31026a4ccfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.374 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 723ff3bf-882c-4198-afc8-a31026a4ccfc in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.377 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:48:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:54Z|00088|binding|INFO|Setting lport 723ff3bf-882c-4198-afc8-a31026a4ccfc ovn-installed in OVS
Oct 11 04:48:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:54Z|00089|binding|INFO|Setting lport 723ff3bf-882c-4198-afc8-a31026a4ccfc up in Southbound
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.401 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce68f45d-d6fd-4fee-ad66-5bece935c7e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:54 np0005481065 systemd-udevd[291272]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:48:54 np0005481065 NetworkManager[44960]: <info>  [1760172534.4491] device (tap723ff3bf-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:48:54 np0005481065 NetworkManager[44960]: <info>  [1760172534.4505] device (tap723ff3bf-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.459 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.460 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.460 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.460 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:f2:dc:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.461 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:90:fd:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.465 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb1861a-3fa5-4661-a402-c364094d9002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.468 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fc265eac-e3fb-467b-96bf-48308fd34f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.487 2 DEBUG nova.virt.libvirt.guest [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:48:54</nova:creationTime>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:48:54 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 04:48:54 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:48:54 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:48:54 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:48:54 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:48:54 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.508 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9a22c43a-9626-4821-b458-5c180ec938e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.514 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79ed8fad-ca9f-47eb-82d0-e96d44856871]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291278, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.561 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a03e968-0ad7-4a48-bec8-18994dfef655]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291279, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291279, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.563 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.602 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.603 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:48:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.605 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:54 np0005481065 nova_compute[260935]: 2025-10-11 08:48:54.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:48:54
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', '.rgw.root', 'vms', 'images', 'default.rgw.meta', 'default.rgw.log']
Oct 11 04:48:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:48:55 np0005481065 nova_compute[260935]: 2025-10-11 08:48:55.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 189 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 782 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Oct 11 04:48:55 np0005481065 nova_compute[260935]: 2025-10-11 08:48:55.967 2 DEBUG nova.network.neutron [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port 723ff3bf-882c-4198-afc8-a31026a4ccfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:48:55 np0005481065 nova_compute[260935]: 2025-10-11 08:48:55.968 2 DEBUG nova.network.neutron [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:48:55 np0005481065 nova_compute[260935]: 2025-10-11 08:48:55.990 2 DEBUG oslo_concurrency.lockutils [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:48:56 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:56Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:fd:25 10.100.0.6
Oct 11 04:48:56 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:56Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:fd:25 10.100.0.6
Oct 11 04:48:56 np0005481065 nova_compute[260935]: 2025-10-11 08:48:56.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:57.599 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:48:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:48:57.602 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.708 2 DEBUG nova.compute.manager [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.709 2 DEBUG oslo_concurrency.lockutils [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.709 2 DEBUG oslo_concurrency.lockutils [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.709 2 DEBUG oslo_concurrency.lockutils [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.710 2 DEBUG nova.compute.manager [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:57 np0005481065 nova_compute[260935]: 2025-10-11 08:48:57.710 2 WARNING nova.compute.manager [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with vm_state active and task_state None.#033[00m
Oct 11 04:48:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 911 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 11 04:48:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.594 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.594 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.613 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.697 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.698 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.708 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.708 2 INFO nova.compute.claims [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:48:58 np0005481065 podman[291280]: 2025-10-11 08:48:58.797921423 +0000 UTC m=+0.093685269 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 04:48:58 np0005481065 nova_compute[260935]: 2025-10-11 08:48:58.866 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:48:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815789145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.368 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.379 2 DEBUG nova.compute.provider_tree [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.412 2 DEBUG nova.scheduler.client.report [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.442 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.443 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.508 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.509 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.533 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.551 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.628 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.631 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.632 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating image(s)#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.669 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.707 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.750 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.756 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.797 2 DEBUG nova.policy [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.802 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:48:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 04:48:59 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:59Z|00090|binding|INFO|Releasing lport 0d142335-06a1-47f2-b37a-cf32717864bc from this chassis (sb_readonly=0)
Oct 11 04:48:59 np0005481065 ovn_controller[152945]: 2025-10-11T08:48:59Z|00091|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG nova.compute.manager [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG oslo_concurrency.lockutils [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG oslo_concurrency.lockutils [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG oslo_concurrency.lockutils [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.843 2 DEBUG nova.compute.manager [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.843 2 WARNING nova.compute.manager [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with vm_state active and task_state None.#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.852 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.853 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.853 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.854 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.876 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.879 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:48:59 np0005481065 nova_compute[260935]: 2025-10-11 08:48:59.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.168 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.260 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.299 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-c9724939-cd91-44bb-a86b-72bf93c2a818" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.300 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-c9724939-cd91-44bb-a86b-72bf93c2a818" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.300 2 DEBUG nova.objects.instance [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.381 2 DEBUG nova.objects.instance [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.397 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.398 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ensure instance console log exists: /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.398 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.399 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.399 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.475 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.476 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.484 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Successfully created port: a3944a31-9560-49ae-b2a5-caaf2736993a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.491 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.558 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.559 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.566 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.567 2 INFO nova.compute.claims [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.718 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.765 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:00 np0005481065 nova_compute[260935]: 2025-10-11 08:49:00.766 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196503439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.202 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.210 2 DEBUG nova.compute.provider_tree [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.224 2 DEBUG nova.objects.instance [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.228 2 DEBUG nova.scheduler.client.report [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.236 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.249 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.250 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.295 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.295 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.334 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.358 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.461 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.463 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.464 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Creating image(s)
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.498 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.538 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.572 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.577 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.609 2 DEBUG nova.policy [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.643 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.644 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.644 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.645 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.680 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.685 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6224c79a-8a36-490c-863a-67251512732f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.918 2 DEBUG nova.policy [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:01 np0005481065 nova_compute[260935]: 2025-10-11 08:49:01.946 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6224c79a-8a36-490c-863a-67251512732f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.010 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 6224c79a-8a36-490c-863a-67251512732f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.089 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Successfully updated port: a3944a31-9560-49ae-b2a5-caaf2736993a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.141 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.141 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.142 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.152 2 DEBUG nova.objects.instance [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 6224c79a-8a36-490c-863a-67251512732f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.184 2 DEBUG nova.compute.manager [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-changed-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.185 2 DEBUG nova.compute.manager [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Refreshing instance network info cache due to event network-changed-a3944a31-9560-49ae-b2a5-caaf2736993a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.185 2 DEBUG oslo_concurrency.lockutils [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.188 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.188 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Ensure instance console log exists: /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.189 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.190 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.190 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.334 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.394 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Successfully created port: b9569700-d7dc-40dc-a27c-1f72d3675682 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:49:02 np0005481065 nova_compute[260935]: 2025-10-11 08:49:02.766 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.123 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: c9724939-cd91-44bb-a86b-72bf93c2a818 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.147 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.148 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.148 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:49:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.331 2 WARNING nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.332 2 WARNING nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.334 2 WARNING nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.376 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Successfully updated port: b9569700-d7dc-40dc-a27c-1f72d3675682 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.394 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.394 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.395 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.531 2 DEBUG nova.compute.manager [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-changed-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.531 2 DEBUG nova.compute.manager [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Refreshing instance network info cache due to event network-changed-b9569700-d7dc-40dc-a27c-1f72d3675682. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.532 2 DEBUG oslo_concurrency.lockutils [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.578 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.583 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.603 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.603 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance network_info: |[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.604 2 DEBUG oslo_concurrency.lockutils [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.604 2 DEBUG nova.network.neutron [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Refreshing network info cache for port a3944a31-9560-49ae-b2a5-caaf2736993a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.607 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start _get_guest_xml network_info=[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.612 2 WARNING nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.617 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.618 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.626 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.626 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.627 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.627 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.627 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.629 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.629 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.631 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:49:03 np0005481065 nova_compute[260935]: 2025-10-11 08:49:03.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 293 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 5.7 MiB/s wr, 117 op/s
Oct 11 04:49:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379102154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.127 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.167 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.174 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340845604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.225 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:49:04 np0005481065 podman[291739]: 2025-10-11 08:49:04.458302233 +0000 UTC m=+0.159876041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 11 04:49:04 np0005481065 podman[291740]: 2025-10-11 08:49:04.466515237 +0000 UTC m=+0.162498796 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002211811999413948 of space, bias 1.0, pg target 0.6635435998241844 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:49:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:49:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:04.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.690 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.692 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4108MB free_disk=59.85559844970703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471832843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.720 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.721 2 DEBUG nova.virt.libvirt.vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812
845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:59Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.721 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.722 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.723 2 DEBUG nova.objects.instance [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.747 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updating instance_info_cache with network_info: [{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.755 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <uuid>77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</uuid>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <name>instance-00000016</name>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminTestJSON-server-1988839763</nova:name>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:03</nova:creationTime>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <nova:port uuid="a3944a31-9560-49ae-b2a5-caaf2736993a">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <entry name="serial">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <entry name="uuid">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:bf:d8:0b"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <target dev="tapa3944a31-95"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log" append="off"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:04 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:04 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:04 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:04 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Preparing to wait for external event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.757 2 DEBUG nova.virt.libvirt.vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:59Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.758 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.758 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.759 2 DEBUG os_vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3944a31-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3944a31-95, col_values=(('external_ids', {'iface-id': 'a3944a31-9560-49ae-b2a5-caaf2736993a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:d8:0b', 'vm-uuid': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:04 np0005481065 NetworkManager[44960]: <info>  [1760172544.7681] manager: (tapa3944a31-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.776 2 INFO os_vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.779 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.780 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance network_info: |[{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.783 2 DEBUG oslo_concurrency.lockutils [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.783 2 DEBUG nova.network.neutron [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Refreshing network info cache for port b9569700-d7dc-40dc-a27c-1f72d3675682 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.787 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start _get_guest_xml network_info=[{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.796 2 WARNING nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.807 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.809 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.810 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.810 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance e10cd028-76c1-4eb5-be43-f51e4da8abc1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.810 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.811 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6224c79a-8a36-490c-863a-67251512732f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.811 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.811 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.820 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.821 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.821 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.821 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.822 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.822 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.827 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.866 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.901 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.902 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.902 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:bf:d8:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.903 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Using config drive#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.929 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.939 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.939 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.974 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 04:49:04 np0005481065 nova_compute[260935]: 2025-10-11 08:49:04.997 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.127 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581308956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.312 2 DEBUG nova.network.neutron [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updated VIF entry in instance network info cache for port a3944a31-9560-49ae-b2a5-caaf2736993a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.313 2 DEBUG nova.network.neutron [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.338 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.375 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.381 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.411 2 DEBUG oslo_concurrency.lockutils [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.616 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating config drive at /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.621 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpof15y_f7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215393376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.654 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.662 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.683 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.770 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpof15y_f7" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585438174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.811 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.817 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 293 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.855 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.859 2 DEBUG nova.virt.libvirt.vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-993933628',display_name='tempest-ServersAdminTestJSON-server-993933628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-993933628',id=23,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-64hxjs52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:01Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=6224c79a-8a36-490c-863a-67251512732f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.859 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.861 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.863 2 DEBUG nova.objects.instance [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6224c79a-8a36-490c-863a-67251512732f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.892 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <uuid>6224c79a-8a36-490c-863a-67251512732f</uuid>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <name>instance-00000017</name>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminTestJSON-server-993933628</nova:name>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:04</nova:creationTime>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <nova:port uuid="b9569700-d7dc-40dc-a27c-1f72d3675682">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <entry name="serial">6224c79a-8a36-490c-863a-67251512732f</entry>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <entry name="uuid">6224c79a-8a36-490c-863a-67251512732f</entry>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/6224c79a-8a36-490c-863a-67251512732f_disk">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/6224c79a-8a36-490c-863a-67251512732f_disk.config">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:20:3f:af"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <target dev="tapb9569700-d7"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/console.log" append="off"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:05 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:05 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:05 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:05 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.893 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Preparing to wait for external event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.893 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.894 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.894 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.895 2 DEBUG nova.virt.libvirt.vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-993933628',display_name='tempest-ServersAdminTestJSON-server-993933628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-993933628',id=23,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-64hxjs52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:01Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=6224c79a-8a36-490c-863a-67251512732f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.896 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.897 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.898 2 DEBUG os_vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.900 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9569700-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9569700-d7, col_values=(('external_ids', {'iface-id': 'b9569700-d7dc-40dc-a27c-1f72d3675682', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:3f:af', 'vm-uuid': '6224c79a-8a36-490c-863a-67251512732f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:05 np0005481065 NetworkManager[44960]: <info>  [1760172545.9101] manager: (tapb9569700-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.924 2 INFO os_vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7')#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.988 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.988 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.989 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:20:3f:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:05 np0005481065 nova_compute[260935]: 2025-10-11 08:49:05.989 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Using config drive#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.032 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.049 2 DEBUG nova.compute.manager [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.049 2 DEBUG nova.compute.manager [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-c9724939-cd91-44bb-a86b-72bf93c2a818. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.050 2 DEBUG oslo_concurrency.lockutils [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.051 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.052 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting local config drive /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.1277] manager: (tapa3944a31-95): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct 11 04:49:06 np0005481065 kernel: tapa3944a31-95: entered promiscuous mode
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00092|binding|INFO|Claiming lport a3944a31-9560-49ae-b2a5-caaf2736993a for this chassis.
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00093|binding|INFO|a3944a31-9560-49ae-b2a5-caaf2736993a: Claiming fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.145 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.148 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.152 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.169 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7b89cf53-305c-41de-96b8-19b9eaf77592]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.171 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09ac2cb6-31 in ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.176 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09ac2cb6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.177 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b973b54f-b493-4d82-9c66-e34466396168]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.178 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f65ad05a-d8b7-48d7-9584-563ad156f800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00094|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a ovn-installed in OVS
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00095|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a up in Southbound
Oct 11 04:49:06 np0005481065 systemd-machined[215705]: New machine qemu-24-instance-00000016.
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.194 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b95f9c9a-5dbd-4c83-bf11-ea677ea4a84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 systemd-udevd[291984]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.231 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03533bd3-f21c-42ab-8d34-38b0166133b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.2436] device (tapa3944a31-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.2449] device (tapa3944a31-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.285 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[450dca3f-01d0-4269-9f7c-0d4f69907302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.295 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70530710-ec3d-4f2c-a4b2-a5c0d12a5749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 systemd-udevd[291987]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.2967] manager: (tap09ac2cb6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.351 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6549e75a-49f3-4589-9c1a-e68a78ba084a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.367 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8f285bc8-ec65-47d2-a75e-846575f4c83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.4071] device (tap09ac2cb6-30): carrier: link connected
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.418 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.418 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.419 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.417 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[02d645f9-e525-4942-8a83-9cd340ed89cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.420 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.420 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.423 2 INFO nova.compute.manager [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Terminating instance#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.425 2 DEBUG nova.compute.manager [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[401a32ca-1f67-400f-b79b-067d37d46088]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292017, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 kernel: tapb5bae935-76 (unregistering): left promiscuous mode
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.483 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b642470-b64f-4a31-ac21-879b7be4f6b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:b233'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439110, 'tstamp': 439110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292018, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.4887] device (tapb5bae935-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00096|binding|INFO|Releasing lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 from this chassis (sb_readonly=0)
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00097|binding|INFO|Setting lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 down in Southbound
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00098|binding|INFO|Removing iface tapb5bae935-76 ovn-installed in OVS
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.515 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:83:55 10.100.0.11'], port_security=['fa:16:3e:ad:83:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e10cd028-76c1-4eb5-be43-f51e4da8abc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d6c7a0d842f4dcb95421a3f47580c49', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7275978f-63aa-48f1-b8b7-cd0104b14473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c8074e3-09b3-477f-801a-39440484f747, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b5bae935-7639-4a76-988c-e09d0c6f5fb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.521 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df60ab66-8c36-4892-86ed-f4e4f64150de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292021, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 11 04:49:06 np0005481065 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 13.070s CPU time.
Oct 11 04:49:06 np0005481065 systemd-machined[215705]: Machine qemu-23-instance-00000015 terminated.
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.572 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aec43702-9e95-4137-ad69-d23b80acf9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.661 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4c4a47-8ca6-4a8b-ac7b-21fdd49bd178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.664 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.664 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.665 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:06 np0005481065 NetworkManager[44960]: <info>  [1760172546.6680] manager: (tap09ac2cb6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.674 2 INFO nova.virt.libvirt.driver [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance destroyed successfully.#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.675 2 DEBUG nova.objects.instance [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lazy-loading 'resources' on Instance uuid e10cd028-76c1-4eb5-be43-f51e4da8abc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:06 np0005481065 kernel: tap09ac2cb6-30: entered promiscuous mode
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.705 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:06Z|00099|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.739 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.740 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.750 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[992fe14f-c7d9-478d-b8c3-dbb18959ebda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.752 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.pid.haproxy
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:49:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.753 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'env', 'PROCESS_TAG=haproxy-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.786 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.786 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.787 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.792 2 DEBUG nova.virt.libvirt.vif [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-170168590',display_name='tempest-ServersTestJSON-server-170168590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-170168590',id=21,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBLmmDX7+o3xlBXsWFtGAAW6QN89FexPyRikLBBoMkqYEgmzcpeem7mJuwXNPqh7hh6YHBKO8aG3FnT45N5dmtZiE21YMODPbWwRwlsUeKoenY7euJ0iBxGg5aRfD6zgQ==',key_name='tempest-keypair-1869462671',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d6c7a0d842f4dcb95421a3f47580c49',ramdisk_id='',reservation_id='r-ijkk9nxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-381670503',owner_user_name='tempest-ServersTestJSON-381670503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6b08b83b77b84cd894e155d2a06682a4',uuid=e10cd028-76c1-4eb5-be43-f51e4da8abc1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.793 2 DEBUG nova.network.os_vif_util [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converting VIF {"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.794 2 DEBUG nova.network.os_vif_util [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.794 2 DEBUG os_vif [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5bae935-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.812 2 INFO os_vif [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76')#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.928 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Creating config drive at /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.946 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6axv2dy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.984 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:06 np0005481065 nova_compute[260935]: 2025-10-11 08:49:06.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.027 2 DEBUG nova.network.neutron [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updated VIF entry in instance network info cache for port b9569700-d7dc-40dc-a27c-1f72d3675682. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.028 2 DEBUG nova.network.neutron [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updating instance_info_cache with network_info: [{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.062 2 DEBUG oslo_concurrency.lockutils [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.092 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6axv2dy" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.139 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.144 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config 6224c79a-8a36-490c-863a-67251512732f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:07 np0005481065 podman[292158]: 2025-10-11 08:49:07.253863833 +0000 UTC m=+0.080690298 container create f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.296 2 INFO nova.virt.libvirt.driver [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deleting instance files /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1_del#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.297 2 INFO nova.virt.libvirt.driver [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deletion of /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1_del complete#033[00m
Oct 11 04:49:07 np0005481065 systemd[1]: Started libpod-conmon-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361.scope.
Oct 11 04:49:07 np0005481065 podman[292158]: 2025-10-11 08:49:07.204554993 +0000 UTC m=+0.031381488 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG nova.compute.manager [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG oslo_concurrency.lockutils [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG oslo_concurrency.lockutils [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG oslo_concurrency.lockutils [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG nova.compute.manager [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Processing event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:49:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f2e53c9b14a148f14c0e6dc8556f03c44bc8562cdbc60984e7f6e68a2d640e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.354 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config 6224c79a-8a36-490c-863a-67251512732f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:07 np0005481065 podman[292158]: 2025-10-11 08:49:07.358281427 +0000 UTC m=+0.185107892 container init f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.357 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deleting local config drive /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:07 np0005481065 podman[292158]: 2025-10-11 08:49:07.365926866 +0000 UTC m=+0.192753331 container start f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.380 2 INFO nova.compute.manager [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.380 2 DEBUG oslo.service.loopingcall [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.381 2 DEBUG nova.compute.manager [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.381 2 DEBUG nova.network.neutron [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : New worker (292200) forked
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : Loading success.
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.445 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b5bae935-7639-4a76-988c-e09d0c6f5fb1 in datapath 2880dd81-df95-47c3-aa3d-53c3f2548f15 unbound from our chassis#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.447 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2880dd81-df95-47c3-aa3d-53c3f2548f15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:49:07 np0005481065 kernel: tapb9569700-d7: entered promiscuous mode
Oct 11 04:49:07 np0005481065 systemd-udevd[292011]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:07 np0005481065 NetworkManager[44960]: <info>  [1760172547.4504] manager: (tapb9569700-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 11 04:49:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:07Z|00100|binding|INFO|Claiming lport b9569700-d7dc-40dc-a27c-1f72d3675682 for this chassis.
Oct 11 04:49:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:07Z|00101|binding|INFO|b9569700-d7dc-40dc-a27c-1f72d3675682: Claiming fa:16:3e:20:3f:af 10.100.0.14
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.449 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9d424a47-23e4-4aee-b154-9795afa49979]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.455 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 namespace which is not needed anymore#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 NetworkManager[44960]: <info>  [1760172547.4662] device (tapb9569700-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.467 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:3f:af 10.100.0.14'], port_security=['fa:16:3e:20:3f:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6224c79a-8a36-490c-863a-67251512732f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b9569700-d7dc-40dc-a27c-1f72d3675682) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:07 np0005481065 NetworkManager[44960]: <info>  [1760172547.4683] device (tapb9569700-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:07Z|00102|binding|INFO|Setting lport b9569700-d7dc-40dc-a27c-1f72d3675682 ovn-installed in OVS
Oct 11 04:49:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:07Z|00103|binding|INFO|Setting lport b9569700-d7dc-40dc-a27c-1f72d3675682 up in Southbound
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 systemd-machined[215705]: New machine qemu-25-instance-00000017.
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.510 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172547.509975, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.511 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.517 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:49:07 np0005481065 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.527 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.532 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance spawned successfully.#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.532 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.537 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.542 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.557 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.558 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.560 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.561 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.562 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.562 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.568 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.568 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172547.5102317, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.568 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.604 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.609 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172547.522077, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.609 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.632 2 INFO nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.633 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : haproxy version is 2.8.14-c23fe91
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : path to executable is /usr/sbin/haproxy
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [WARNING]  (291218) : Exiting Master process...
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [WARNING]  (291218) : Exiting Master process...
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [ALERT]    (291218) : Current worker (291220) exited with code 143 (Terminated)
Oct 11 04:49:07 np0005481065 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [WARNING]  (291218) : All workers exited. Exiting... (0)
Oct 11 04:49:07 np0005481065 systemd[1]: libpod-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1.scope: Deactivated successfully.
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.644 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:07 np0005481065 podman[292240]: 2025-10-11 08:49:07.646776664 +0000 UTC m=+0.050432783 container died 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:49:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1-userdata-shm.mount: Deactivated successfully.
Oct 11 04:49:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9a90ad88f1ec264b7e39d2fba8c4def1e28c3e05feed22a0602c12c8a9a6991b-merged.mount: Deactivated successfully.
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.676 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:07 np0005481065 podman[292240]: 2025-10-11 08:49:07.683089542 +0000 UTC m=+0.086745661 container cleanup 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.698 2 INFO nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 9.04 seconds to build instance.#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.718 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:07 np0005481065 systemd[1]: libpod-conmon-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1.scope: Deactivated successfully.
Oct 11 04:49:07 np0005481065 podman[292267]: 2025-10-11 08:49:07.76837709 +0000 UTC m=+0.058609556 container remove 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6b0a6c-01c2-42cb-9c63-4fe5567fc6ae]: (4, ('Sat Oct 11 08:49:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 (371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1)\n371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1\nSat Oct 11 08:49:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 (371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1)\n371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.781 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1820b7-eac1-4bf5-871b-b77f052ad842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.782 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2880dd81-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 kernel: tap2880dd81-d0: left promiscuous mode
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.836 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc23973-7b49-40a9-981e-dc7b10741fe3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[db2a1b6a-8de5-4dc8-aae5-53d6f1ee059d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.879 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ddae32-16f7-4d47-a520-d10e87a0a36c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.895 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[403c8fb8-19ec-48d5-83d1-eace5e864dab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436510, 'reachable_time': 19051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292286, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.899 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:49:07 np0005481065 systemd[1]: run-netns-ovnmeta\x2d2880dd81\x2ddf95\x2d47c3\x2daa3d\x2d53c3f2548f15.mount: Deactivated successfully.
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.899 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6fed803f-74dc-4a36-90b1-3bdba562fd7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b9569700-d7dc-40dc-a27c-1f72d3675682 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.902 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.924 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8016ae52-7887-44b8-889c-1e52467b832d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1c25d438-eb4a-4d6e-bf06-c69416ba144b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.966 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[78c5dccb-758d-4237-894e-85945d99757b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.968 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.989 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.990 2 DEBUG oslo_concurrency.lockutils [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.990 2 DEBUG nova.network.neutron [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port c9724939-cd91-44bb-a86b-72bf93c2a818 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.995 2 DEBUG nova.virt.libvirt.vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.995 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.997 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.997 2 DEBUG os_vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:07 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:07.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b131055-8240-4018-899a-2e598379d74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9724939-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9724939-cd, col_values=(('external_ids', {'iface-id': 'c9724939-cd91-44bb-a86b-72bf93c2a818', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:3e:e2', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 NetworkManager[44960]: <info>  [1760172548.0125] manager: (tapc9724939-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.027 2 INFO os_vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd')#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.029 2 DEBUG nova.virt.libvirt.vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.029 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8ae2f0-9d1d-4330-926c-fb4b262e948e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292293, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.030 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.034 2 DEBUG nova.virt.libvirt.guest [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:2b:3e:e2"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <target dev="tapc9724939-cd"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:49:08 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 04:49:08 np0005481065 NetworkManager[44960]: <info>  [1760172548.0512] manager: (tapc9724939-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 11 04:49:08 np0005481065 kernel: tapc9724939-cd: entered promiscuous mode
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:08Z|00104|binding|INFO|Claiming lport c9724939-cd91-44bb-a86b-72bf93c2a818 for this chassis.
Oct 11 04:49:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:08Z|00105|binding|INFO|c9724939-cd91-44bb-a86b-72bf93c2a818: Claiming fa:16:3e:2b:3e:e2 10.100.0.9
Oct 11 04:49:08 np0005481065 NetworkManager[44960]: <info>  [1760172548.0678] device (tapc9724939-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:08 np0005481065 NetworkManager[44960]: <info>  [1760172548.0699] device (tapc9724939-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.075 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:3e:e2 10.100.0.9'], port_security=['fa:16:3e:2b:3e:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9724939-cd91-44bb-a86b-72bf93c2a818) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.081 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[339b86d2-7cf0-4e95-a634-4589432ef72b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292296, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292296, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.082 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:08Z|00106|binding|INFO|Setting lport c9724939-cd91-44bb-a86b-72bf93c2a818 ovn-installed in OVS
Oct 11 04:49:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:08Z|00107|binding|INFO|Setting lport c9724939-cd91-44bb-a86b-72bf93c2a818 up in Southbound
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.112 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.113 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.113 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.117 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.118 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.118 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.119 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.119 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9724939-cd91-44bb-a86b-72bf93c2a818 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.119 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Processing event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.119 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.120 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.120 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.121 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.121 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] No waiting events found dispatching network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.123 2 WARNING nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received unexpected event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.124 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.153 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f50f95-01c5-4673-abb3-eea628a3367e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:f2:dc:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.176 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:90:fd:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.176 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:2b:3e:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.203 2 DEBUG nova.virt.libvirt.guest [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:49:08</nova:creationTime>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 04:49:08 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:08 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:49:08 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:49:08 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:49:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.210 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[da26b6fe-feb3-4ead-acbb-ec1f6183b6de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.217 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c21ca575-333c-44ec-b0f8-562c1651beee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.235 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-c9724939-cd91-44bb-a86b-72bf93c2a818" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.260 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[815333c6-5e87-44d7-9120-30910f3082ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.284 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96939928-91c6-46c3-9245-45669c38da79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292331, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed59a759-c493-4a99-b2dc-0813c4ba4729]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292340, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292340, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.316 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.320 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.321 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.322 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.322 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.897 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.898 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172548.8981323, 6224c79a-8a36-490c-863a-67251512732f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.899 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.905 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.909 2 INFO nova.virt.libvirt.driver [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance spawned successfully.#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.911 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.942 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.952 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.954 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.955 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.956 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.957 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.959 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:08 np0005481065 nova_compute[260935]: 2025-10-11 08:49:08.968 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.005 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.006 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172548.8982635, 6224c79a-8a36-490c-863a-67251512732f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.007 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.021 2 INFO nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 7.56 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.022 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.031 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.045 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172548.9043639, 6224c79a-8a36-490c-863a-67251512732f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.046 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.072 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.078 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.126 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.141 2 INFO nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 8.60 seconds to build instance.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.163 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.286 2 DEBUG nova.network.neutron [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.449 2 DEBUG nova.network.neutron [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port c9724939-cd91-44bb-a86b-72bf93c2a818. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.451 2 DEBUG nova.network.neutron [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.549 2 INFO nova.compute.manager [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 2.17 seconds to deallocate network for instance.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.565 2 DEBUG oslo_concurrency.lockutils [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.566 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.567 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.567 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:09Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:3e:e2 10.100.0.9
Oct 11 04:49:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:09Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:3e:e2 10.100.0.9
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.619 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.619 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.742 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.743 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.745 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.747 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.747 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.747 2 WARNING nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state active and task_state None.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.748 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-unplugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.748 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.749 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.749 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.750 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] No waiting events found dispatching network-vif-unplugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.750 2 WARNING nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received unexpected event network-vif-unplugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.750 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.751 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.751 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.752 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.752 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] No waiting events found dispatching network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.753 2 WARNING nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received unexpected event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.753 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-deleted-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:09 np0005481065 nova_compute[260935]: 2025-10-11 08:49:09.762 2 DEBUG oslo_concurrency.processutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714218713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.251 2 DEBUG oslo_concurrency.processutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.257 2 DEBUG nova.compute.provider_tree [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.272 2 DEBUG nova.scheduler.client.report [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.291 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.321 2 INFO nova.scheduler.client.report [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Deleted allocations for instance e10cd028-76c1-4eb5-be43-f51e4da8abc1#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.392 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.502 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.503 2 WARNING nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.504 2 WARNING nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:49:10 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.999 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:10.999 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.024 2 DEBUG nova.objects.instance [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.047 2 DEBUG nova.virt.libvirt.vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.047 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.049 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.054 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.059 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.063 2 DEBUG nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tapb3e1a780-92 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.063 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:f2:dc:ce"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <target dev="tapb3e1a780-92"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.070 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.076 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface>not found in domain: <domain type='kvm' id='22'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <name>instance-00000014</name>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:49:08</nova:creationTime>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <resource>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </resource>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk' index='2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config' index='1'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:97:65:f3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tapdb31f1b4-b0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:f2:dc:ce'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tapb3e1a780-92'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:90:fd:25'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tap723ff3bf-88'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:2b:3e:e2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tapc9724939-cd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c662,c935</label>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c662,c935</imagelabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.076 2 INFO nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapb3e1a780-92 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config.
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.077 2 DEBUG nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tapb3e1a780-92 with device alias net1 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.078 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:f2:dc:ce"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <target dev="tapb3e1a780-92"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:49:11 np0005481065 kernel: tapb3e1a780-92 (unregistering): left promiscuous mode
Oct 11 04:49:11 np0005481065 NetworkManager[44960]: <info>  [1760172551.2093] device (tapb3e1a780-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.229 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172551.227427, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:49:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:11Z|00108|binding|INFO|Releasing lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e from this chassis (sb_readonly=0)
Oct 11 04:49:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:11Z|00109|binding|INFO|Setting lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e down in Southbound
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:11Z|00110|binding|INFO|Removing iface tapb3e1a780-92 ovn-installed in OVS
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.268 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:dc:ce 10.100.0.12'], port_security=['fa:16:3e:f2:dc:ce 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.269 2 DEBUG nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tapb3e1a780-92 with device alias net1 for instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.270 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.270 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.272 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.288 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface>not found in domain: <domain type='kvm' id='22'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <name>instance-00000014</name>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:49:08</nova:creationTime>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <resource>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </resource>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk' index='2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config' index='1'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:97:65:f3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tapdb31f1b4-b0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:90:fd:25'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tap723ff3bf-88'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:2b:3e:e2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target dev='tapc9724939-cd'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='net3'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c662,c935</label>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c662,c935</imagelabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.289 2 INFO nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapb3e1a780-92 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the live domain config.#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.290 2 DEBUG nova.virt.libvirt.vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.291 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.292 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.292 2 DEBUG os_vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e1a780-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ca8907-c172-46ca-9a12-bb4f222a8789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.316 2 INFO os_vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.317 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:49:11</nova:creationTime>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 04:49:11 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:11 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:49:11 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.357 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bf95283c-be3e-46c9-9a46-71d0273d26ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.362 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bc77db-c5d1-4262-8a9b-4a9e8112635b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.400 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[72086f91-1de5-4ffb-9a0a-8374f5aca92a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.427 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0feca262-8202-47d7-825a-8d04b21fbe49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292383, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.452 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[603adcbd-f1c6-43f6-9564-8f34020d4620]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292384, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292384, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.454 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.458 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.458 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.459 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.459 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.875 2 DEBUG nova.compute.manager [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.876 2 DEBUG oslo_concurrency.lockutils [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.876 2 DEBUG oslo_concurrency.lockutils [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.877 2 DEBUG oslo_concurrency.lockutils [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.877 2 DEBUG nova.compute.manager [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-unplugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.877 2 WARNING nova.compute.manager [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-unplugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state None.#033[00m
Oct 11 04:49:11 np0005481065 nova_compute[260935]: 2025-10-11 08:49:11.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:12 np0005481065 nova_compute[260935]: 2025-10-11 08:49:12.638 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.526 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6c1453a9-963e-40bd-a179-414e3b276d7f with type ""#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.528 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:3e:e2 10.100.0.9'], port_security=['fa:16:3e:2b:3e:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9724939-cd91-44bb-a86b-72bf93c2a818) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.530 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9724939-cd91-44bb-a86b-72bf93c2a818 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.533 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00111|binding|INFO|Removing iface tapc9724939-cd ovn-installed in OVS
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00112|binding|INFO|Removing lport c9724939-cd91-44bb-a86b-72bf93c2a818 ovn-installed in OVS
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.577 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0929c8-ee44-487f-92ae-8aa3eed7f724]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.626 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed962f2-29bb-4adb-bec3-ce347bdef836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.631 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0fdc4f-5f42-4346-b556-7ecbe039fe35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.653 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.653 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.675 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.680 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa105ec-8521-435f-87dc-8cb830894af0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.710 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb66b7c6-4f5f-4e5e-a8ba-b0e7a2cbe21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292390, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cad31964-8718-4767-b0c5-b465da8f81a3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292391, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292391, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.746 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.752 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.752 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.769 2 INFO nova.compute.manager [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Terminating instance#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.769 2 DEBUG nova.compute.manager [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.788 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.789 2 INFO nova.compute.claims [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.792 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 214 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Oct 11 04:49:13 np0005481065 kernel: tapdb31f1b4-b0 (unregistering): left promiscuous mode
Oct 11 04:49:13 np0005481065 NetworkManager[44960]: <info>  [1760172553.8717] device (tapdb31f1b4-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00113|binding|INFO|Releasing lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 from this chassis (sb_readonly=0)
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00114|binding|INFO|Setting lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 down in Southbound
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00115|binding|INFO|Removing iface tapdb31f1b4-b0 ovn-installed in OVS
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.897 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:65:f3 10.100.0.8'], port_security=['fa:16:3e:97:65:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6953e178-7635-4f97-a5ef-5126f17f4f48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db31f1b4-b009-40dc-a028-b72fe0b1eb45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.898 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db31f1b4-b009-40dc-a028-b72fe0b1eb45 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.899 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 kernel: tap723ff3bf-88 (unregistering): left promiscuous mode
Oct 11 04:49:13 np0005481065 NetworkManager[44960]: <info>  [1760172553.9245] device (tap723ff3bf-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 kernel: tapc9724939-cd (unregistering): left promiscuous mode
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00116|binding|INFO|Releasing lport 723ff3bf-882c-4198-afc8-a31026a4ccfc from this chassis (sb_readonly=0)
Oct 11 04:49:13 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00117|binding|INFO|Setting lport 723ff3bf-882c-4198-afc8-a31026a4ccfc down in Southbound
Oct 11 04:49:13 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:49:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:13Z|00118|binding|INFO|Removing iface tap723ff3bf-88 ovn-installed in OVS
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.941 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:fd:25 10.100.0.6'], port_security=['fa:16:3e:90:fd:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=723ff3bf-882c-4198-afc8-a31026a4ccfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.947 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe13b7c9-f14e-48e0-a398-64f9b1e1d3a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:13 np0005481065 NetworkManager[44960]: <info>  [1760172553.9553] device (tapc9724939-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.973 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.973 2 DEBUG oslo_concurrency.lockutils [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG oslo_concurrency.lockutils [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG oslo_concurrency.lockutils [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.974 2 WARNING nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.974 2 INFO nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Neutron deleted interface b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.975 2 DEBUG nova.network.neutron [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:13 np0005481065 nova_compute[260935]: 2025-10-11 08:49:13.983 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.007 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1849c347-2351-410b-844f-cb04b2cfab9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.012 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1f623058-bd7f-4946-bd0b-5c6eae428b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 11 04:49:14 np0005481065 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 15.609s CPU time.
Oct 11 04:49:14 np0005481065 systemd-machined[215705]: Machine qemu-22-instance-00000014 terminated.
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.055 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffc23b2-ce33-4e85-b899-7c171b8803a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d817de9c-641f-433f-88b9-461318ac62bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292413, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.102 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8af0e64d-e84d-4dbc-b94b-6e760f5abbaa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292414, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292414, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.103 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.117 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.117 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.118 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 723ff3bf-882c-4198-afc8-a31026a4ccfc in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.120 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff13396-b787-4c6e-9112-a1c2ef57b26d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.122 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8048b801-ba33-42b0-b3a0-c2b230fe59c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.122 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace which is not needed anymore#033[00m
Oct 11 04:49:14 np0005481065 NetworkManager[44960]: <info>  [1760172554.2227] manager: (tap723ff3bf-88): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Oct 11 04:49:14 np0005481065 NetworkManager[44960]: <info>  [1760172554.2323] manager: (tapc9724939-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.243 2 DEBUG nova.objects.instance [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.259 2 INFO nova.virt.libvirt.driver [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance destroyed successfully.#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.259 2 DEBUG nova.objects.instance [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:14 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : haproxy version is 2.8.14-c23fe91
Oct 11 04:49:14 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : path to executable is /usr/sbin/haproxy
Oct 11 04:49:14 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [WARNING]  (290111) : Exiting Master process...
Oct 11 04:49:14 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [ALERT]    (290111) : Current worker (290115) exited with code 143 (Terminated)
Oct 11 04:49:14 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [WARNING]  (290111) : All workers exited. Exiting... (0)
Oct 11 04:49:14 np0005481065 systemd[1]: libpod-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a.scope: Deactivated successfully.
Oct 11 04:49:14 np0005481065 podman[292453]: 2025-10-11 08:49:14.306273643 +0000 UTC m=+0.084047012 container died 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.310 2 DEBUG nova.objects.instance [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.313 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.313 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.314 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.314 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb31f1b4-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a-userdata-shm.mount: Deactivated successfully.
Oct 11 04:49:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4bdab5ef0ac4476b9090df9e6eb05fca169470796c58d4abb5cf34784f91353f-merged.mount: Deactivated successfully.
Oct 11 04:49:14 np0005481065 podman[292453]: 2025-10-11 08:49:14.342115988 +0000 UTC m=+0.119889357 container cleanup 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.342 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0')#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.343 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.344 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.344 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.345 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e1a780-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.354 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.355 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.355 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.357 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.358 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.358 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.358 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.359 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap723ff3bf-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.364 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.369 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88')#033[00m
Oct 11 04:49:14 np0005481065 systemd[1]: libpod-conmon-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a.scope: Deactivated successfully.
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.371 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.371 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.372 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.372 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9724939-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.378 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface>not found in domain: <domain type='kvm'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <name>instance-00000014</name>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:48:17</nova:creationTime>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <cpu mode='host-model' check='partial'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:97:65:f3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='tapdb31f1b4-b0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:90:fd:25'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='tap723ff3bf-88'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:2b:3e:e2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='tapc9724939-cd'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <console type='pty'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.378 2 WARNING nova.virt.libvirt.driver [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Detaching interface fa:16:3e:f2:dc:ce failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapb3e1a780-92' not found.
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.379 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.379 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.380 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.380 2 DEBUG os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.381 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd')
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e1a780-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.413 2 INFO os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.414 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:49:14</nova:creationTime>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.419 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.419 2 INFO nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Neutron deleted interface c9724939-cd91-44bb-a86b-72bf93c2a818; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.419 2 DEBUG nova.network.neutron [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:14 np0005481065 podman[292512]: 2025-10-11 08:49:14.428995031 +0000 UTC m=+0.059525122 container remove 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.438 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[35dbcd4e-ee1c-4df9-9cdf-3f740c1d71a1]: (4, ('Sat Oct 11 08:49:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a)\n6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a\nSat Oct 11 08:49:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a)\n6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.441 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.442 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca75c32-ff36-48c2-a00d-e7e09e97ddc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.442 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.445 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:3e:e2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc9724939-cd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.445 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.448 2 DEBUG nova.virt.libvirt.driver [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tapc9724939-cd from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.448 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:2b:3e:e2"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <target dev="tapc9724939-cd"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 kernel: tapfff13396-b0: left promiscuous mode
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.457 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54e01b6a-1693-451f-ad49-976dc402c2f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.466 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:3e:e2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc9724939-cd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.469 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:3e:e2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc9724939-cd"/></interface>not found in domain: <domain type='kvm'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <name>instance-00000014</name>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:14</nova:creationTime>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <cpu mode='host-model' check='partial'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:97:65:f3'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='tapdb31f1b4-b0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:90:fd:25'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target dev='tap723ff3bf-88'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <console type='pty'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.469 2 INFO nova.virt.libvirt.driver [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tapc9724939-cd from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config.#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.470 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.470 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.471 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.471 2 DEBUG os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9724939-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.476 2 INFO os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd')#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.476 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:49:14</nova:creationTime>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 04:49:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:49:14 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:49:14 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33de4aa4-081d-44d7-88fc-7eb9307ff854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165190981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6baad070-e4f8-4ffd-bdd9-bf1536ee5ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.518 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[944c300b-c361-4c33-a99d-d8f42a845acb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434469, 'reachable_time': 15922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292548, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfff13396\x2db787\x2d4c6e\x2d9112\x2da1c2ef57b26d.mount: Deactivated successfully.
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.523 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:49:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.524 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2698b2-4bb5-48af-9355-ee98397f454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.525 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.532 2 DEBUG nova.compute.provider_tree [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.548 2 DEBUG nova.scheduler.client.report [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.574 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.575 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.652 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.653 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.672 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.692 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.786 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.788 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.789 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Creating image(s)#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.818 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.845 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.872 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.876 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.915 2 DEBUG nova.compute.manager [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.916 2 DEBUG oslo_concurrency.lockutils [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.916 2 DEBUG oslo_concurrency.lockutils [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.917 2 DEBUG oslo_concurrency.lockutils [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.917 2 DEBUG nova.compute.manager [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-unplugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.917 2 DEBUG nova.compute.manager [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.925 2 INFO nova.virt.libvirt.driver [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deleting instance files /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7_del#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.926 2 INFO nova.virt.libvirt.driver [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deletion of /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7_del complete#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.931 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.961 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.972 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.973 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.973 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:14 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.974 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:14.999 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.004 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3e20035-c079-4ad0-a085-2086be520d1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.045 2 DEBUG nova.policy [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.057 2 INFO nova.compute.manager [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 1.29 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.057 2 DEBUG oslo.service.loopingcall [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.058 2 DEBUG nova.compute.manager [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.058 2 DEBUG nova.network.neutron [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:49:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:15.182 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:15.184 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:15.185 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.366 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3e20035-c079-4ad0-a085-2086be520d1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.455 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.565 2 DEBUG nova.objects.instance [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid b3e20035-c079-4ad0-a085-2086be520d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.636 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.637 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Ensure instance console log exists: /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.637 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.638 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:15 np0005481065 nova_compute[260935]: 2025-10-11 08:49:15.638 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 214 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 176 op/s
Oct 11 04:49:16 np0005481065 nova_compute[260935]: 2025-10-11 08:49:16.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.048 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.049 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.049 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.050 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.050 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.050 2 WARNING nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.051 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.051 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.051 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.052 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.052 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-unplugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.052 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.382 2 INFO nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 11 04:49:17 np0005481065 nova_compute[260935]: 2025-10-11 08:49:17.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 231 op/s
Oct 11 04:49:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.182 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Successfully created port: 3a7614a2-ff1f-4015-a387-8b15256f61b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.244 2 INFO nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Port c9724939-cd91-44bb-a86b-72bf93c2a818 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.245 2 DEBUG nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.330 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.464 2 DEBUG nova.compute.manager [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.465 2 INFO nova.compute.manager [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Neutron deleted interface db31f1b4-b009-40dc-a028-b72fe0b1eb45; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.465 2 DEBUG nova.network.neutron [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.473 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 8.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:19 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.655 2 DEBUG nova.compute.manager [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.656 2 DEBUG oslo_concurrency.lockutils [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.657 2 DEBUG oslo_concurrency.lockutils [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.657 2 DEBUG oslo_concurrency.lockutils [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.658 2 DEBUG nova.compute.manager [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.659 2 WARNING nova.compute.manager [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.664 2 DEBUG nova.compute.manager [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Detach interface failed, port_id=db31f1b4-b009-40dc-a028-b72fe0b1eb45, reason: Instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 04:49:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Oct 11 04:49:19 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.901 2 DEBUG nova.network.neutron [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:19 np0005481065 nova_compute[260935]: 2025-10-11 08:49:19.952 2 INFO nova.compute.manager [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 4.89 seconds to deallocate network for instance.#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.137 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.138 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.231 2 DEBUG oslo_concurrency.processutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305414503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.706 2 DEBUG oslo_concurrency.processutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.716 2 DEBUG nova.compute.provider_tree [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.738 2 DEBUG nova.scheduler.client.report [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.763 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.791 2 INFO nova.scheduler.client.report [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7#033[00m
Oct 11 04:49:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:20Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:49:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:20Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:49:20 np0005481065 nova_compute[260935]: 2025-10-11 08:49:20.896 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
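The `oslo_concurrency.lockutils` records above end each critical section with a `"released" ... :: held N.NNNs` line (e.g. the 7.129 s `terminate_instance` lock just released). A minimal, hypothetical Python sketch for mining those hold times out of a captured journal — the regex and helper name are assumptions for illustration, not part of nova or oslo:

```python
import re

# Matches oslo.concurrency release records of the form:
#   Lock "<name>" "released" by "<owner>" :: held <secs>s
HELD_RE = re.compile(
    r'Lock "(?P<name>[^"]+)" "released" by "(?P<owner>[^"]+)" :: held (?P<secs>[\d.]+)s'
)

def lock_hold_times(lines):
    """Yield (lock_name, owner, seconds_held) for each lock-release record."""
    for line in lines:
        m = HELD_RE.search(line)
        if m:
            yield m.group("name"), m.group("owner"), float(m.group("secs"))

# Sample taken from the log above (truncated to the matching part).
sample = ('Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" "released" by '
          '"nova.compute.manager.ComputeManager.terminate_instance.<locals>.'
          'do_terminate_instance" :: held 7.129s')
for name, owner, secs in lock_hold_times([sample]):
    print(name, secs)  # → 057de6d9-3f9e-4b23-9019-f62ba6b453e7 7.129
```

Sorting the yielded tuples by the third field is a quick way to spot which locks dominate an operation like the instance teardown traced here.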
Oct 11 04:49:21 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:21Z|00119|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.461 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Successfully updated port: 3a7614a2-ff1f-4015-a387-8b15256f61b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.482 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.482 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.482 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.547 2 DEBUG nova.compute.manager [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.548 2 DEBUG nova.compute.manager [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-changed-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.549 2 DEBUG nova.compute.manager [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Refreshing instance network info cache due to event network-changed-3a7614a2-ff1f-4015-a387-8b15256f61b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.550 2 DEBUG oslo_concurrency.lockutils [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:21 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:21Z|00120|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.660 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172546.659631, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.661 2 INFO nova.compute.manager [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.664 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.696 2 DEBUG nova.compute.manager [None req-297ac004-08fb-4980-9418-651b0e3f703b - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Oct 11 04:49:21 np0005481065 nova_compute[260935]: 2025-10-11 08:49:21.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:22 np0005481065 podman[292742]: 2025-10-11 08:49:22.409972322 +0000 UTC m=+0.089678607 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.518 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updating instance_info_cache with network_info: [{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.545 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.545 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance network_info: |[{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.545 2 DEBUG oslo_concurrency.lockutils [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.546 2 DEBUG nova.network.neutron [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Refreshing network info cache for port 3a7614a2-ff1f-4015-a387-8b15256f61b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.549 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start _get_guest_xml network_info=[{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.555 2 WARNING nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.559 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.561 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.569 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.570 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.570 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.571 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.571 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.571 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.572 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.572 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.572 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.574 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.577 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.770 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.771 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.788 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.852 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.853 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.862 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:49:22 np0005481065 nova_compute[260935]: 2025-10-11 08:49:22.863 2 INFO nova.compute.claims [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.007 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865369347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.072 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.108 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.115 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:23Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:3f:af 10.100.0.14
Oct 11 04:49:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:23Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:3f:af 10.100.0.14
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082289042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.505 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.514 2 DEBUG nova.compute.provider_tree [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.538 2 DEBUG nova.scheduler.client.report [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887860017' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.566 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.568 2 DEBUG nova.virt.libvirt.vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-911926473',display_name='tempest-ServersAdminTestJSON-server-911926473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-911926473',id=24,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-14xzfgy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:14Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=b3e20035-c079-4ad0-a085-2086be520d1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.569 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.570 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.572 2 DEBUG nova.objects.instance [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3e20035-c079-4ad0-a085-2086be520d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.576 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.577 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.616 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <uuid>b3e20035-c079-4ad0-a085-2086be520d1d</uuid>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <name>instance-00000018</name>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminTestJSON-server-911926473</nova:name>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:22</nova:creationTime>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <nova:port uuid="3a7614a2-ff1f-4015-a387-8b15256f61b2">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <entry name="serial">b3e20035-c079-4ad0-a085-2086be520d1d</entry>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <entry name="uuid">b3e20035-c079-4ad0-a085-2086be520d1d</entry>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b3e20035-c079-4ad0-a085-2086be520d1d_disk">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b3e20035-c079-4ad0-a085-2086be520d1d_disk.config">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:e8:51:53"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <target dev="tap3a7614a2-ff"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/console.log" append="off"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:23 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:23 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:23 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:23 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.618 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Preparing to wait for external event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.619 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.619 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.620 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.621 2 DEBUG nova.virt.libvirt.vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-911926473',display_name='tempest-ServersAdminTestJSON-server-911926473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-911926473',id=24,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-14xzfgy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:14Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=b3e20035-c079-4ad0-a085-2086be520d1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.622 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.623 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.624 2 DEBUG os_vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a7614a2-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a7614a2-ff, col_values=(('external_ids', {'iface-id': '3a7614a2-ff1f-4015-a387-8b15256f61b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:51:53', 'vm-uuid': 'b3e20035-c079-4ad0-a085-2086be520d1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:23 np0005481065 NetworkManager[44960]: <info>  [1760172563.6350] manager: (tap3a7614a2-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.642 2 INFO os_vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff')#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.688 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:49:23 np0005481065 nova_compute[260935]: 2025-10-11 08:49:23.689 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:49:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 239 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 290 op/s
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.305 2 DEBUG nova.policy [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.469 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.479 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.480 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.480 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:e8:51:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.481 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Using config drive#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.516 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.526 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.716 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.718 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.718 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Creating image(s)#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.755 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.795 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.832 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.837 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.931 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.933 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.934 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.935 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.965 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:24 np0005481065 nova_compute[260935]: 2025-10-11 08:49:24.970 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.257 2 DEBUG nova.network.neutron [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updated VIF entry in instance network info cache for port 3a7614a2-ff1f-4015-a387-8b15256f61b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.260 2 DEBUG nova.network.neutron [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updating instance_info_cache with network_info: [{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.265 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.270 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Creating config drive at /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.281 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrqinbk2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.420 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.472 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrqinbk2" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.508 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.512 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config b3e20035-c079-4ad0-a085-2086be520d1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.548 2 DEBUG oslo_concurrency.lockutils [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.617 2 DEBUG nova.objects.instance [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.656 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.657 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Ensure instance console log exists: /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.658 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.659 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.659 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.662 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Successfully created port: f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.689 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config b3e20035-c079-4ad0-a085-2086be520d1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.690 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Deleting local config drive /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:25 np0005481065 kernel: tap3a7614a2-ff: entered promiscuous mode
Oct 11 04:49:25 np0005481065 NetworkManager[44960]: <info>  [1760172565.7750] manager: (tap3a7614a2-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:25Z|00121|binding|INFO|Claiming lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 for this chassis.
Oct 11 04:49:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:25Z|00122|binding|INFO|3a7614a2-ff1f-4015-a387-8b15256f61b2: Claiming fa:16:3e:e8:51:53 10.100.0.5
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.817 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:51:53 10.100.0.5'], port_security=['fa:16:3e:e8:51:53 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3e20035-c079-4ad0-a085-2086be520d1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=3a7614a2-ff1f-4015-a387-8b15256f61b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.818 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 3a7614a2-ff1f-4015-a387-8b15256f61b2 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis#033[00m
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.821 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:49:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:25Z|00123|binding|INFO|Setting lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 ovn-installed in OVS
Oct 11 04:49:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:25Z|00124|binding|INFO|Setting lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 up in Southbound
Oct 11 04:49:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 239 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 5.9 MiB/s wr, 156 op/s
Oct 11 04:49:25 np0005481065 nova_compute[260935]: 2025-10-11 08:49:25.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:25 np0005481065 systemd-machined[215705]: New machine qemu-26-instance-00000018.
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.855 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f696e1-77a7-41ce-8712-4bb832b8ae51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:25 np0005481065 systemd[1]: Started Virtual Machine qemu-26-instance-00000018.
Oct 11 04:49:25 np0005481065 systemd-udevd[293085]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:25 np0005481065 NetworkManager[44960]: <info>  [1760172565.8782] device (tap3a7614a2-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:25 np0005481065 NetworkManager[44960]: <info>  [1760172565.8800] device (tap3a7614a2-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.907 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7732a58e-1eff-47b9-aa6b-28d68300670f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.910 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f85935-c679-4c51-b4bb-302944cbaa85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[80542b94-1328-4929-a093-64d898e29243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d707f89-17d4-4500-9937-d0cfe0f3533e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293122, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.016 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38f51894-892b-4fcf-b114-605ee8b79710]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293127, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293127, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.019 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.027 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.702 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172566.702305, b3e20035-c079-4ad0-a085-2086be520d1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.703 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.811 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.817 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172566.703369, b3e20035-c079-4ad0-a085-2086be520d1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:26 np0005481065 nova_compute[260935]: 2025-10-11 08:49:26.818 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:49:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5129121b-ac1b-40c6-bb9e-cc53c3193290 does not exist
Oct 11 04:49:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 537ec94a-fc68-4c76-879c-6731541868c2 does not exist
Oct 11 04:49:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1d9e505a-7008-4b5d-8e9e-362ed33ceefd does not exist
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.113 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.119 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.171 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:49:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:49:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.589 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Successfully updated port: f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG nova.compute.manager [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG oslo_concurrency.lockutils [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG oslo_concurrency.lockutils [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG oslo_concurrency.lockutils [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG nova.compute.manager [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Processing event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.600 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.603 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172567.6031952, b3e20035-c079-4ad0-a085-2086be520d1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.603 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.604 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.607 2 INFO nova.virt.libvirt.driver [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance spawned successfully.#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.607 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.792 2 DEBUG nova.compute.manager [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-changed-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.792 2 DEBUG nova.compute.manager [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Refreshing instance network info cache due to event network-changed-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.793 2 DEBUG oslo_concurrency.lockutils [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.793 2 DEBUG oslo_concurrency.lockutils [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.793 2 DEBUG nova.network.neutron [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Refreshing network info cache for port f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.807 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.827 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.827 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.827 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.828 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.828 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.829 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 588 KiB/s rd, 7.8 MiB/s wr, 213 op/s
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.840 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:27 np0005481065 podman[293413]: 2025-10-11 08:49:27.858493857 +0000 UTC m=+0.099871195 container create c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.879 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:27 np0005481065 podman[293413]: 2025-10-11 08:49:27.815324986 +0000 UTC m=+0.056702324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:27 np0005481065 systemd[1]: Started libpod-conmon-c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d.scope.
Oct 11 04:49:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.933 2 INFO nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Took 13.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:49:27 np0005481065 nova_compute[260935]: 2025-10-11 08:49:27.933 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:27 np0005481065 podman[293413]: 2025-10-11 08:49:27.951504028 +0000 UTC m=+0.192881366 container init c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:27 np0005481065 podman[293413]: 2025-10-11 08:49:27.964431513 +0000 UTC m=+0.205808851 container start c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:49:27 np0005481065 podman[293413]: 2025-10-11 08:49:27.967836049 +0000 UTC m=+0.209213387 container attach c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:49:27 np0005481065 intelligent_goldstine[293429]: 167 167
Oct 11 04:49:27 np0005481065 systemd[1]: libpod-c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d.scope: Deactivated successfully.
Oct 11 04:49:27 np0005481065 podman[293413]: 2025-10-11 08:49:27.975197108 +0000 UTC m=+0.216574446 container died c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:49:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0af76125f860beba6550a7e6c7dbac1e30232e0d474ca52d059bf573fb132055-merged.mount: Deactivated successfully.
Oct 11 04:49:28 np0005481065 podman[293413]: 2025-10-11 08:49:28.015770155 +0000 UTC m=+0.257147503 container remove c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.044 2 DEBUG nova.network.neutron [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:28 np0005481065 systemd[1]: libpod-conmon-c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d.scope: Deactivated successfully.
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.067 2 INFO nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Took 14.35 seconds to build instance.#033[00m
Oct 11 04:49:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.261 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:28 np0005481065 podman[293453]: 2025-10-11 08:49:28.26807683 +0000 UTC m=+0.057560508 container create fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 04:49:28 np0005481065 systemd[1]: Started libpod-conmon-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope.
Oct 11 04:49:28 np0005481065 podman[293453]: 2025-10-11 08:49:28.243867226 +0000 UTC m=+0.033350904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:28 np0005481065 podman[293453]: 2025-10-11 08:49:28.407467252 +0000 UTC m=+0.196950940 container init fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:49:28 np0005481065 podman[293453]: 2025-10-11 08:49:28.414419039 +0000 UTC m=+0.203902677 container start fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 04:49:28 np0005481065 podman[293453]: 2025-10-11 08:49:28.419000019 +0000 UTC m=+0.208483697 container attach fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.532 2 DEBUG nova.network.neutron [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.602 2 DEBUG oslo_concurrency.lockutils [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.604 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.604 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:28 np0005481065 nova_compute[260935]: 2025-10-11 08:49:28.960 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.248 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172554.2447348, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.249 2 INFO nova.compute.manager [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.286 2 DEBUG nova.compute.manager [None req-a142ebc3-9296-4f0f-9f1f-8a49a44e319e - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.701 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Updating instance_info_cache with network_info: [{"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:29 np0005481065 practical_grothendieck[293469]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:49:29 np0005481065 practical_grothendieck[293469]: --> relative data size: 1.0
Oct 11 04:49:29 np0005481065 practical_grothendieck[293469]: --> All data devices are unavailable
Oct 11 04:49:29 np0005481065 systemd[1]: libpod-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope: Deactivated successfully.
Oct 11 04:49:29 np0005481065 podman[293453]: 2025-10-11 08:49:29.765997491 +0000 UTC m=+1.555481139 container died fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 04:49:29 np0005481065 systemd[1]: libpod-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope: Consumed 1.212s CPU time.
Oct 11 04:49:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8-merged.mount: Deactivated successfully.
Oct 11 04:49:29 np0005481065 podman[293453]: 2025-10-11 08:49:29.824806764 +0000 UTC m=+1.614290402 container remove fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:49:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 11 04:49:29 np0005481065 systemd[1]: libpod-conmon-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope: Deactivated successfully.
Oct 11 04:49:29 np0005481065 podman[293497]: 2025-10-11 08:49:29.850647275 +0000 UTC m=+0.132352234 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.969 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.969 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance network_info: |[{"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.973 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start _get_guest_xml network_info=[{"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.982 2 WARNING nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.990 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.991 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.995 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.996 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.996 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.996 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.997 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.997 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.998 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.998 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.998 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.999 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:29 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.999 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:29.999 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.000 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.000 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.004 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381446818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.522 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.561 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:30 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.568 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.676447469 +0000 UTC m=+0.058083654 container create 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:49:30 np0005481065 systemd[1]: Started libpod-conmon-364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c.scope.
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.647874381 +0000 UTC m=+0.029510616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.78397635 +0000 UTC m=+0.165612555 container init 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.795273089 +0000 UTC m=+0.176909264 container start 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.800477457 +0000 UTC m=+0.182113632 container attach 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:49:30 np0005481065 modest_mcclintock[293747]: 167 167
Oct 11 04:49:30 np0005481065 systemd[1]: libpod-364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c.scope: Deactivated successfully.
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.834658263 +0000 UTC m=+0.216294478 container died 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e86952f7d813ea1a86e9b63d880acfb5e92a6c3b4679998143ab14144bcd1244-merged.mount: Deactivated successfully.
Oct 11 04:49:30 np0005481065 podman[293711]: 2025-10-11 08:49:30.890678127 +0000 UTC m=+0.272314272 container remove 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 04:49:30 np0005481065 systemd[1]: libpod-conmon-364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c.scope: Deactivated successfully.
Oct 11 04:49:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3953573349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:30.999 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.001 2 DEBUG nova.virt.libvirt.vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1760568996',display_name='tempest-ImagesTestJSON-server-1760568996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1760568996',id=25,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ipkecogn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:24Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=8e4f771a-b87a-40f9-a12e-b5b4583b96f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.002 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.003 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.004 2 DEBUG nova.objects.instance [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.104 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <uuid>8e4f771a-b87a-40f9-a12e-b5b4583b96f7</uuid>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <name>instance-00000019</name>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesTestJSON-server-1760568996</nova:name>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:29</nova:creationTime>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <nova:port uuid="f1d8b704-c5df-41f7-b46a-04c0e89ab2cd">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <entry name="serial">8e4f771a-b87a-40f9-a12e-b5b4583b96f7</entry>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <entry name="uuid">8e4f771a-b87a-40f9-a12e-b5b4583b96f7</entry>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:3c:11:79"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <target dev="tapf1d8b704-c5"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/console.log" append="off"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:31 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:31 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:31 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:31 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.104 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Preparing to wait for external event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.104 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.105 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.105 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.106 2 DEBUG nova.virt.libvirt.vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1760568996',display_name='tempest-ImagesTestJSON-server-1760568996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1760568996',id=25,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ipkecogn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:24Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=8e4f771a-b87a-40f9-a12e-b5b4583b96f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.106 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.107 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.107 2 DEBUG os_vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1d8b704-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1d8b704-c5, col_values=(('external_ids', {'iface-id': 'f1d8b704-c5df-41f7-b46a-04c0e89ab2cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:11:79', 'vm-uuid': '8e4f771a-b87a-40f9-a12e-b5b4583b96f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:31 np0005481065 NetworkManager[44960]: <info>  [1760172571.1166] manager: (tapf1d8b704-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.127 2 INFO os_vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5')#033[00m
Oct 11 04:49:31 np0005481065 podman[293773]: 2025-10-11 08:49:31.170876652 +0000 UTC m=+0.082209036 container create 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.190 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.191 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.191 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:3c:11:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.192 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Using config drive#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.218 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:31 np0005481065 systemd[1]: Started libpod-conmon-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope.
Oct 11 04:49:31 np0005481065 podman[293773]: 2025-10-11 08:49:31.151472193 +0000 UTC m=+0.062804607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:31 np0005481065 podman[293773]: 2025-10-11 08:49:31.285494103 +0000 UTC m=+0.196826547 container init 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:49:31 np0005481065 podman[293773]: 2025-10-11 08:49:31.297544224 +0000 UTC m=+0.208876598 container start 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:31 np0005481065 podman[293773]: 2025-10-11 08:49:31.300988771 +0000 UTC m=+0.212321225 container attach 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.559 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Creating config drive at /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.574 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo07dn58 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.724 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo07dn58" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.758 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.763 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.987 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:31 np0005481065 nova_compute[260935]: 2025-10-11 08:49:31.988 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deleting local config drive /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 kernel: tapf1d8b704-c5: entered promiscuous mode
Oct 11 04:49:32 np0005481065 NetworkManager[44960]: <info>  [1760172572.0687] manager: (tapf1d8b704-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:32Z|00125|binding|INFO|Claiming lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for this chassis.
Oct 11 04:49:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:32Z|00126|binding|INFO|f1d8b704-c5df-41f7-b46a-04c0e89ab2cd: Claiming fa:16:3e:3c:11:79 10.100.0.7
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.098 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:11:79 10.100.0.7'], port_security=['fa:16:3e:3c:11:79 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8e4f771a-b87a-40f9-a12e-b5b4583b96f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.099 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f1d8b704-c5df-41f7-b46a-04c0e89ab2cd in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.102 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756#033[00m
Oct 11 04:49:32 np0005481065 systemd-udevd[293869]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c570324-8721-4af1-b891-6bded7dc2639]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.122 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.125 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.125 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b627612e-a4e6-4b5e-b3c8-7a90a07f1385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.127 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d17c40b3-d56f-455e-a7fc-ae27147e9f1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 NetworkManager[44960]: <info>  [1760172572.1367] device (tapf1d8b704-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:32 np0005481065 NetworkManager[44960]: <info>  [1760172572.1396] device (tapf1d8b704-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.147 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[52068f6b-dc6a-4fee-bcc7-95d5c1801319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 systemd-machined[215705]: New machine qemu-27-instance-00000019.
Oct 11 04:49:32 np0005481065 systemd[1]: Started Virtual Machine qemu-27-instance-00000019.
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:32Z|00127|binding|INFO|Setting lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd ovn-installed in OVS
Oct 11 04:49:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:32Z|00128|binding|INFO|Setting lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd up in Southbound
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf96c7c2-5ea0-4d58-aef9-03f191362ff5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 determined_khorana[293810]: {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:    "0": [
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:        {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "devices": [
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "/dev/loop3"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            ],
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_name": "ceph_lv0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_size": "21470642176",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "name": "ceph_lv0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "tags": {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cluster_name": "ceph",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.crush_device_class": "",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.encrypted": "0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osd_id": "0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.type": "block",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.vdo": "0"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            },
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "type": "block",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "vg_name": "ceph_vg0"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:        }
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:    ],
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:    "1": [
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:        {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "devices": [
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "/dev/loop4"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            ],
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_name": "ceph_lv1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_size": "21470642176",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "name": "ceph_lv1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "tags": {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cluster_name": "ceph",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.crush_device_class": "",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.encrypted": "0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osd_id": "1",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.type": "block",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.vdo": "0"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            },
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "type": "block",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "vg_name": "ceph_vg1"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:        }
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:    ],
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:    "2": [
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:        {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "devices": [
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "/dev/loop5"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            ],
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_name": "ceph_lv2",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_size": "21470642176",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "name": "ceph_lv2",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "tags": {
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.cluster_name": "ceph",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.crush_device_class": "",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.encrypted": "0",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osd_id": "2",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.type": "block",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:                "ceph.vdo": "0"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            },
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "type": "block",
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:            "vg_name": "ceph_vg2"
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:        }
Oct 11 04:49:32 np0005481065 determined_khorana[293810]:    ]
Oct 11 04:49:32 np0005481065 determined_khorana[293810]: }
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.228 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[09679949-ebc1-4523-9fbf-3ff21bbbd838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 systemd[1]: libpod-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope: Deactivated successfully.
Oct 11 04:49:32 np0005481065 conmon[293810]: conmon 89ef72c3cfd863bc0d3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope/container/memory.events
Oct 11 04:49:32 np0005481065 podman[293773]: 2025-10-11 08:49:32.237132645 +0000 UTC m=+1.148465049 container died 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:49:32 np0005481065 NetworkManager[44960]: <info>  [1760172572.2548] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12ed2ac5-e482-44a6-a1bb-a58b14d3a289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb-merged.mount: Deactivated successfully.
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.301 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[914a0950-d62d-4a8b-b5a6-b23de715d13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.305 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b0265ff8-55fc-441b-af35-f31bd6c8ffba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 podman[293773]: 2025-10-11 08:49:32.332034848 +0000 UTC m=+1.243367212 container remove 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:49:32 np0005481065 NetworkManager[44960]: <info>  [1760172572.3405] device (tap9bac3530-90): carrier: link connected
Oct 11 04:49:32 np0005481065 systemd[1]: libpod-conmon-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope: Deactivated successfully.
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.349 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b927559a-011d-4200-8a78-851d95df6127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.399 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54189ea8-b023-4fb9-8952-797dcc7006f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441703, 'reachable_time': 33461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293916, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.418 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c385e6-e907-469d-820a-5ee67eb80d2d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441703, 'tstamp': 441703}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293923, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1d9d56-fb05-47e0-8e81-df818fc5813b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441703, 'reachable_time': 33461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293938, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.505 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f08a9003-1e90-4576-90ab-59181f0c7462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.599 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17349e7c-add4-4717-a7a5-78ff69c1133c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.601 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.601 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.602 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 NetworkManager[44960]: <info>  [1760172572.6053] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 11 04:49:32 np0005481065 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.610 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:32Z|00129|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.636 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[074459c9-b272-4e4d-a395-5d5ee3957132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.639 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:49:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.640 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.837 2 DEBUG nova.compute.manager [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.838 2 DEBUG oslo_concurrency.lockutils [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.839 2 DEBUG oslo_concurrency.lockutils [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.840 2 DEBUG oslo_concurrency.lockutils [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.841 2 DEBUG nova.compute.manager [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] No waiting events found dispatching network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:32 np0005481065 nova_compute[260935]: 2025-10-11 08:49:32.841 2 WARNING nova.compute.manager [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received unexpected event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:49:33 np0005481065 podman[294115]: 2025-10-11 08:49:33.166449826 +0000 UTC m=+0.057145267 container create 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:49:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:33 np0005481065 systemd[1]: Started libpod-conmon-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0.scope.
Oct 11 04:49:33 np0005481065 podman[294115]: 2025-10-11 08:49:33.134882063 +0000 UTC m=+0.025577534 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:49:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0985dbac57242fc08a988754091dfee60c3f8b611a4151de8d991e5bdb92e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.266572867 +0000 UTC m=+0.069995270 container create 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:49:33 np0005481065 podman[294115]: 2025-10-11 08:49:33.285985906 +0000 UTC m=+0.176681357 container init 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:49:33 np0005481065 podman[294115]: 2025-10-11 08:49:33.299510579 +0000 UTC m=+0.190206020 container start 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 04:49:33 np0005481065 systemd[1]: Started libpod-conmon-35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961.scope.
Oct 11 04:49:33 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : New worker (294170) forked
Oct 11 04:49:33 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : Loading success.
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.242418604 +0000 UTC m=+0.045841047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.366922255 +0000 UTC m=+0.170344668 container init 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.375954491 +0000 UTC m=+0.179376874 container start 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.380241332 +0000 UTC m=+0.183663745 container attach 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:49:33 np0005481065 crazy_shamir[294166]: 167 167
Oct 11 04:49:33 np0005481065 systemd[1]: libpod-35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961.scope: Deactivated successfully.
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.382559108 +0000 UTC m=+0.185981501 container died 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:49:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-636b7f7837065960dbea48bce148288bea4f19269eaa8118f0f55ade0dfeb25f-merged.mount: Deactivated successfully.
Oct 11 04:49:33 np0005481065 podman[294143]: 2025-10-11 08:49:33.422021684 +0000 UTC m=+0.225444067 container remove 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:33 np0005481065 systemd[1]: libpod-conmon-35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961.scope: Deactivated successfully.
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.505 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172573.5034428, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.548 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.554 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172573.5035553, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.554 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.587 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.593 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:33 np0005481065 podman[294201]: 2025-10-11 08:49:33.651109052 +0000 UTC m=+0.059640117 container create 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:49:33 np0005481065 systemd[1]: Started libpod-conmon-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope.
Oct 11 04:49:33 np0005481065 podman[294201]: 2025-10-11 08:49:33.622803952 +0000 UTC m=+0.031335017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:49:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:33 np0005481065 nova_compute[260935]: 2025-10-11 08:49:33.750 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:33 np0005481065 podman[294201]: 2025-10-11 08:49:33.759737814 +0000 UTC m=+0.168268849 container init 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:49:33 np0005481065 podman[294201]: 2025-10-11 08:49:33.77126264 +0000 UTC m=+0.179793665 container start 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:49:33 np0005481065 podman[294201]: 2025-10-11 08:49:33.77477476 +0000 UTC m=+0.183305885 container attach 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:49:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 228 op/s
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.574 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.575 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.601 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.601 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.602 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.635 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.716 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.717 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.735 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.737 2 INFO nova.compute.claims [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.740 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]: {
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "osd_id": 2,
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "type": "bluestore"
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:    },
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "osd_id": 0,
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "type": "bluestore"
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:    },
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "osd_id": 1,
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:        "type": "bluestore"
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]:    }
Oct 11 04:49:34 np0005481065 interesting_dubinsky[294218]: }
Oct 11 04:49:34 np0005481065 podman[294243]: 2025-10-11 08:49:34.882951699 +0000 UTC m=+0.169078362 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 11 04:49:34 np0005481065 podman[294244]: 2025-10-11 08:49:34.900630509 +0000 UTC m=+0.187029190 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:49:34 np0005481065 systemd[1]: libpod-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope: Deactivated successfully.
Oct 11 04:49:34 np0005481065 podman[294201]: 2025-10-11 08:49:34.916772766 +0000 UTC m=+1.325303861 container died 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 11 04:49:34 np0005481065 systemd[1]: libpod-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope: Consumed 1.112s CPU time.
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.926 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.926 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.926 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Processing event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.928 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.928 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] No waiting events found dispatching network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.928 2 WARNING nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received unexpected event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.929 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.938 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172574.9380574, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.938 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.946 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.950 2 INFO nova.virt.libvirt.driver [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance spawned successfully.#033[00m
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.950 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:49:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3-merged.mount: Deactivated successfully.
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.967 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.978 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.985 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.986 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.986 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.987 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.987 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:34 np0005481065 nova_compute[260935]: 2025-10-11 08:49:34.987 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:34 np0005481065 podman[294201]: 2025-10-11 08:49:34.989713938 +0000 UTC m=+1.398244973 container remove 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:49:35 np0005481065 systemd[1]: libpod-conmon-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope: Deactivated successfully.
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.013 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.039 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:49:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2cfdd73d-4dba-47a8-8909-e5af87e909ff does not exist
Oct 11 04:49:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bc1c4287-4c55-43ed-8c96-a47c215f1dbc does not exist
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.089 2 INFO nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 10.37 seconds to spawn the instance on the hypervisor.
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.089 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.184 2 INFO nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 12.35 seconds to build instance.
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.214 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102901362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.430 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.435 2 DEBUG nova.compute.provider_tree [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.451 2 DEBUG nova.scheduler.client.report [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.493 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.494 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.498 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.505 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.506 2 INFO nova.compute.claims [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.596 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.596 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.742 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.779 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:49:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.838 2 DEBUG nova.policy [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:49:35 np0005481065 nova_compute[260935]: 2025-10-11 08:49:35.947 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.049 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.051 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.052 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Creating image(s)
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.085 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.126 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.153 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.158 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.248 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.249 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.250 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.251 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.277 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.281 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14711e39-46ca-4856-9c19-fa51b869064d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192981870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.442 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.446 2 INFO nova.compute.manager [None req-2be22760-dc02-4c6e-8405-519e4b89fe8c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Pausing
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.447 2 DEBUG nova.objects.instance [None req-2be22760-dc02-4c6e-8405-519e4b89fe8c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'flavor' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.460 2 DEBUG nova.compute.provider_tree [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.568 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14711e39-46ca-4856-9c19-fa51b869064d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.606 2 DEBUG nova.scheduler.client.report [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.649 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172576.61852, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.650 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Paused (Lifecycle Event)
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.654 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.654 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.657 2 DEBUG nova.compute.manager [None req-2be22760-dc02-4c6e-8405-519e4b89fe8c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.663 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.705 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.717 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.718 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.729 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.741 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.794 2 DEBUG nova.objects.instance [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:49:36 np0005481065 nova_compute[260935]: 2025-10-11 08:49:36.967 2 DEBUG nova.policy [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.022 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.023 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Ensure instance console log exists: /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.023 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.024 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.024 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.074 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:49:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:49:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088976799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:49:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:49:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088976799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.530 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.532 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.532 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Creating image(s)#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.570 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.609 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.678 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.683 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.779 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.781 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.782 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.782 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.815 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:37 np0005481065 nova_compute[260935]: 2025-10-11 08:49:37.820 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 90e56ca7-b26f-4f83-908d-75204ecd2533_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 222 op/s
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.104 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 90e56ca7-b26f-4f83-908d-75204ecd2533_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.188 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.369 2 DEBUG nova.objects.instance [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 90e56ca7-b26f-4f83-908d-75204ecd2533 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.567 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.568 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Ensure instance console log exists: /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.569 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.570 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:38 np0005481065 nova_compute[260935]: 2025-10-11 08:49:38.570 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:38 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:38Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:51:53 10.100.0.5
Oct 11 04:49:38 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:38Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:51:53 10.100.0.5
Oct 11 04:49:39 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 04:49:39 np0005481065 nova_compute[260935]: 2025-10-11 08:49:39.693 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Successfully created port: ac842ebf-4fca-4930-a4d1-3e8a6760d441 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 11 04:49:40 np0005481065 nova_compute[260935]: 2025-10-11 08:49:40.042 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Successfully created port: 302c88cf-6eba-4200-adfe-6c23d5e6078d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:40 np0005481065 nova_compute[260935]: 2025-10-11 08:49:40.914 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Successfully updated port: ac842ebf-4fca-4930-a4d1-3e8a6760d441 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:40 np0005481065 nova_compute[260935]: 2025-10-11 08:49:40.990 2 DEBUG nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.039 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.039 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.040 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.195 2 INFO nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] instance snapshotting#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.196 2 WARNING nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.253 2 DEBUG nova.compute.manager [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.254 2 DEBUG nova.compute.manager [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.255 2 DEBUG oslo_concurrency.lockutils [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.377 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.402 2 INFO nova.virt.libvirt.driver [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Beginning live snapshot process#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.591 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Successfully updated port: 302c88cf-6eba-4200-adfe-6c23d5e6078d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.610 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.611 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.611 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.793 2 DEBUG nova.virt.libvirt.imagebackend [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.800 2 DEBUG nova.compute.manager [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-changed-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.801 2 DEBUG nova.compute.manager [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Refreshing instance network info cache due to event network-changed-302c88cf-6eba-4200-adfe-6c23d5e6078d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.802 2 DEBUG oslo_concurrency.lockutils [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 11 04:49:41 np0005481065 nova_compute[260935]: 2025-10-11 08:49:41.960 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:42 np0005481065 nova_compute[260935]: 2025-10-11 08:49:42.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:42 np0005481065 nova_compute[260935]: 2025-10-11 08:49:42.325 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(0344ed4086ef47a5a0808d6c4953c045) on rbd image(8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.202 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.232 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.233 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance network_info: |[{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.239 2 DEBUG oslo_concurrency.lockutils [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.240 2 DEBUG nova.network.neutron [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.246 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start _get_guest_xml network_info=[{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.253 2 WARNING nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.261 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.262 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.274 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.275 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.276 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.277 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.278 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.278 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.279 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.279 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.279 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.280 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.280 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.281 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.281 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.282 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.286 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.387 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk@0344ed4086ef47a5a0808d6c4953c045 to images/3a225b4b-c750-4e15-aa61-55d1b4ccb7d5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.530 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/3a225b4b-c750-4e15-aa61-55d1b4ccb7d5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.656 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.677 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.678 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance network_info: |[{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.680 2 DEBUG oslo_concurrency.lockutils [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.680 2 DEBUG nova.network.neutron [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Refreshing network info cache for port 302c88cf-6eba-4200-adfe-6c23d5e6078d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.686 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start _get_guest_xml network_info=[{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.691 2 WARNING nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.697 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.698 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.705 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.706 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.707 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.707 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.708 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.708 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.709 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.709 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.709 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.710 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.710 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.710 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.711 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.711 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.715 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.815 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(0344ed4086ef47a5a0808d6c4953c045) on rbd image(8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:49:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3267938583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.838 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 418 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 227 op/s
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.860 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:43 np0005481065 nova_compute[260935]: 2025-10-11 08:49:43.864 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2783032966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.166 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.197 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.202 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451476487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.280 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.284 2 DEBUG nova.virt.libvirt.vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.285 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.287 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.289 2 DEBUG nova.objects.instance [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.310 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <uuid>14711e39-46ca-4856-9c19-fa51b869064d</uuid>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <name>instance-0000001a</name>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:43</nova:creationTime>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="serial">14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="uuid">14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/14711e39-46ca-4856-9c19-fa51b869064d_disk">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/14711e39-46ca-4856-9c19-fa51b869064d_disk.config">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:56:95:17"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <target dev="tapac842ebf-4f"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log" append="off"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:44 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:44 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.312 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Preparing to wait for external event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.312 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.313 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.313 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.315 2 DEBUG nova.virt.libvirt.vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.315 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.317 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.317 2 DEBUG os_vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac842ebf-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac842ebf-4f, col_values=(('external_ids', {'iface-id': 'ac842ebf-4fca-4930-a4d1-3e8a6760d441', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:95:17', 'vm-uuid': '14711e39-46ca-4856-9c19-fa51b869064d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:44 np0005481065 NetworkManager[44960]: <info>  [1760172584.3326] manager: (tapac842ebf-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.343 2 INFO os_vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f')#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.373 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(3a225b4b-c750-4e15-aa61-55d1b4ccb7d5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.456 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.456 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.456 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:56:95:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.457 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Using config drive#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.490 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640104767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.669 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.671 2 DEBUG nova.virt.libvirt.vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1529647290',display_name='tempest-ServersAdminTestJSON-server-1529647290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1529647290',id=27,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-j763nkmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812
845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:37Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=90e56ca7-b26f-4f83-908d-75204ecd2533,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.671 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.672 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.673 2 DEBUG nova.objects.instance [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 90e56ca7-b26f-4f83-908d-75204ecd2533 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.834 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <uuid>90e56ca7-b26f-4f83-908d-75204ecd2533</uuid>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <name>instance-0000001b</name>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminTestJSON-server-1529647290</nova:name>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:43</nova:creationTime>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <nova:port uuid="302c88cf-6eba-4200-adfe-6c23d5e6078d">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="serial">90e56ca7-b26f-4f83-908d-75204ecd2533</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="uuid">90e56ca7-b26f-4f83-908d-75204ecd2533</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/90e56ca7-b26f-4f83-908d-75204ecd2533_disk">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:52:ed:97"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <target dev="tap302c88cf-6e"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/console.log" append="off"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:44 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:44 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:44 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.835 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Preparing to wait for external event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.835 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.835 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.836 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.837 2 DEBUG nova.virt.libvirt.vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1529647290',display_name='tempest-ServersAdminTestJSON-server-1529647290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1529647290',id=27,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-j763nkmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:37Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=90e56ca7-b26f-4f83-908d-75204ecd2533,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.837 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.838 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.839 2 DEBUG os_vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.844 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.845 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap302c88cf-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap302c88cf-6e, col_values=(('external_ids', {'iface-id': '302c88cf-6eba-4200-adfe-6c23d5e6078d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:ed:97', 'vm-uuid': '90e56ca7-b26f-4f83-908d-75204ecd2533'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:44 np0005481065 NetworkManager[44960]: <info>  [1760172584.8567] manager: (tap302c88cf-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:44 np0005481065 nova_compute[260935]: 2025-10-11 08:49:44.868 2 INFO os_vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e')#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.065 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Creating config drive at /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.076 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd5cl00kd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.119 2 DEBUG nova.network.neutron [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.120 2 DEBUG nova.network.neutron [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.235 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd5cl00kd" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.271 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.276 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config 14711e39-46ca-4856-9c19-fa51b869064d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 11 04:49:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 11 04:49:45 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.479 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config 14711e39-46ca-4856-9c19-fa51b869064d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.481 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deleting local config drive /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:45 np0005481065 NetworkManager[44960]: <info>  [1760172585.5548] manager: (tapac842ebf-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct 11 04:49:45 np0005481065 kernel: tapac842ebf-4f: entered promiscuous mode
Oct 11 04:49:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:45Z|00130|binding|INFO|Claiming lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 for this chassis.
Oct 11 04:49:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:45Z|00131|binding|INFO|ac842ebf-4fca-4930-a4d1-3e8a6760d441: Claiming fa:16:3e:56:95:17 10.100.0.14
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:45 np0005481065 systemd-machined[215705]: New machine qemu-28-instance-0000001a.
Oct 11 04:49:45 np0005481065 systemd[1]: Started Virtual Machine qemu-28-instance-0000001a.
Oct 11 04:49:45 np0005481065 systemd-udevd[295071]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:45 np0005481065 NetworkManager[44960]: <info>  [1760172585.6940] device (tapac842ebf-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:45 np0005481065 NetworkManager[44960]: <info>  [1760172585.6946] device (tapac842ebf-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:45Z|00132|binding|INFO|Setting lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 ovn-installed in OVS
Oct 11 04:49:45 np0005481065 nova_compute[260935]: 2025-10-11 08:49:45.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 418 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 7.8 MiB/s wr, 187 op/s
Oct 11 04:49:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:46Z|00133|binding|INFO|Setting lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 up in Southbound
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.289 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:95:17 10.100.0.14'], port_security=['fa:16:3e:56:95:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac842ebf-4fca-4930-a4d1-3e8a6760d441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.293 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac842ebf-4fca-4930-a4d1-3e8a6760d441 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.298 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.316 2 DEBUG oslo_concurrency.lockutils [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1525060a-9bb6-46aa-8b71-076d03dbdbed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.321 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff13396-b1 in ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.322 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.323 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.323 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:52:ed:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.324 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Using config drive#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.324 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff13396-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.324 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b513812c-e074-4d9d-8703-d6b73b01b7ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.325 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28d14d15-17fd-4e58-bc77-bcee23c05abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.351 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ae8c9d-9333-4742-a734-0af75df651d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.357 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8adb76d5-ed22-43c0-a35d-d8484443c3d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.406 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4d7889-62f5-4d5a-9831-759c8f888098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c7b98c-55ee-483d-9b5d-725052ee72c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 systemd-udevd[295078]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:46 np0005481065 NetworkManager[44960]: <info>  [1760172586.4168] manager: (tapfff13396-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.470 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf5cb8f-6c1c-4202-8d1f-bafe7f1c230c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.474 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[32d8ee48-3c55-435f-8c91-214716ab29f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 NetworkManager[44960]: <info>  [1760172586.5039] device (tapfff13396-b0): carrier: link connected
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.507 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cf2ef1-2696-44d6-ab77-cc7fed7cab80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.528 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[814df928-f99c-4d01-98f3-2da6b15fc936]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295126, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3655effd-8408-45b7-88dd-0f5d2f93d06d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a42d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443120, 'tstamp': 443120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295127, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.569 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21009625-1246-4d7b-b958-6bf49a5f4167]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295128, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7273a5d4-9c6b-4a8d-99d0-eb4e06ab6cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.716 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a346a0e6-55de-43c2-9ee7-e3e082bb3529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.718 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.719 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.720 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:46 np0005481065 NetworkManager[44960]: <info>  [1760172586.7580] manager: (tapfff13396-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 11 04:49:46 np0005481065 kernel: tapfff13396-b0: entered promiscuous mode
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.764 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:46Z|00134|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:46 np0005481065 nova_compute[260935]: 2025-10-11 08:49:46.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.809 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.811 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[373eb165-1eab-4b07-b6bd-3ee165183b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.813 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:49:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.814 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'env', 'PROCESS_TAG=haproxy-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff13396-b787-4c6e-9112-a1c2ef57b26d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.153 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Creating config drive at /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.164 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjjqozi7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:47 np0005481065 podman[295207]: 2025-10-11 08:49:47.280756329 +0000 UTC m=+0.065749491 container create 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.304 2 DEBUG nova.network.neutron [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updated VIF entry in instance network info cache for port 302c88cf-6eba-4200-adfe-6c23d5e6078d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.305 2 DEBUG nova.network.neutron [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.328 2 INFO nova.virt.libvirt.driver [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Snapshot image upload complete#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.329 2 INFO nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 6.13 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 04:49:47 np0005481065 podman[295207]: 2025-10-11 08:49:47.242759884 +0000 UTC m=+0.027753106 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.335 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjjqozi7" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.335 2 DEBUG oslo_concurrency.lockutils [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:47 np0005481065 systemd[1]: Started libpod-conmon-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29.scope.
Oct 11 04:49:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.376 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c54b9bd547d36ffdac517bcbe86f00a809d959f1c5904ed2903ce52567b3128/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.387 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:47 np0005481065 podman[295207]: 2025-10-11 08:49:47.404761636 +0000 UTC m=+0.189754838 container init 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:49:47 np0005481065 podman[295207]: 2025-10-11 08:49:47.416652432 +0000 UTC m=+0.201645584 container start 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 04:49:47 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : New worker (295256) forked
Oct 11 04:49:47 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : Loading success.
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.548 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.549 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Deleting local config drive /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:47 np0005481065 kernel: tap302c88cf-6e: entered promiscuous mode
Oct 11 04:49:47 np0005481065 NetworkManager[44960]: <info>  [1760172587.6152] manager: (tap302c88cf-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:47Z|00135|binding|INFO|Claiming lport 302c88cf-6eba-4200-adfe-6c23d5e6078d for this chassis.
Oct 11 04:49:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:47Z|00136|binding|INFO|302c88cf-6eba-4200-adfe-6c23d5e6078d: Claiming fa:16:3e:52:ed:97 10.100.0.10
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.631 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ed:97 10.100.0.10'], port_security=['fa:16:3e:52:ed:97 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '90e56ca7-b26f-4f83-908d-75204ecd2533', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=302c88cf-6eba-4200-adfe-6c23d5e6078d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.632 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 302c88cf-6eba-4200-adfe-6c23d5e6078d in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.634 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:49:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:47Z|00137|binding|INFO|Setting lport 302c88cf-6eba-4200-adfe-6c23d5e6078d ovn-installed in OVS
Oct 11 04:49:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:47Z|00138|binding|INFO|Setting lport 302c88cf-6eba-4200-adfe-6c23d5e6078d up in Southbound
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 NetworkManager[44960]: <info>  [1760172587.6448] device (tap302c88cf-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:47 np0005481065 NetworkManager[44960]: <info>  [1760172587.6456] device (tap302c88cf-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2ff697-6c33-4702-ace5-60d06f95d439]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.684 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3da2f25e-3b7d-486a-bb91-0d17bedccd7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.689 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b937c492-23a8-4a16-8a3c-55fdf5babb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:47 np0005481065 systemd-machined[215705]: New machine qemu-29-instance-0000001b.
Oct 11 04:49:47 np0005481065 systemd[1]: Started Virtual Machine qemu-29-instance-0000001b.
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca1740d-545d-4fa3-8240-720e994fbc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.745 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c488f287-edf0-4be7-9133-2c5b35519eaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295295, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.770 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cae10ca5-3dda-41c5-b0e3-3f9de5f7ec07]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295299, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295299, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.772 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172587.7720613, 14711e39-46ca-4856-9c19-fa51b869064d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.772 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.772 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.778 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.779 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.779 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.780 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.807 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.813 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172587.7740571, 14711e39-46ca-4856-9c19-fa51b869064d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.814 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.839 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 340 op/s
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.845 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:47 np0005481065 nova_compute[260935]: 2025-10-11 08:49:47.875 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.523 2 DEBUG nova.compute.manager [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.523 2 DEBUG oslo_concurrency.lockutils [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.524 2 DEBUG oslo_concurrency.lockutils [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.524 2 DEBUG oslo_concurrency.lockutils [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.525 2 DEBUG nova.compute.manager [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Processing event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.526 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.531 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.530742, 14711e39-46ca-4856-9c19-fa51b869064d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.531 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.532 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.537 2 INFO nova.virt.libvirt.driver [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance spawned successfully.#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.537 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.554 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.562 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.569 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.569 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.570 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.570 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.571 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.571 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.609 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.609 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.540548, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.610 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.647 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.651 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.5406275, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.652 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.683 2 INFO nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 12.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.683 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.699 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.703 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.724 2 DEBUG nova.compute.manager [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.724 2 DEBUG oslo_concurrency.lockutils [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.724 2 DEBUG oslo_concurrency.lockutils [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.725 2 DEBUG oslo_concurrency.lockutils [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.725 2 DEBUG nova.compute.manager [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Processing event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.726 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.731 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.735 2 INFO nova.virt.libvirt.driver [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance spawned successfully.
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.735 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.790 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.791 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.729687, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.791 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Resumed (Lifecycle Event)
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.837 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.838 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.839 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.839 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.840 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.841 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.903 2 INFO nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 14.23 seconds to build instance.
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.932 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:48 np0005481065 nova_compute[260935]: 2025-10-11 08:49:48.939 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.146 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.152 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.202 2 INFO nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Took 11.67 seconds to spawn the instance on the hypervisor.
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.202 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.270 2 INFO nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Took 14.55 seconds to build instance.
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.289 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 140 op/s
Oct 11 04:49:49 np0005481065 nova_compute[260935]: 2025-10-11 08:49:49.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.168 2 DEBUG oslo_concurrency.lockutils [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] Acquiring lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.168 2 DEBUG oslo_concurrency.lockutils [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] Acquired lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.168 2 DEBUG nova.network.neutron [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.223 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.223 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.239 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.310 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.311 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.317 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.317 2 INFO nova.compute.claims [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:49:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 11 04:49:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 11 04:49:50 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:50 np0005481065 NetworkManager[44960]: <info>  [1760172590.4194] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 11 04:49:50 np0005481065 NetworkManager[44960]: <info>  [1760172590.4206] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:50Z|00139|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 04:49:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:50Z|00140|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 04:49:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:50Z|00141|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:50 np0005481065 nova_compute[260935]: 2025-10-11 08:49:50.564 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.053 2 DEBUG nova.compute.manager [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.054 2 DEBUG oslo_concurrency.lockutils [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.054 2 DEBUG oslo_concurrency.lockutils [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.055 2 DEBUG oslo_concurrency.lockutils [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.055 2 DEBUG nova.compute.manager [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.056 2 WARNING nova.compute.manager [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 for instance with vm_state active and task_state None.
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.129 2 DEBUG nova.compute.manager [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.130 2 DEBUG oslo_concurrency.lockutils [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.130 2 DEBUG oslo_concurrency.lockutils [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.131 2 DEBUG oslo_concurrency.lockutils [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.131 2 DEBUG nova.compute.manager [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] No waiting events found dispatching network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.131 2 WARNING nova.compute.manager [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received unexpected event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d for instance with vm_state active and task_state None.
Oct 11 04:49:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1072475470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.187 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.193 2 DEBUG nova.compute.provider_tree [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.222 2 DEBUG nova.scheduler.client.report [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.249 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.250 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.254 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.254 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.255 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.256 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.256 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.258 2 INFO nova.compute.manager [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Terminating instance
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.260 2 DEBUG nova.compute.manager [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.310 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.311 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:49:51 np0005481065 kernel: tapf1d8b704-c5 (unregistering): left promiscuous mode
Oct 11 04:49:51 np0005481065 NetworkManager[44960]: <info>  [1760172591.3260] device (tapf1d8b704-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.336 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:51Z|00142|binding|INFO|Releasing lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd from this chassis (sb_readonly=0)
Oct 11 04:49:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:51Z|00143|binding|INFO|Setting lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd down in Southbound
Oct 11 04:49:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:51Z|00144|binding|INFO|Removing iface tapf1d8b704-c5 ovn-installed in OVS
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.355 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:11:79 10.100.0.7'], port_security=['fa:16:3e:3c:11:79 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8e4f771a-b87a-40f9-a12e-b5b4583b96f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.356 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f1d8b704-c5df-41f7-b46a-04c0e89ab2cd in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.358 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3741877f-60e5-48cd-9940-f5be88b8d892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.359 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.366 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:49:51 np0005481065 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 11 04:49:51 np0005481065 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000019.scope: Consumed 2.850s CPU time.
Oct 11 04:49:51 np0005481065 systemd-machined[215705]: Machine qemu-27-instance-00000019 terminated.
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.485 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.486 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.486 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Creating image(s)#033[00m
Oct 11 04:49:51 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : haproxy version is 2.8.14-c23fe91
Oct 11 04:49:51 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : path to executable is /usr/sbin/haproxy
Oct 11 04:49:51 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [WARNING]  (294165) : Exiting Master process...
Oct 11 04:49:51 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [ALERT]    (294165) : Current worker (294170) exited with code 143 (Terminated)
Oct 11 04:49:51 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [WARNING]  (294165) : All workers exited. Exiting... (0)
Oct 11 04:49:51 np0005481065 systemd[1]: libpod-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0.scope: Deactivated successfully.
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.531 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:51 np0005481065 podman[295390]: 2025-10-11 08:49:51.536184983 +0000 UTC m=+0.065973437 container died 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:49:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0-userdata-shm.mount: Deactivated successfully.
Oct 11 04:49:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-41c0985dbac57242fc08a988754091dfee60c3f8b611a4151de8d991e5bdb92e-merged.mount: Deactivated successfully.
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.574 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:51 np0005481065 podman[295390]: 2025-10-11 08:49:51.579894579 +0000 UTC m=+0.109683033 container cleanup 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:49:51 np0005481065 systemd[1]: libpod-conmon-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0.scope: Deactivated successfully.
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.605 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.609 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.642 2 INFO nova.virt.libvirt.driver [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance destroyed successfully.#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.643 2 DEBUG nova.objects.instance [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:51 np0005481065 podman[295465]: 2025-10-11 08:49:51.657618627 +0000 UTC m=+0.055043858 container remove 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.667 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3abce80-cc45-466d-a111-97a5db531e3f]: (4, ('Sat Oct 11 08:49:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0)\n012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0\nSat Oct 11 08:49:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0)\n012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.669 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1cc4af-f0ff-4a34-af48-84b8f43e9387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.670 2 DEBUG nova.virt.libvirt.vif [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1760568996',display_name='tempest-ImagesTestJSON-server-1760568996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1760568996',id=25,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ipkecogn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:47Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=8e4f771a-b87a-40f9-a12e-b5b4583b96f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.671 2 DEBUG nova.network.os_vif_util [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.672 2 DEBUG nova.network.os_vif_util [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.672 2 DEBUG os_vif [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:49:51 np0005481065 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1d8b704-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.723 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.724 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.724 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.724 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d077a3d-2bcd-4ddb-8b61-be507bab6c7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.748 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.755 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[535ae7a0-d88f-4c2e-8f2d-89e5e350a78f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4b7d03-1119-41c2-991f-f8b6feaaff3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:51 np0005481065 nova_compute[260935]: 2025-10-11 08:49:51.797 2 INFO os_vif [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5')#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.810 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e85611d-b084-4905-b1f3-9d64be79553f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441692, 'reachable_time': 18702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295519, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.813 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:49:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.813 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6cae64f3-c604-4af4-acce-28db5051c81b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 121 op/s
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.053 2 DEBUG nova.policy [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f557c82d1a6e44f2890ffb382c99df55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4478aa2544ad454daf82ec0d5a6f1b83', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.055 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.119 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] resizing rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.236 2 DEBUG nova.objects.instance [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.252 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.253 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Ensure instance console log exists: /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.254 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.254 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.255 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.295 2 INFO nova.virt.libvirt.driver [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deleting instance files /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_del#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.296 2 INFO nova.virt.libvirt.driver [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deletion of /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_del complete#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.337 2 DEBUG nova.network.neutron [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.369 2 DEBUG oslo_concurrency.lockutils [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] Releasing lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.373 2 DEBUG nova.compute.manager [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.373 2 DEBUG nova.compute.manager [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] network_info to inject: |[{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.383 2 INFO nova.compute.manager [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.383 2 DEBUG oslo.service.loopingcall [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.384 2 DEBUG nova.compute.manager [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.384 2 DEBUG nova.network.neutron [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:49:52 np0005481065 podman[295629]: 2025-10-11 08:49:52.81753682 +0000 UTC m=+0.107980605 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:49:52 np0005481065 nova_compute[260935]: 2025-10-11 08:49:52.912 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Successfully created port: 9007d8a3-8797-49c6-9302-4c8f4d699a45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 11 04:49:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 11 04:49:53 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.517 2 DEBUG nova.network.neutron [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.541 2 INFO nova.compute.manager [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 1.16 seconds to deallocate network for instance.#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.599 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.600 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.829 2 DEBUG oslo_concurrency.processutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 418 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 5.4 MiB/s wr, 429 op/s
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.963 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Successfully updated port: 9007d8a3-8797-49c6-9302-4c8f4d699a45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.977 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.978 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquired lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:53 np0005481065 nova_compute[260935]: 2025-10-11 08:49:53.978 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.155 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805905273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.314 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.315 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG nova.compute.manager [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG nova.compute.manager [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG oslo_concurrency.lockutils [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG oslo_concurrency.lockutils [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.319 2 DEBUG nova.network.neutron [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.334 2 DEBUG oslo_concurrency.processutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.340 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.346 2 DEBUG nova.compute.provider_tree [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.365 2 DEBUG nova.scheduler.client.report [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.392 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-unplugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] No waiting events found dispatching network-vif-unplugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.394 2 WARNING nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received unexpected event network-vif-unplugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.394 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.394 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.395 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.395 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.395 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] No waiting events found dispatching network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.396 2 WARNING nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received unexpected event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.409 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.429 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.429 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.436 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.436 2 INFO nova.compute.claims [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.440 2 INFO nova.scheduler.client.report [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 8e4f771a-b87a-40f9-a12e-b5b4583b96f7#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.514 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.677 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:49:54
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 11 04:49:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.977 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.979 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:54 np0005481065 nova_compute[260935]: 2025-10-11 08:49:54.997 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.084 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.086 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919414430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.108 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Releasing lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.109 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance network_info: |[{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.112 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start _get_guest_xml network_info=[{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.114 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.120 2 DEBUG nova.compute.provider_tree [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.123 2 WARNING nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.128 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.128 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.134 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.135 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.135 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.135 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.136 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.136 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.136 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.138 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.138 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.142 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.169 2 DEBUG nova.scheduler.client.report [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.213 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.214 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.218 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.225 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.225 2 INFO nova.compute.claims [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.302 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.302 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.328 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.349 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.445 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.447 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.447 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Creating image(s)#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.480 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.513 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.539 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.545 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402160965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.589 2 DEBUG nova.network.neutron [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.590 2 DEBUG nova.network.neutron [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.593 2 DEBUG nova.policy [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.596 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.624 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.627 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.666 2 DEBUG oslo_concurrency.lockutils [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.668 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.669 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.670 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.670 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.693 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.697 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b35f4147-9e36-4dab-9ac8-2061c97797f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.734 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.763 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.764 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:49:55 np0005481065 nova_compute[260935]: 2025-10-11 08:49:55.764 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:49:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 418 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 314 op/s
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.015 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b35f4147-9e36-4dab-9ac8-2061c97797f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.063 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554110969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.134 2 DEBUG nova.objects.instance [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.142 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.143 2 DEBUG nova.virt.libvirt.vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-677562955',display_name='tempest-ServersTestManualDisk-server-677562955',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-677562955',id=28,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1CYmGp9cIjn1OUZpClvh/431spCYwxzBbkMOHDBaljNzck8rmRcU7SShxhilRUkaibqgjOLZGNsP/o0nw2t9clDZxZT6xlOhd2BSbNJPRKX/ZsVWrtD5Ho3SYRh/eonw==',key_name='tempest-keypair-542099764',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4478aa2544ad454daf82ec0d5a6f1b83',ramdisk_id='',reservation_id='r-p05i8rcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-133082753',owner_user_name='tempest-ServersTestManualDisk-133082753-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f557c82d1a6e44f2890ffb382c99df55',uuid=9f9aca1c-8e65-435a-bfae-1ff0d4386f58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.143 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converting VIF {"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.144 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.145 2 DEBUG nova.objects.instance [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.182 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.182 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Ensure instance console log exists: /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.182 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.183 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.183 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:49:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212142715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.221 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.253 2 DEBUG nova.compute.provider_tree [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.290 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <uuid>9f9aca1c-8e65-435a-bfae-1ff0d4386f58</uuid>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <name>instance-0000001c</name>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersTestManualDisk-server-677562955</nova:name>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:55</nova:creationTime>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:user uuid="f557c82d1a6e44f2890ffb382c99df55">tempest-ServersTestManualDisk-133082753-project-member</nova:user>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:project uuid="4478aa2544ad454daf82ec0d5a6f1b83">tempest-ServersTestManualDisk-133082753</nova:project>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <nova:port uuid="9007d8a3-8797-49c6-9302-4c8f4d699a45">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <entry name="serial">9f9aca1c-8e65-435a-bfae-1ff0d4386f58</entry>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <entry name="uuid">9f9aca1c-8e65-435a-bfae-1ff0d4386f58</entry>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:34:0d:85"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <target dev="tap9007d8a3-87"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/console.log" append="off"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:49:56 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:49:56 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:49:56 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:49:56 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.290 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Preparing to wait for external event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.290 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG nova.virt.libvirt.vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-677562955',display_name='tempest-ServersTestManualDisk-server-677562955',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-677562955',id=28,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1CYmGp9cIjn1OUZpClvh/431spCYwxzBbkMOHDBaljNzck8rmRcU7SShxhilRUkaibqgjOLZGNsP/o0nw2t9clDZxZT6xlOhd2BSbNJPRKX/ZsVWrtD5Ho3SYRh/eonw==',key_name='tempest-keypair-542099764',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4478aa2544ad454daf82ec0d5a6f1b83',ramdisk_id='',reservation_id='r-p05i8rcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-133082753',owner_user_name='tempest-ServersTestManualDisk-133082753-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f557c82d1a6e44f2890ffb382c99df55',uuid=9f9aca1c-8e65-435a-bfae-1ff0d4386f58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converting VIF {"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.292 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.292 2 DEBUG os_vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9007d8a3-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9007d8a3-87, col_values=(('external_ids', {'iface-id': '9007d8a3-8797-49c6-9302-4c8f4d699a45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:0d:85', 'vm-uuid': '9f9aca1c-8e65-435a-bfae-1ff0d4386f58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:56 np0005481065 NetworkManager[44960]: <info>  [1760172596.3001] manager: (tap9007d8a3-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.306 2 INFO os_vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87')#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.313 2 DEBUG nova.scheduler.client.report [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.368 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.369 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.434 2 DEBUG nova.compute.manager [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-deleted-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.435 2 DEBUG nova.compute.manager [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG nova.compute.manager [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing instance network info cache due to event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG oslo_concurrency.lockutils [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG oslo_concurrency.lockutils [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG nova.network.neutron [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.466 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.466 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.466 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] No VIF found with MAC fa:16:3e:34:0d:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.467 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Using config drive#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.486 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.492 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Successfully created port: c27797c3-6ac7-45ae-9a2e-7fc42908feab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.496 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.496 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.518 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.616 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.753 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.754 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.755 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Creating image(s)#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.787 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.819 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.850 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.855 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.888 2 DEBUG nova.policy [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.941 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.942 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.942 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.943 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.962 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:56 np0005481065 nova_compute[260935]: 2025-10-11 08:49:56.965 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.100 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Creating config drive at /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.128 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdiouyoki execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.204 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.289 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.327 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdiouyoki" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.358 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.362 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.466 2 DEBUG nova.objects.instance [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.512 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.512 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Ensure instance console log exists: /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.513 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.514 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.514 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.535 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.535 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deleting local config drive /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config because it was imported into RBD.#033[00m
Oct 11 04:49:57 np0005481065 kernel: tap9007d8a3-87: entered promiscuous mode
Oct 11 04:49:57 np0005481065 NetworkManager[44960]: <info>  [1760172597.5859] manager: (tap9007d8a3-87): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:57Z|00145|binding|INFO|Claiming lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 for this chassis.
Oct 11 04:49:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:57Z|00146|binding|INFO|9007d8a3-8797-49c6-9302-4c8f4d699a45: Claiming fa:16:3e:34:0d:85 10.100.0.3
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:57Z|00147|binding|INFO|Setting lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 ovn-installed in OVS
Oct 11 04:49:57 np0005481065 systemd-machined[215705]: New machine qemu-30-instance-0000001c.
Oct 11 04:49:57 np0005481065 systemd[1]: Started Virtual Machine qemu-30-instance-0000001c.
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:57 np0005481065 systemd-udevd[296180]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:49:57 np0005481065 NetworkManager[44960]: <info>  [1760172597.6525] device (tap9007d8a3-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:49:57 np0005481065 NetworkManager[44960]: <info>  [1760172597.6554] device (tap9007d8a3-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:49:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:57Z|00148|binding|INFO|Setting lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 up in Southbound
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.712 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0d:85 10.100.0.3'], port_security=['fa:16:3e:34:0d:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f9aca1c-8e65-435a-bfae-1ff0d4386f58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4478aa2544ad454daf82ec0d5a6f1b83', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53a8c688-9a7d-4462-906b-8eed321153ea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861cf0ef-52d4-42d5-8406-b93929d21340, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9007d8a3-8797-49c6-9302-4c8f4d699a45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.715 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9007d8a3-8797-49c6-9302-4c8f4d699a45 in datapath 8e0c9798-3406-4335-baf7-3664e8c2cc2d bound to our chassis#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.720 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e0c9798-3406-4335-baf7-3664e8c2cc2d#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.736 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4affe5d-7876-499d-a612-156bbac9ddc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.737 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e0c9798-31 in ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.739 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e0c9798-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.739 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d8d711-cd63-49aa-a7f5-87f236b9f07b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.740 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0b21eb-ced4-44fd-a10c-8e81bcd0dc51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.751 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6efbd235-6088-4bdd-8c27-d43ec87f3f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.767 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1df3bc41-6ea6-4096-be87-9429df7eb4ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.808 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6816a7c4-5020-48b4-852a-157b3ddd8369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 NetworkManager[44960]: <info>  [1760172597.8192] manager: (tap8e0c9798-30): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.818 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f37ff69-d34f-47f0-bcab-187afb8368fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.870 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[af713344-990b-4f06-b65a-dfddcec6e97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.874 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4463c7ed-c4f3-4195-a1ac-2d6328764ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 NetworkManager[44960]: <info>  [1760172597.9144] device (tap8e0c9798-30): carrier: link connected
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.922 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a4019683-0242-4569-8aa9-46b24b68697f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.942 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfd8288-f6f0-42a9-b8be-38693634d0fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e0c9798-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:48:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444261, 'reachable_time': 31002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296251, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e98ce406-e8c8-440f-bae7-77282d879d69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:4855'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444261, 'tstamp': 444261}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296253, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c921381-a348-4f73-abce-6974b00158c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e0c9798-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:48:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444261, 'reachable_time': 31002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296257, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:57 np0005481065 nova_compute[260935]: 2025-10-11 08:49:57.999 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Successfully updated port: c27797c3-6ac7-45ae-9a2e-7fc42908feab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.018 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaab4b1-20ee-4b4b-acae-9ba8f6a744d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.080 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.080 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.081 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f47fc1-294d-4af6-96f0-cb6c1b5ccd6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.088 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e0c9798-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.088 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.089 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e0c9798-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:58 np0005481065 NetworkManager[44960]: <info>  [1760172598.0909] manager: (tap8e0c9798-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct 11 04:49:58 np0005481065 kernel: tap8e0c9798-30: entered promiscuous mode
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.093 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e0c9798-30, col_values=(('external_ids', {'iface-id': '2e3d9f01-796e-4f56-9d6b-cedcaba6f19e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:49:58 np0005481065 ovn_controller[152945]: 2025-10-11T08:49:58Z|00149|binding|INFO|Releasing lport 2e3d9f01-796e-4f56-9d6b-cedcaba6f19e from this chassis (sb_readonly=0)
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.111 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e0c9798-3406-4335-baf7-3664e8c2cc2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e0c9798-3406-4335-baf7-3664e8c2cc2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d70b7ca5-758b-4d2c-8897-7a00f78a7917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.112 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-8e0c9798-3406-4335-baf7-3664e8c2cc2d
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/8e0c9798-3406-4335-baf7-3664e8c2cc2d.pid.haproxy
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 8e0c9798-3406-4335-baf7-3664e8c2cc2d
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.113 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'env', 'PROCESS_TAG=haproxy-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e0c9798-3406-4335-baf7-3664e8c2cc2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.136 2 DEBUG nova.network.neutron [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updated VIF entry in instance network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.136 2 DEBUG nova.network.neutron [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:49:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 11 04:49:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 11 04:49:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.305 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.383 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.389 2 DEBUG oslo_concurrency.lockutils [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.494 2 DEBUG nova.compute.manager [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG oslo_concurrency.lockutils [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG oslo_concurrency.lockutils [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG oslo_concurrency.lockutils [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG nova.compute.manager [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Processing event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172598.5633805, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.564 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Started (Lifecycle Event)#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.568 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.574 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:49:58 np0005481065 podman[296291]: 2025-10-11 08:49:58.577662807 +0000 UTC m=+0.073323865 container create 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.580 2 INFO nova.virt.libvirt.driver [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance spawned successfully.#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.581 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.591 2 INFO nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Rebuilding instance#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.606 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Successfully created port: 31025494-a361-4c39-aa29-5841978f12e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:49:58 np0005481065 systemd[1]: Started libpod-conmon-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope.
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.638 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:58 np0005481065 podman[296291]: 2025-10-11 08:49:58.550522929 +0000 UTC m=+0.046183977 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.646 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:49:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709c524c656a0c0bd950e9b79a2ea29641c4e55d424e4497bd7851268792db19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.675 2 DEBUG nova.compute.manager [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.676 2 DEBUG nova.compute.manager [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.676 2 DEBUG oslo_concurrency.lockutils [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:58 np0005481065 podman[296291]: 2025-10-11 08:49:58.69727957 +0000 UTC m=+0.192940698 container init 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.697 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.697 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.698 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.698 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.699 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.700 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:49:58 np0005481065 podman[296291]: 2025-10-11 08:49:58.708469416 +0000 UTC m=+0.204130524 container start 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.718 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.719 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172598.56374, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.719 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:49:58 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : New worker (296312) forked
Oct 11 04:49:58 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : Loading success.
Oct 11 04:49:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.784 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.811 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.815 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172598.5718732, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.816 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.839 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.862 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.868 2 INFO nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 7.38 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.868 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.872 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:49:58 np0005481065 nova_compute[260935]: 2025-10-11 08:49:58.915 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.036 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.117 2 INFO nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 8.84 seconds to build instance.#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.132 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_requests' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.221 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.222 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance network_info: |[{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.222 2 DEBUG oslo_concurrency.lockutils [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.222 2 DEBUG nova.network.neutron [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.227 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start _get_guest_xml network_info=[{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.232 2 WARNING nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.240 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.240 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.244 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.244 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.245 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.245 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.247 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.247 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.247 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.248 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.248 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.248 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.251 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.300 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.304 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.349 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.369 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.435 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.440 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.595 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Successfully updated port: 31025494-a361-4c39-aa29-5841978f12e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.616 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.617 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.617 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:49:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:49:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711992532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.764 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.805 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.816 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:49:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Oct 11 04:49:59 np0005481065 nova_compute[260935]: 2025-10-11 08:49:59.860 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:50:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:50:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554418418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.387 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.391 2 DEBUG nova.virt.libvirt.vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.393 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.395 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.399 2 DEBUG nova.objects.instance [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.421 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <uuid>b35f4147-9e36-4dab-9ac8-2061c97797f2</uuid>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <name>instance-0000001d</name>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:name>tempest-tempest.common.compute-instance-1789236056</nova:name>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:49:59</nova:creationTime>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <nova:port uuid="c27797c3-6ac7-45ae-9a2e-7fc42908feab">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <entry name="serial">b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <entry name="uuid">b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:d9:75:86"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <target dev="tapc27797c3-6a"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log" append="off"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:50:00 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:50:00 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:50:00 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:50:00 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.423 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Preparing to wait for external event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.424 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.424 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.424 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.425 2 DEBUG nova.virt.libvirt.vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.426 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.427 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.427 2 DEBUG os_vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc27797c3-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc27797c3-6a, col_values=(('external_ids', {'iface-id': 'c27797c3-6ac7-45ae-9a2e-7fc42908feab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:75:86', 'vm-uuid': 'b35f4147-9e36-4dab-9ac8-2061c97797f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:00 np0005481065 NetworkManager[44960]: <info>  [1760172600.4781] manager: (tapc27797c3-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.488 2 INFO os_vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a')#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.551 2 DEBUG nova.network.neutron [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.552 2 DEBUG nova.network.neutron [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.557 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.557 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.557 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:d9:75:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.557 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Using config drive#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.583 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.590 2 DEBUG oslo_concurrency.lockutils [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.592 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.592 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.593 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.593 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.593 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] No waiting events found dispatching network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.593 2 WARNING nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received unexpected event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.594 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-changed-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.594 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Refreshing instance network info cache due to event network-changed-31025494-a361-4c39-aa29-5841978f12e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.594 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:00 np0005481065 podman[296385]: 2025-10-11 08:50:00.636630186 +0000 UTC m=+0.096150440 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.665 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updating instance_info_cache with network_info: [{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.685 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.686 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance network_info: |[{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.686 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.686 2 DEBUG nova.network.neutron [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Refreshing network info cache for port 31025494-a361-4c39-aa29-5841978f12e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.688 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start _get_guest_xml network_info=[{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.693 2 WARNING nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.697 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.698 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.702 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.702 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.708 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:00 np0005481065 nova_compute[260935]: 2025-10-11 08:50:00.736 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.189 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Creating config drive at /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.195 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6aqn8w0i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:50:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651090801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.282 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.308 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.313 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.345 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6aqn8w0i" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.380 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.385 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.549 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.549 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Deleting local config drive /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config because it was imported into RBD.#033[00m
Oct 11 04:50:01 np0005481065 kernel: tapc27797c3-6a: entered promiscuous mode
Oct 11 04:50:01 np0005481065 NetworkManager[44960]: <info>  [1760172601.6016] manager: (tapc27797c3-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00150|binding|INFO|Claiming lport c27797c3-6ac7-45ae-9a2e-7fc42908feab for this chassis.
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00151|binding|INFO|c27797c3-6ac7-45ae-9a2e-7fc42908feab: Claiming fa:16:3e:d9:75:86 10.100.0.5
Oct 11 04:50:01 np0005481065 systemd-udevd[296533]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.652 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:75:86 10.100.0.5'], port_security=['fa:16:3e:d9:75:86 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b35f4147-9e36-4dab-9ac8-2061c97797f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c27797c3-6ac7-45ae-9a2e-7fc42908feab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.653 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c27797c3-6ac7-45ae-9a2e-7fc42908feab in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.655 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d#033[00m
Oct 11 04:50:01 np0005481065 NetworkManager[44960]: <info>  [1760172601.6640] device (tapc27797c3-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:50:01 np0005481065 NetworkManager[44960]: <info>  [1760172601.6650] device (tapc27797c3-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53021c59-cce1-44ad-bdba-7cc70cd634cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00152|binding|INFO|Setting lport c27797c3-6ac7-45ae-9a2e-7fc42908feab ovn-installed in OVS
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00153|binding|INFO|Setting lport c27797c3-6ac7-45ae-9a2e-7fc42908feab up in Southbound
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 systemd-machined[215705]: New machine qemu-31-instance-0000001d.
Oct 11 04:50:01 np0005481065 systemd[1]: Started Virtual Machine qemu-31-instance-0000001d.
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.709 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.710 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.712 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3727c1a3-155f-490c-9450-faa4174880e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.717 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[43c7c59d-e1e3-4abc-b71f-2fa5e76341bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 kernel: tapa3944a31-95 (unregistering): left promiscuous mode
Oct 11 04:50:01 np0005481065 NetworkManager[44960]: <info>  [1760172601.7324] device (tapa3944a31-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00154|binding|INFO|Releasing lport a3944a31-9560-49ae-b2a5-caaf2736993a from this chassis (sb_readonly=0)
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00155|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a down in Southbound
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00156|binding|INFO|Removing iface tapa3944a31-95 ovn-installed in OVS
Oct 11 04:50:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:50:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255312818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.754 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.761 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8b43786d-0cc5-4761-bffd-6caf031e6032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.786 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.788 2 DEBUG nova.virt.libvirt.vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1000984331',display_name='tempest-ImagesTestJSON-server-1000984331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1000984331',id=30,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ga7cei6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:56Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=cb1503a2-bc9c-4faf-ab16-e7227f8c94f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.788 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:01 np0005481065 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.789 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:01 np0005481065 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 15.399s CPU time.
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.790 2 DEBUG nova.objects.instance [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 systemd-machined[215705]: Machine qemu-24-instance-00000016 terminated.
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.795 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efa36c84-3fe8-4020-a06e-2fae1129283e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296556, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.804 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <uuid>cb1503a2-bc9c-4faf-ab16-e7227f8c94f7</uuid>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <name>instance-0000001e</name>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesTestJSON-server-1000984331</nova:name>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:50:00</nova:creationTime>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <nova:port uuid="31025494-a361-4c39-aa29-5841978f12e9">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <entry name="serial">cb1503a2-bc9c-4faf-ab16-e7227f8c94f7</entry>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <entry name="uuid">cb1503a2-bc9c-4faf-ab16-e7227f8c94f7</entry>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:a8:8e:61"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <target dev="tap31025494-a3"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/console.log" append="off"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:50:01 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:50:01 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:50:01 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:50:01 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.805 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Preparing to wait for external event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.806 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.807 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.807 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.807 2 DEBUG nova.virt.libvirt.vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1000984331',display_name='tempest-ImagesTestJSON-server-1000984331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1000984331',id=30,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ga7cei6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:56Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=cb1503a2-bc9c-4faf-ab16-e7227f8c94f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.808 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.808 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.808 2 DEBUG os_vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31025494-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31025494-a3, col_values=(('external_ids', {'iface-id': '31025494-a361-4c39-aa29-5841978f12e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:8e:61', 'vm-uuid': 'cb1503a2-bc9c-4faf-ab16-e7227f8c94f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:01 np0005481065 NetworkManager[44960]: <info>  [1760172601.8189] manager: (tap31025494-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.821 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6abd965a-3b6f-4416-b7d5-885419e152f3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296557, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296557, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.823 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.828 2 INFO os_vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3')#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.838 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.839 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.839 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.840 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.842 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.844 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:50:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 5.0 MiB/s wr, 72 op/s
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.866 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4d782f-95ad-4df8-9786-6561649e0c6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.889 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.890 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.890 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:a8:8e:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.891 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Using config drive#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.925 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc0a1e9-ef47-4330-84da-065554250ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.928 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e96da15c-c579-406a-93c1-3ebfd9274f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.939 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.949 2 DEBUG nova.network.neutron [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updated VIF entry in instance network info cache for port 31025494-a361-4c39-aa29-5841978f12e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.949 2 DEBUG nova.network.neutron [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updating instance_info_cache with network_info: [{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.960 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0eaa3a-e758-47a1-9f25-35541f152b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:ed:97 10.100.0.10
Oct 11 04:50:01 np0005481065 nova_compute[260935]: 2025-10-11 08:50:01.976 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ada4d1b-8d84-4694-a5e2-2b246cac1aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296591, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:01Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:ed:97 10.100.0.10
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.000 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7b8bd0-3f24-424b-b51f-2589bb1ab69f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296600, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296600, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.002 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.013 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.013 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.013 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.014 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.332 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Creating config drive at /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.341 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvnejti8p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.472 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvnejti8p" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.499 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.505 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.566 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance shutdown successfully after 3 seconds.#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.573 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.580 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.581 2 DEBUG nova.virt.libvirt.vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_
state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:57Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.581 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.582 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.583 2 DEBUG os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3944a31-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.600 2 INFO os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')#033[00m
Oct 11 04:50:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:02Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:95:17 10.100.0.14
Oct 11 04:50:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:02Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:95:17 10.100.0.14
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.754 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.754 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Deleting local config drive /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config because it was imported into RBD.#033[00m
Oct 11 04:50:02 np0005481065 kernel: tap31025494-a3: entered promiscuous mode
Oct 11 04:50:02 np0005481065 NetworkManager[44960]: <info>  [1760172602.8367] manager: (tap31025494-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 11 04:50:02 np0005481065 NetworkManager[44960]: <info>  [1760172602.8526] device (tap31025494-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:50:02 np0005481065 NetworkManager[44960]: <info>  [1760172602.8535] device (tap31025494-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.854 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.855 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing instance network info cache due to event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.855 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.855 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.856 2 DEBUG nova.network.neutron [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:50:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:02Z|00157|binding|INFO|Claiming lport 31025494-a361-4c39-aa29-5841978f12e9 for this chassis.
Oct 11 04:50:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:02Z|00158|binding|INFO|31025494-a361-4c39-aa29-5841978f12e9: Claiming fa:16:3e:a8:8e:61 10.100.0.14
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.881 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:8e:61 10.100.0.14'], port_security=['fa:16:3e:a8:8e:61 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cb1503a2-bc9c-4faf-ab16-e7227f8c94f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=31025494-a361-4c39-aa29-5841978f12e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.882 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 31025494-a361-4c39-aa29-5841978f12e9 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.884 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756#033[00m
Oct 11 04:50:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:02Z|00159|binding|INFO|Setting lport 31025494-a361-4c39-aa29-5841978f12e9 ovn-installed in OVS
Oct 11 04:50:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:02Z|00160|binding|INFO|Setting lport 31025494-a361-4c39-aa29-5841978f12e9 up in Southbound
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.897 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c667153-6a86-4948-9710-262dfeec8703]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.900 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.902 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.902 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83226346-0e51-42dd-83ad-c68720582434]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.903 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e59e98c3-4ab1-4b0a-a056-77aeda04c97c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:02 np0005481065 systemd-machined[215705]: New machine qemu-32-instance-0000001e.
Oct 11 04:50:02 np0005481065 systemd[1]: Started Virtual Machine qemu-32-instance-0000001e.
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.921 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ed67e9cd-a5db-475e-b493-c5e74ecb39f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a989a81-78ec-4094-8d12-dce6288dfc3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.985 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172602.9850109, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:02 np0005481065 nova_compute[260935]: 2025-10-11 08:50:02.986 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Started (Lifecycle Event)#033[00m
Oct 11 04:50:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.996 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bf45fc8b-bca4-46a3-a69d-e61afd2e2dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.003 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:03 np0005481065 NetworkManager[44960]: <info>  [1760172603.0039] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.003 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[685e8927-37b5-4491-b74c-f19385884987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.009 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172602.9851007, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.009 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.037 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.040 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[75b06a2e-c884-4e97-ba2d-cae52500257e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.043 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e1f71d-e278-46de-b087-f8df121ef21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.043 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:50:03 np0005481065 NetworkManager[44960]: <info>  [1760172603.0698] device (tap9bac3530-90): carrier: link connected
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.077 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[da5418b9-e0fc-46b4-a3b7-651b3fbc66b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.095 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fdeda3ed-c3b6-4918-b144-b250ab933fc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444777, 'reachable_time': 33335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296755, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.115 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2270824e-3918-470b-8564-92168a0550e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444777, 'tstamp': 444777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296756, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e36410b-a6de-4b79-bfab-c0315fe3aace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444777, 'reachable_time': 33335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296757, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.160 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting instance files /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.161 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deletion of /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del complete#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[081b531f-e278-4204-92f9-3152d26be302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.257 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af16d1a5-5c73-48d6-b499-feca51623ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.259 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:03 np0005481065 NetworkManager[44960]: <info>  [1760172603.2635] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 11 04:50:03 np0005481065 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:03 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:03Z|00161|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.283 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[68404314-9802-47e2-8fcb-e6111205a59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.286 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:50:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.289 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.480 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.480 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating image(s)#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.522 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.561 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.605 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.610 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.693 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.695 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.697 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.697 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:03 np0005481065 podman[296883]: 2025-10-11 08:50:03.698158306 +0000 UTC m=+0.079007095 container create 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.730 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.735 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:03 np0005481065 podman[296883]: 2025-10-11 08:50:03.656984882 +0000 UTC m=+0.037833691 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:50:03 np0005481065 systemd[1]: Started libpod-conmon-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f.scope.
Oct 11 04:50:03 np0005481065 nova_compute[260935]: 2025-10-11 08:50:03.768 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:50:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c413eaa3c0da3cf86ff785e342f7b62278895f169bd630f3f2a56ec7d8ae13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:50:03 np0005481065 podman[296883]: 2025-10-11 08:50:03.831895128 +0000 UTC m=+0.212743927 container init 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 04:50:03 np0005481065 podman[296883]: 2025-10-11 08:50:03.839230436 +0000 UTC m=+0.220079205 container start 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:50:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 571 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 9.3 MiB/s wr, 315 op/s
Oct 11 04:50:03 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : New worker (296943) forked
Oct 11 04:50:03 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : Loading success.
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.017 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.045 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172604.0337703, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.046 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Started (Lifecycle Event)#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.076 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.082 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.124 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172604.0339808, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.125 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.147 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.150 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.184 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.190 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ensure instance console log exists: /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.193 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start _get_guest_xml network_info=[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.197 2 WARNING nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.202 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.202 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.205 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.206 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.206 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.206 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.208 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.208 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.208 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.209 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.209 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.209 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.225 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004822146964739207 of space, bias 1.0, pg target 1.4466440894217623 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:50:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 04:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974380941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.694 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.734 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:04 np0005481065 nova_compute[260935]: 2025-10-11 08:50:04.740 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401924063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.210 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.213 2 DEBUG nova.virt.libvirt.vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:03Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.213 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.216 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.219 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <uuid>77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</uuid>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <name>instance-00000016</name>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAdminTestJSON-server-1988839763</nova:name>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:50:04</nova:creationTime>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <nova:port uuid="a3944a31-9560-49ae-b2a5-caaf2736993a">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <entry name="serial">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <entry name="uuid">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:bf:d8:0b"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <target dev="tapa3944a31-95"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log" append="off"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:50:05 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:50:05 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:50:05 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:50:05 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.220 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Preparing to wait for external event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.221 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.221 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.222 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.222 2 DEBUG nova.virt.libvirt.vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:03Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.223 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.223 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.224 2 DEBUG os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3944a31-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3944a31-95, col_values=(('external_ids', {'iface-id': 'a3944a31-9560-49ae-b2a5-caaf2736993a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:d8:0b', 'vm-uuid': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:05 np0005481065 NetworkManager[44960]: <info>  [1760172605.2619] manager: (tapa3944a31-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.273 2 INFO os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')#033[00m
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.278348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605278397, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2228, "num_deletes": 262, "total_data_size": 3285683, "memory_usage": 3349072, "flush_reason": "Manual Compaction"}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605298940, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3224280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25666, "largest_seqno": 27893, "table_properties": {"data_size": 3214155, "index_size": 6427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21815, "raw_average_key_size": 21, "raw_value_size": 3193613, "raw_average_value_size": 3076, "num_data_blocks": 281, "num_entries": 1038, "num_filter_entries": 1038, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172418, "oldest_key_time": 1760172418, "file_creation_time": 1760172605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 20762 microseconds, and 14044 cpu microseconds.
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.299109) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3224280 bytes OK
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.299189) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.300801) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.300852) EVENT_LOG_v1 {"time_micros": 1760172605300843, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.300880) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3276149, prev total WAL file size 3276149, number of live WAL files 2.
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.303536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3148KB)], [59(7004KB)]
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605303612, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10396898, "oldest_snapshot_seqno": -1}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5208 keys, 8651138 bytes, temperature: kUnknown
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605364747, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8651138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8614862, "index_size": 22150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 129472, "raw_average_key_size": 24, "raw_value_size": 8519566, "raw_average_value_size": 1635, "num_data_blocks": 911, "num_entries": 5208, "num_filter_entries": 5208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.364966) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8651138 bytes
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.366775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.9 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.8 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5740, records dropped: 532 output_compression: NoCompression
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.366792) EVENT_LOG_v1 {"time_micros": 1760172605366784, "job": 32, "event": "compaction_finished", "compaction_time_micros": 61206, "compaction_time_cpu_micros": 34038, "output_level": 6, "num_output_files": 1, "total_output_size": 8651138, "num_input_records": 5740, "num_output_records": 5208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605367517, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605369519, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.303135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:50:05 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.382 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.383 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.383 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:bf:d8:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.383 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Using config drive#033[00m
Oct 11 04:50:05 np0005481065 podman[297091]: 2025-10-11 08:50:05.39666496 +0000 UTC m=+0.076067393 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.426 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:05 np0005481065 podman[297092]: 2025-10-11 08:50:05.43273669 +0000 UTC m=+0.112586295 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.453 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.489 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'keypairs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.552 2 DEBUG nova.compute.manager [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.552 2 DEBUG oslo_concurrency.lockutils [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.553 2 DEBUG oslo_concurrency.lockutils [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.553 2 DEBUG oslo_concurrency.lockutils [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.553 2 DEBUG nova.compute.manager [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Processing event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.554 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.560 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172605.560264, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.560 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.563 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.566 2 INFO nova.virt.libvirt.driver [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance spawned successfully.#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.566 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.580 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.588 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.591 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.591 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.592 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.592 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.593 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.593 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.619 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.652 2 INFO nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 8.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.653 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.701 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.732 2 INFO nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 10.67 seconds to build instance.#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.740 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.741 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.747 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.756 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating config drive at /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.762 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdweqm5t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 571 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 9.3 MiB/s wr, 315 op/s
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.914 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdweqm5t" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.942 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:05 np0005481065 nova_compute[260935]: 2025-10-11 08:50:05.945 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.111 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.112 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting local config drive /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config because it was imported into RBD.#033[00m
Oct 11 04:50:06 np0005481065 NetworkManager[44960]: <info>  [1760172606.1956] manager: (tapa3944a31-95): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 11 04:50:06 np0005481065 kernel: tapa3944a31-95: entered promiscuous mode
Oct 11 04:50:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:06Z|00162|binding|INFO|Claiming lport a3944a31-9560-49ae-b2a5-caaf2736993a for this chassis.
Oct 11 04:50:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:06Z|00163|binding|INFO|a3944a31-9560-49ae-b2a5-caaf2736993a: Claiming fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.213 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.215 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.221 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:50:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:06Z|00164|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a ovn-installed in OVS
Oct 11 04:50:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:06Z|00165|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a up in Southbound
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.261 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aeaa9762-34c5-48f4-856e-3c9e84c0600b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:06 np0005481065 systemd-machined[215705]: New machine qemu-33-instance-00000016.
Oct 11 04:50:06 np0005481065 systemd[1]: Started Virtual Machine qemu-33-instance-00000016.
Oct 11 04:50:06 np0005481065 systemd-udevd[297211]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:50:06 np0005481065 NetworkManager[44960]: <info>  [1760172606.3037] device (tapa3944a31-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:50:06 np0005481065 NetworkManager[44960]: <info>  [1760172606.3056] device (tapa3944a31-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.305 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c305d89c-34e7-455e-94d5-4aec2ebf8360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.309 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b42813e0-c975-45e3-a3e9-90e41f96facd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.350 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ac2232-ac4d-4025-bd17-bceaa8da3383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea16b7e2-00a7-478e-894f-335ac3e5cf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297221, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.426 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afd424b8-7aa0-4e5c-95da-49ad5c5ac8e8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297223, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297223, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.429 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.474 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.475 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172591.4941485, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.637 2 INFO nova.compute.manager [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:50:06 np0005481065 nova_compute[260935]: 2025-10-11 08:50:06.661 2 DEBUG nova.compute.manager [None req-67fea12c-212a-497b-a538-5724d425b749 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.786 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.302 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.302 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172607.301627, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.302 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.325 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.328 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172607.303803, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.328 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.343 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.346 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:50:07 np0005481065 nova_compute[260935]: 2025-10-11 08:50:07.377 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:50:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:50:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6182 writes, 27K keys, 6182 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6182 writes, 6182 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1563 writes, 7241 keys, 1563 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s#012Interval WAL: 1563 writes, 1563 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     83.9      0.39              0.13        16    0.025       0      0       0.0       0.0#012  L6      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    163.5    132.8      0.81              0.48        15    0.054     69K   8349       0.0       0.0#012 Sum      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    109.9    116.8      1.20              0.61        31    0.039     69K   8349       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    127.4    131.7      0.37              0.22        10    0.037     26K   3090       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    163.5    132.8      0.81              0.48        15    0.054     69K   8349       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     84.7      0.39              0.13        15    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 1.2 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 15.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000131 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(985,14.66 MB,4.82271%) FilterBlock(32,200.73 KB,0.0644834%) IndexBlock(32,368.94 KB,0.118517%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 04:50:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.3 MiB/s wr, 426 op/s
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.056 2 DEBUG nova.compute.manager [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.057 2 DEBUG oslo_concurrency.lockutils [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.057 2 DEBUG oslo_concurrency.lockutils [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.057 2 DEBUG oslo_concurrency.lockutils [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.058 2 DEBUG nova.compute.manager [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] No waiting events found dispatching network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.058 2 WARNING nova.compute.manager [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received unexpected event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.168 2 DEBUG oslo_concurrency.lockutils [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.169 2 DEBUG oslo_concurrency.lockutils [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.170 2 DEBUG nova.compute.manager [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.176 2 DEBUG nova.compute.manager [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.178 2 DEBUG nova.objects.instance [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'flavor' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:08 np0005481065 nova_compute[260935]: 2025-10-11 08:50:08.210 2 DEBUG nova.virt.libvirt.driver [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 04:50:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 367 op/s
Oct 11 04:50:09 np0005481065 nova_compute[260935]: 2025-10-11 08:50:09.929 2 DEBUG nova.network.neutron [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updated VIF entry in instance network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:50:09 np0005481065 nova_compute[260935]: 2025-10-11 08:50:09.931 2 DEBUG nova.network.neutron [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.092 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.094 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.094 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.094 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.095 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.095 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Processing event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.095 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.097 2 WARNING nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.097 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.097 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.097 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No event matching network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a in dict_keys([('network-vif-plugged', 'a3944a31-9560-49ae-b2a5-caaf2736993a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.098 2 WARNING nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state rebuilding.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.099 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.099 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.099 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Processing event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.100 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.100 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.104 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172610.1041093, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.108 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.108 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.113 2 INFO nova.virt.libvirt.driver [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance spawned successfully.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.113 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.115 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance spawned successfully.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.115 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.181 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.185 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.293 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.295 2 WARNING nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state rebuild_spawning.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.296 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.296 2 WARNING nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state rebuild_spawning.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.311 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.311 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.312 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.312 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.312 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.313 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.321 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.322 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.323 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.324 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.325 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.326 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.356 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.358 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172610.1051445, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.359 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.435 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.439 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.483 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.502 2 INFO nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 15.06 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.502 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.545 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.585 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.585 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.586 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.622 2 INFO nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 16.22 seconds to build instance.#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.775 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.822 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:10 np0005481065 nova_compute[260935]: 2025-10-11 08:50:10.953 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.018 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.018 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.019 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.019 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.114 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.114 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.115 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.115 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.116 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:11Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:0d:85 10.100.0.3
Oct 11 04:50:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:11Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:0d:85 10.100.0.3
Oct 11 04:50:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638522075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.624 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.745 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.746 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.749 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.749 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.751 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.752 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.755 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.755 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.760 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.760 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.764 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.765 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.768 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 nova_compute[260935]: 2025-10-11 08:50:11.768 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:50:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 355 op/s
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.021 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.022 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3094MB free_disk=59.72287368774414GB free_vcpus=0 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.022 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.022 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6224c79a-8a36-490c-863a-67251512732f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b3e20035-c079-4ad0-a085-2086be520d1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 14711e39-46ca-4856-9c19-fa51b869064d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 90e56ca7-b26f-4f83-908d-75204ecd2533 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b35f4147-9e36-4dab-9ac8-2061c97797f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.151 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.151 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.323 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431085559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.798 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.805 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.829 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.869 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.870 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:12 np0005481065 nova_compute[260935]: 2025-10-11 08:50:12.884 2 INFO nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Rebuilding instance#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.152 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.176 2 DEBUG nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.227 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_requests' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.242 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.255 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.269 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.280 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:50:13 np0005481065 nova_compute[260935]: 2025-10-11 08:50:13.284 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 04:50:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 577 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.2 MiB/s wr, 549 op/s
Oct 11 04:50:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:15.183 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:15.183 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:15.184 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:15 np0005481065 nova_compute[260935]: 2025-10-11 08:50:15.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:15 np0005481065 nova_compute[260935]: 2025-10-11 08:50:15.685 2 DEBUG nova.compute.manager [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:15 np0005481065 nova_compute[260935]: 2025-10-11 08:50:15.686 2 DEBUG nova.compute.manager [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:50:15 np0005481065 nova_compute[260935]: 2025-10-11 08:50:15.686 2 DEBUG oslo_concurrency.lockutils [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:15 np0005481065 nova_compute[260935]: 2025-10-11 08:50:15.687 2 DEBUG oslo_concurrency.lockutils [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:15 np0005481065 nova_compute[260935]: 2025-10-11 08:50:15.687 2 DEBUG nova.network.neutron [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:50:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 577 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.0 MiB/s wr, 339 op/s
Oct 11 04:50:16 np0005481065 nova_compute[260935]: 2025-10-11 08:50:16.770 2 DEBUG nova.network.neutron [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:50:16 np0005481065 nova_compute[260935]: 2025-10-11 08:50:16.771 2 DEBUG nova.network.neutron [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:16 np0005481065 nova_compute[260935]: 2025-10-11 08:50:16.788 2 DEBUG oslo_concurrency.lockutils [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:17 np0005481065 nova_compute[260935]: 2025-10-11 08:50:17.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:17 np0005481065 nova_compute[260935]: 2025-10-11 08:50:17.804 2 DEBUG nova.compute.manager [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:17 np0005481065 nova_compute[260935]: 2025-10-11 08:50:17.804 2 DEBUG nova.compute.manager [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:50:17 np0005481065 nova_compute[260935]: 2025-10-11 08:50:17.805 2 DEBUG oslo_concurrency.lockutils [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:17 np0005481065 nova_compute[260935]: 2025-10-11 08:50:17.805 2 DEBUG oslo_concurrency.lockutils [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:17 np0005481065 nova_compute[260935]: 2025-10-11 08:50:17.806 2 DEBUG nova.network.neutron [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:50:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 400 op/s
Oct 11 04:50:18 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:18Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:8e:61 10.100.0.14
Oct 11 04:50:18 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:18Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:8e:61 10.100.0.14
Oct 11 04:50:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:18 np0005481065 nova_compute[260935]: 2025-10-11 08:50:18.273 2 DEBUG nova.virt.libvirt.driver [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.327 2 DEBUG nova.compute.manager [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.328 2 DEBUG nova.compute.manager [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.328 2 DEBUG oslo_concurrency.lockutils [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.780 2 DEBUG nova.network.neutron [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.781 2 DEBUG nova.network.neutron [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.783 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.784 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.785 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.785 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.786 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.788 2 INFO nova.compute.manager [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Terminating instance#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.791 2 DEBUG nova.compute.manager [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.796 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.797 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.798 2 DEBUG nova.objects.instance [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.824 2 DEBUG oslo_concurrency.lockutils [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.827 2 DEBUG oslo_concurrency.lockutils [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.828 2 DEBUG nova.network.neutron [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:50:19 np0005481065 kernel: tap9007d8a3-87 (unregistering): left promiscuous mode
Oct 11 04:50:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 255 op/s
Oct 11 04:50:19 np0005481065 NetworkManager[44960]: <info>  [1760172619.8664] device (tap9007d8a3-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:19Z|00166|binding|INFO|Releasing lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 from this chassis (sb_readonly=0)
Oct 11 04:50:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:19Z|00167|binding|INFO|Setting lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 down in Southbound
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:19Z|00168|binding|INFO|Removing iface tap9007d8a3-87 ovn-installed in OVS
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.893 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0d:85 10.100.0.3'], port_security=['fa:16:3e:34:0d:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f9aca1c-8e65-435a-bfae-1ff0d4386f58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4478aa2544ad454daf82ec0d5a6f1b83', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53a8c688-9a7d-4462-906b-8eed321153ea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861cf0ef-52d4-42d5-8406-b93929d21340, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9007d8a3-8797-49c6-9302-4c8f4d699a45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.895 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9007d8a3-8797-49c6-9302-4c8f4d699a45 in datapath 8e0c9798-3406-4335-baf7-3664e8c2cc2d unbound from our chassis#033[00m
Oct 11 04:50:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.898 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e0c9798-3406-4335-baf7-3664e8c2cc2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:50:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83cdd9ea-b47e-4465-b9ab-0b2cf5318d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d namespace which is not needed anymore#033[00m
Oct 11 04:50:19 np0005481065 nova_compute[260935]: 2025-10-11 08:50:19.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:19 np0005481065 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 11 04:50:19 np0005481065 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001c.scope: Consumed 13.010s CPU time.
Oct 11 04:50:19 np0005481065 systemd-machined[215705]: Machine qemu-30-instance-0000001c terminated.
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.041 2 INFO nova.virt.libvirt.driver [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance destroyed successfully.#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.042 2 DEBUG nova.objects.instance [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lazy-loading 'resources' on Instance uuid 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.063 2 DEBUG nova.virt.libvirt.vif [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-677562955',display_name='tempest-ServersTestManualDisk-server-677562955',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-677562955',id=28,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1CYmGp9cIjn1OUZpClvh/431spCYwxzBbkMOHDBaljNzck8rmRcU7SShxhilRUkaibqgjOLZGNsP/o0nw2t9clDZxZT6xlOhd2BSbNJPRKX/ZsVWrtD5Ho3SYRh/eonw==',key_name='tempest-keypair-542099764',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4478aa2544ad454daf82ec0d5a6f1b83',ramdisk_id='',reservation_id='r-p05i8rcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-133082753',owner_user_name='tempest-ServersTestManualDisk-133082753-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f557c82d1a6e44f2890ffb382c99df55',uuid=9f9aca1c-8e65-435a-bfae-1ff0d4386f58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.065 2 DEBUG nova.network.os_vif_util [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converting VIF {"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.068 2 DEBUG nova.network.os_vif_util [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.069 2 DEBUG os_vif [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : haproxy version is 2.8.14-c23fe91
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : path to executable is /usr/sbin/haproxy
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [WARNING]  (296310) : Exiting Master process...
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [WARNING]  (296310) : Exiting Master process...
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [ALERT]    (296310) : Current worker (296312) exited with code 143 (Terminated)
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [WARNING]  (296310) : All workers exited. Exiting... (0)
Oct 11 04:50:20 np0005481065 systemd[1]: libpod-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope: Deactivated successfully.
Oct 11 04:50:20 np0005481065 conmon[296306]: conmon 034d4af584d5ac1017cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope/container/memory.events
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.079 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9007d8a3-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 podman[297339]: 2025-10-11 08:50:20.084613306 +0000 UTC m=+0.061633904 container died 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.088 2 INFO os_vif [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87')#033[00m
Oct 11 04:50:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705-userdata-shm.mount: Deactivated successfully.
Oct 11 04:50:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-709c524c656a0c0bd950e9b79a2ea29641c4e55d424e4497bd7851268792db19-merged.mount: Deactivated successfully.
Oct 11 04:50:20 np0005481065 podman[297339]: 2025-10-11 08:50:20.13390365 +0000 UTC m=+0.110924248 container cleanup 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:50:20 np0005481065 systemd[1]: libpod-conmon-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope: Deactivated successfully.
Oct 11 04:50:20 np0005481065 podman[297390]: 2025-10-11 08:50:20.207896562 +0000 UTC m=+0.050485009 container remove 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.213 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[41336208-f98a-4514-9fe3-a6d26fa95365]: (4, ('Sat Oct 11 08:50:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d (034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705)\n034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705\nSat Oct 11 08:50:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d (034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705)\n034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.216 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87253f51-2b89-4206-8d1f-61a4f659805a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.217 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e0c9798-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 kernel: tap8e0c9798-30: left promiscuous mode
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09678b3e-1ab1-4fc8-87ee-df1d8fde322c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.288 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c39aa3-faac-48ac-ab59-0c27aa1f11de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.290 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76274d3d-4575-4873-ba0d-6ccaba625612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[697912a6-a6da-48bd-a2af-e389baefe863]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444250, 'reachable_time': 22433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297409, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 systemd[1]: run-netns-ovnmeta\x2d8e0c9798\x2d3406\x2d4335\x2dbaf7\x2d3664e8c2cc2d.mount: Deactivated successfully.
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.315 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.315 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1240b7de-c4dc-448b-afe2-b9b8eda407bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.535 2 INFO nova.virt.libvirt.driver [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deleting instance files /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_del#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.536 2 INFO nova.virt.libvirt.driver [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deletion of /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_del complete#033[00m
Oct 11 04:50:20 np0005481065 kernel: tap31025494-a3 (unregistering): left promiscuous mode
Oct 11 04:50:20 np0005481065 NetworkManager[44960]: <info>  [1760172620.5745] device (tap31025494-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:20Z|00169|binding|INFO|Releasing lport 31025494-a361-4c39-aa29-5841978f12e9 from this chassis (sb_readonly=0)
Oct 11 04:50:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:20Z|00170|binding|INFO|Setting lport 31025494-a361-4c39-aa29-5841978f12e9 down in Southbound
Oct 11 04:50:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:20Z|00171|binding|INFO|Removing iface tap31025494-a3 ovn-installed in OVS
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.598 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:8e:61 10.100.0.14'], port_security=['fa:16:3e:a8:8e:61 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cb1503a2-bc9c-4faf-ab16-e7227f8c94f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=31025494-a361-4c39-aa29-5841978f12e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.600 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 31025494-a361-4c39-aa29-5841978f12e9 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.603 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.604 2 INFO nova.compute.manager [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.604 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ac4138-ada5-4d7f-8f7e-7ff0bae9e47a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.604 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.604 2 DEBUG oslo.service.loopingcall [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.606 2 DEBUG nova.compute.manager [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.606 2 DEBUG nova.network.neutron [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 11 04:50:20 np0005481065 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001e.scope: Consumed 12.310s CPU time.
Oct 11 04:50:20 np0005481065 systemd-machined[215705]: Machine qemu-32-instance-0000001e terminated.
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : haproxy version is 2.8.14-c23fe91
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : path to executable is /usr/sbin/haproxy
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [ALERT]    (296941) : Current worker (296943) exited with code 143 (Terminated)
Oct 11 04:50:20 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [WARNING]  (296941) : All workers exited. Exiting... (0)
Oct 11 04:50:20 np0005481065 systemd[1]: libpod-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f.scope: Deactivated successfully.
Oct 11 04:50:20 np0005481065 podman[297431]: 2025-10-11 08:50:20.75660103 +0000 UTC m=+0.048974526 container died 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:50:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f-userdata-shm.mount: Deactivated successfully.
Oct 11 04:50:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-34c413eaa3c0da3cf86ff785e342f7b62278895f169bd630f3f2a56ec7d8ae13-merged.mount: Deactivated successfully.
Oct 11 04:50:20 np0005481065 podman[297431]: 2025-10-11 08:50:20.809019242 +0000 UTC m=+0.101392728 container cleanup 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:20 np0005481065 systemd[1]: libpod-conmon-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f.scope: Deactivated successfully.
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.895 2 DEBUG nova.objects.instance [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:20 np0005481065 podman[297463]: 2025-10-11 08:50:20.925962439 +0000 UTC m=+0.073989033 container remove 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.932 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.963 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac495425-9e7c-4bae-8e64-4f5b0571c451]: (4, ('Sat Oct 11 08:50:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f)\n8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f\nSat Oct 11 08:50:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f)\n8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.965 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae56427-0a74-475b-9310-9e41eccec376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.966 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:20 np0005481065 nova_compute[260935]: 2025-10-11 08:50:20.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:20 np0005481065 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b46fd84c-dfcd-4de0-bff7-c773a2bb1854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bf6a2b-7393-4804-b076-ce65758ce770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06d96047-901d-4830-b886-6a6692692e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.047 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caac0c1a-a4e6-4a9c-8fea-8aa16d2034f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444769, 'reachable_time': 34264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297488, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.050 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:50:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.050 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d24a39ad-adf5-47bd-8376-c33423bc9be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:21 np0005481065 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.237 2 DEBUG nova.policy [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.289 2 INFO nova.virt.libvirt.driver [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.296 2 INFO nova.virt.libvirt.driver [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance destroyed successfully.#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.296 2 DEBUG nova.objects.instance [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'numa_topology' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.329 2 DEBUG nova.compute.manager [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.451 2 DEBUG oslo_concurrency.lockutils [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG nova.compute.manager [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-unplugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG oslo_concurrency.lockutils [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG oslo_concurrency.lockutils [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG oslo_concurrency.lockutils [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.856 2 DEBUG nova.compute.manager [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] No waiting events found dispatching network-vif-unplugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:21 np0005481065 nova_compute[260935]: 2025-10-11 08:50:21.856 2 DEBUG nova.compute.manager [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-unplugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:50:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 255 op/s
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.070 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Successfully updated port: f045b3aa-3ff6-4dea-ad61-a59a01735124 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.087 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.087 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.087 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:22Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:75:86 10.100.0.5
Oct 11 04:50:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:22Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:75:86 10.100.0.5
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.393 2 DEBUG nova.network.neutron [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.394 2 DEBUG nova.network.neutron [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.403 2 WARNING nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.416 2 DEBUG oslo_concurrency.lockutils [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.488 2 DEBUG nova.network.neutron [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.510 2 INFO nova.compute.manager [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 1.90 seconds to deallocate network for instance.#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.552 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.553 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.756 2 DEBUG oslo_concurrency.processutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.813 2 DEBUG nova.compute.manager [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.814 2 DEBUG nova.compute.manager [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:50:22 np0005481065 nova_compute[260935]: 2025-10-11 08:50:22.815 2 DEBUG oslo_concurrency.lockutils [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:50:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134780301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.236 2 DEBUG oslo_concurrency.processutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.241 2 DEBUG nova.compute.provider_tree [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.268 2 DEBUG nova.scheduler.client.report [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.290 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.327 2 INFO nova.scheduler.client.report [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Deleted allocations for instance 9f9aca1c-8e65-435a-bfae-1ff0d4386f58#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.331 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.412 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.477 2 DEBUG nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.513 2 INFO nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] instance snapshotting#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.513 2 WARNING nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.725 2 INFO nova.virt.libvirt.driver [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Beginning cold snapshot process#033[00m
Oct 11 04:50:23 np0005481065 podman[297513]: 2025-10-11 08:50:23.76614239 +0000 UTC m=+0.070228378 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:50:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 579 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 8.4 MiB/s wr, 369 op/s
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.900 2 DEBUG nova.virt.libvirt.imagebackend [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.905 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.927 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.928 2 DEBUG oslo_concurrency.lockutils [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.929 2 DEBUG nova.network.neutron [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.934 2 DEBUG nova.virt.libvirt.vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.934 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.936 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.936 2 DEBUG os_vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf045b3aa-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf045b3aa-3f, col_values=(('external_ids', {'iface-id': 'f045b3aa-3ff6-4dea-ad61-a59a01735124', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3e:c0', 'vm-uuid': '14711e39-46ca-4856-9c19-fa51b869064d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:23 np0005481065 NetworkManager[44960]: <info>  [1760172623.9495] manager: (tapf045b3aa-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.962 2 INFO os_vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f')#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.963 2 DEBUG nova.virt.libvirt.vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.964 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.964 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.968 2 DEBUG nova.virt.libvirt.guest [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 04:50:23 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 04:50:23 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:50:23 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:50:23 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:50:23 np0005481065 nova_compute[260935]:  <target dev="tapf045b3aa-3f"/>
Oct 11 04:50:23 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:50:23 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 04:50:23 np0005481065 kernel: tapf045b3aa-3f: entered promiscuous mode
Oct 11 04:50:23 np0005481065 NetworkManager[44960]: <info>  [1760172623.9868] manager: (tapf045b3aa-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Oct 11 04:50:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:23Z|00172|binding|INFO|Claiming lport f045b3aa-3ff6-4dea-ad61-a59a01735124 for this chassis.
Oct 11 04:50:23 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:23Z|00173|binding|INFO|f045b3aa-3ff6-4dea-ad61-a59a01735124: Claiming fa:16:3e:ee:3e:c0 10.100.0.7
Oct 11 04:50:23 np0005481065 nova_compute[260935]: 2025-10-11 08:50:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.001 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3e:c0 10.100.0.7'], port_security=['fa:16:3e:ee:3e:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f045b3aa-3ff6-4dea-ad61-a59a01735124) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.002 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.005 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 04:50:24 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:24Z|00174|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 ovn-installed in OVS
Oct 11 04:50:24 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:24Z|00175|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 up in Southbound
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:24 np0005481065 systemd-udevd[297577]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:50:24 np0005481065 NetworkManager[44960]: <info>  [1760172624.0484] device (tapf045b3aa-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:50:24 np0005481065 NetworkManager[44960]: <info>  [1760172624.0490] device (tapf045b3aa-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.049 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.050 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.051 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.051 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.051 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] No waiting events found dispatching network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.051 2 WARNING nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received unexpected event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-unplugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.052 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7706dcd6-4be2-4751-8f21-2a34dc65b9ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.053 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] No waiting events found dispatching network-vif-unplugged-31025494-a361-4c39-aa29-5841978f12e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.053 2 WARNING nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received unexpected event network-vif-unplugged-31025494-a361-4c39-aa29-5841978f12e9 for instance with vm_state stopped and task_state image_uploading.#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.053 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.053 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] No waiting events found dispatching network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.054 2 WARNING nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received unexpected event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 for instance with vm_state stopped and task_state image_uploading.#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-deleted-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.078 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.079 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.079 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:56:95:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.079 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:ee:3e:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:50:24 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:50:24 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.096 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad99747a-2dc9-4710-bdbc-2833a492a17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.098 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a32b3677-e906-4ce0-a0c6-475a07f11c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.102 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(5a11c0b76fc649679b388e414df4e2e7) on rbd image(cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.131 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f477e0e3-df53-40ec-b0f8-868d083ee114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.147 2 DEBUG nova.virt.libvirt.guest [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:50:24</nova:creationTime>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 04:50:24 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 04:50:24 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:50:24 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:50:24 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:50:24 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.153 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5eea39-2b12-4ebf-876a-48d6ce39fe1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297604, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.175 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b864eef-0a5d-4e2b-9862-e896ad20886a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297605, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297605, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.178 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.223 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:50:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.223 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:50:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 11 04:50:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 11 04:50:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.434 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk@5a11c0b76fc649679b388e414df4e2e7 to images/84322915-869e-419d-b9c3-505cab3e7ea4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:50:24 np0005481065 nova_compute[260935]: 2025-10-11 08:50:24.551 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/84322915-869e-419d-b9c3-505cab3e7ea4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 04:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.000 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(5a11c0b76fc649679b388e414df4e2e7) on rbd image(cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.195 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.196 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-f045b3aa-3ff6-4dea-ad61-a59a01735124. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.197 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:50:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 11 04:50:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 11 04:50:25 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.444 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(84322915-869e-419d-b9c3-505cab3e7ea4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.492 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.493 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.509 2 DEBUG nova.objects.instance [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.528 2 DEBUG nova.virt.libvirt.vif [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.528 2 DEBUG nova.network.os_vif_util [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.529 2 DEBUG nova.network.os_vif_util [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.533 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.536 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.542 2 DEBUG nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tapf045b3aa-3f from instance 14711e39-46ca-4856-9c19-fa51b869064d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.543 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <target dev="tapf045b3aa-3f"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.551 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.557 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface>not found in domain: <domain type='kvm' id='28'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <name>instance-0000001a</name>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <uuid>14711e39-46ca-4856-9c19-fa51b869064d</uuid>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:50:24</nova:creationTime>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <resource>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </resource>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='serial'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='uuid'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk' index='2'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk.config' index='1'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:56:95:17'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target dev='tapac842ebf-4f'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:ee:3e:c0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target dev='tapf045b3aa-3f'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='net1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log' append='off'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </target>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/4'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log' append='off'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </console>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </input>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c266,c549</label>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c266,c549</imagelabel>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.557 2 INFO nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapf045b3aa-3f from instance 14711e39-46ca-4856-9c19-fa51b869064d from the persistent domain config.
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.558 2 DEBUG nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tapf045b3aa-3f with device alias net1 from instance 14711e39-46ca-4856-9c19-fa51b869064d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.559 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <target dev="tapf045b3aa-3f"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 04:50:25 np0005481065 kernel: tapf045b3aa-3f (unregistering): left promiscuous mode
Oct 11 04:50:25 np0005481065 NetworkManager[44960]: <info>  [1760172625.6871] device (tapf045b3aa-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.710 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172625.7100601, 14711e39-46ca-4856-9c19-fa51b869064d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.713 2 DEBUG nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tapf045b3aa-3f with device alias net1 for instance 14711e39-46ca-4856-9c19-fa51b869064d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.713 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 04:50:25 np0005481065 nova_compute[260935]: 2025-10-11 08:50:25.719 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface>not found in domain: <domain type='kvm' id='28'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <name>instance-0000001a</name>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <uuid>14711e39-46ca-4856-9c19-fa51b869064d</uuid>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:50:24</nova:creationTime>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:50:25 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <resource>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </resource>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='serial'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='uuid'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk' index='2'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk.config' index='1'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    </controller>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 04:50:25 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.811 2 DEBUG nova.objects.instance [None req-d320f62a-71b6-4e37-ae9f-8e5ee6c823e1 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:50:43 np0005481065 systemd[1]: libpod-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope: Deactivated successfully.
Oct 11 04:50:43 np0005481065 systemd[1]: libpod-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope: Consumed 1.129s CPU time.
Oct 11 04:50:43 np0005481065 podman[299576]: 2025-10-11 08:50:43.844840537 +0000 UTC m=+1.463936931 container died adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.847 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172643.8431635, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.847 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Paused (Lifecycle Event)
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.860 2 DEBUG oslo_concurrency.processutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Oct 11 04:50:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bc15b90a81b0990b065736f737a03d2d9ef0df3d07ac9cb481faf4f38f388e49-merged.mount: Deactivated successfully.
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.901 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.907 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:50:43 np0005481065 podman[299576]: 2025-10-11 08:50:43.914381263 +0000 UTC m=+1.533477657 container remove adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:50:43 np0005481065 systemd[1]: libpod-conmon-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope: Deactivated successfully.
Oct 11 04:50:43 np0005481065 nova_compute[260935]: 2025-10-11 08:50:43.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 04:50:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:50:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:50:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:50:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:50:43 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9554ce56-e39b-4f41-a0ab-d3cd42269de9 does not exist
Oct 11 04:50:43 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 46d48053-a235-4f39-9027-bde80e6a6dde does not exist
Oct 11 04:50:44 np0005481065 kernel: tapd190526e-2b (unregistering): left promiscuous mode
Oct 11 04:50:44 np0005481065 NetworkManager[44960]: <info>  [1760172644.0318] device (tapd190526e-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:44 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:44Z|00211|binding|INFO|Releasing lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 from this chassis (sb_readonly=0)
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:44 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:44Z|00212|binding|INFO|Setting lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 down in Southbound
Oct 11 04:50:44 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:44Z|00213|binding|INFO|Removing iface tapd190526e-2b ovn-installed in OVS
Oct 11 04:50:44 np0005481065 rsyslogd[1003]: imjournal: 2545 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.051 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:75:39 10.100.0.3'], port_security=['fa:16:3e:57:75:39 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ac3851d8-5df2-4f84-9b28-a5fbf1c31b62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d190526e-2bf1-4e6c-925a-2cc0c2b359a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.055 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d190526e-2bf1-4e6c-925a-2cc0c2b359a8 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.057 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b3dff-27fd-4c36-b369-486dc83ea9ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.059 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:44 np0005481065 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 11 04:50:44 np0005481065 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 2.693s CPU time.
Oct 11 04:50:44 np0005481065 systemd-machined[215705]: Machine qemu-35-instance-0000001f terminated.
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:44 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [NOTICE]   (299495) : haproxy version is 2.8.14-c23fe91
Oct 11 04:50:44 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [NOTICE]   (299495) : path to executable is /usr/sbin/haproxy
Oct 11 04:50:44 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [ALERT]    (299495) : Current worker (299502) exited with code 143 (Terminated)
Oct 11 04:50:44 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [WARNING]  (299495) : All workers exited. Exiting... (0)
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.262 2 DEBUG nova.compute.manager [None req-d320f62a-71b6-4e37-ae9f-8e5ee6c823e1 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:50:44 np0005481065 systemd[1]: libpod-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d.scope: Deactivated successfully.
Oct 11 04:50:44 np0005481065 podman[299772]: 2025-10-11 08:50:44.273264223 +0000 UTC m=+0.103380605 container died d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:50:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d-userdata-shm.mount: Deactivated successfully.
Oct 11 04:50:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b0346dbffe7c0e5b77d041061aabf8a470abfa12a3cff5967778da0d06409088-merged.mount: Deactivated successfully.
Oct 11 04:50:44 np0005481065 podman[299772]: 2025-10-11 08:50:44.328047172 +0000 UTC m=+0.158163574 container cleanup d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.331 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-unplugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] No waiting events found dispatching network-vif-unplugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.332 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received unexpected event network-vif-unplugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 for instance with vm_state deleted and task_state None.
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] No waiting events found dispatching network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.333 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received unexpected event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 for instance with vm_state deleted and task_state None.
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-unplugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-unplugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-unplugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-deleted-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab for instance with vm_state active and task_state deleting.
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-unplugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] No waiting events found dispatching network-vif-unplugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.336 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received unexpected event network-vif-unplugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 for instance with vm_state active and task_state suspending.#033[00m
Oct 11 04:50:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159873179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:44 np0005481065 systemd[1]: libpod-conmon-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d.scope: Deactivated successfully.
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.372 2 DEBUG oslo_concurrency.processutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.378 2 DEBUG nova.compute.provider_tree [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.392 2 DEBUG nova.scheduler.client.report [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.410 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:44 np0005481065 podman[299812]: 2025-10-11 08:50:44.426597479 +0000 UTC m=+0.063228019 container remove d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.434 2 INFO nova.scheduler.client.report [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance b3e20035-c079-4ad0-a085-2086be520d1d#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.440 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9545df2d-1c4d-47db-82ee-17afcf3079e2]: (4, ('Sat Oct 11 08:50:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d)\nd6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d\nSat Oct 11 08:50:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d)\nd6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa19906-e538-48f2-ac10-03fe8a76f533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.443 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:44 np0005481065 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed33538c-59c2-4ede-9063-a28939362540]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.506 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.517 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30ed4ad1-9745-4fc9-bd86-d8a93b648cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.518 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6016f5f8-86a4-4246-b855-b6ccaf1dcb69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.529 2 INFO nova.network.neutron [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.530 2 DEBUG nova.network.neutron [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.539 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2a5f85-48e1-49c2-b00b-1465a1de1aa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448564, 'reachable_time': 23723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299832, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.546 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:50:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.547 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9d2bcc-0101-42b5-a2b9-52532e604bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.566 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:50:44 np0005481065 nova_compute[260935]: 2025-10-11 08:50:44.599 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:50:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:50:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:45Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:50:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:45Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.135 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.136 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.136 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.137 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.137 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.139 2 INFO nova.compute.manager [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Terminating instance#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.141 2 DEBUG nova.compute.manager [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 kernel: tapb9569700-d7 (unregistering): left promiscuous mode
Oct 11 04:50:45 np0005481065 NetworkManager[44960]: <info>  [1760172645.2133] device (tapb9569700-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:45Z|00214|binding|INFO|Releasing lport b9569700-d7dc-40dc-a27c-1f72d3675682 from this chassis (sb_readonly=0)
Oct 11 04:50:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:45Z|00215|binding|INFO|Setting lport b9569700-d7dc-40dc-a27c-1f72d3675682 down in Southbound
Oct 11 04:50:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:45Z|00216|binding|INFO|Removing iface tapb9569700-d7 ovn-installed in OVS
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.239 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:3f:af 10.100.0.14'], port_security=['fa:16:3e:20:3f:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6224c79a-8a36-490c-863a-67251512732f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b9569700-d7dc-40dc-a27c-1f72d3675682) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b9569700-d7dc-40dc-a27c-1f72d3675682 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.246 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.275 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4d16fd-564d-40e7-ba76-1279ada9ddf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:45 np0005481065 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 11 04:50:45 np0005481065 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 19.159s CPU time.
Oct 11 04:50:45 np0005481065 systemd-machined[215705]: Machine qemu-25-instance-00000017 terminated.
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.323 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcc4d94-d48b-47c4-81f0-77933f88e2e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.327 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ada7c67a-547c-4ec9-aea5-74a52702674d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.383 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7352b98f-344f-4930-b91b-75755b4c7338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.394 2 INFO nova.virt.libvirt.driver [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance destroyed successfully.#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.394 2 DEBUG nova.objects.instance [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 6224c79a-8a36-490c-863a-67251512732f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.413 2 DEBUG nova.virt.libvirt.vif [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-993933628',display_name='tempest-ServersAdminTestJSON-server-993933628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-993933628',id=23,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-64hxjs52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:09Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=6224c79a-8a36-490c-863a-67251512732f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.414 2 DEBUG nova.network.os_vif_util [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.415 2 DEBUG nova.network.os_vif_util [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.415 2 DEBUG os_vif [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9569700-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.422 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c16eee5f-bc4e-46d9-abdf-428aa66fc27f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299855, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.474 2 INFO os_vif [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7')#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.485 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09a3f543-9f1c-44d2-af40-48b865f7d76b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299858, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299858, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.486 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.497 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.497 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.896 2 DEBUG nova.network.neutron [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.911 2 INFO nova.compute.manager [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 2.23 seconds to deallocate network for instance.#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.920 2 INFO nova.virt.libvirt.driver [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deleting instance files /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f_del#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.921 2 INFO nova.virt.libvirt.driver [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deletion of /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f_del complete#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.979 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.980 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.987 2 INFO nova.compute.manager [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.987 2 DEBUG oslo.service.loopingcall [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.988 2 DEBUG nova.compute.manager [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:50:45 np0005481065 nova_compute[260935]: 2025-10-11 08:50:45.988 2 DEBUG nova.network.neutron [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.158 2 DEBUG oslo_concurrency.processutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.562 2 DEBUG nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.563 2 DEBUG oslo_concurrency.lockutils [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.564 2 DEBUG oslo_concurrency.lockutils [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.564 2 DEBUG oslo_concurrency.lockutils [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.564 2 DEBUG nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] No waiting events found dispatching network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.565 2 WARNING nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received unexpected event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 for instance with vm_state suspended and task_state image_snapshot_pending.#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.565 2 DEBUG nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-deleted-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323926474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.596 2 DEBUG oslo_concurrency.processutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.607 2 DEBUG nova.compute.provider_tree [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.626 2 DEBUG nova.scheduler.client.report [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.668 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.674 2 DEBUG nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.716 2 INFO nova.scheduler.client.report [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance b35f4147-9e36-4dab-9ac8-2061c97797f2#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.731 2 DEBUG nova.network.neutron [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.740 2 INFO nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] instance snapshotting#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.740 2 WARNING nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.750 2 INFO nova.compute.manager [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.807 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.811 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.812 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:46 np0005481065 nova_compute[260935]: 2025-10-11 08:50:46.932 2 DEBUG oslo_concurrency.processutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.001 2 INFO nova.virt.libvirt.driver [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Beginning cold snapshot process#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.132 2 DEBUG nova.virt.libvirt.imagebackend [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3562417516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.400 2 DEBUG oslo_concurrency.processutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.407 2 DEBUG nova.compute.provider_tree [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.427 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(854a61dcba4c481eaeae67e6378c78dd) on rbd image(ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.461 2 DEBUG nova.scheduler.client.report [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.491 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.512 2 INFO nova.scheduler.client.report [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance 6224c79a-8a36-490c-863a-67251512732f#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.616 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.752 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.753 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.753 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.754 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.754 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.757 2 INFO nova.compute.manager [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Terminating instance#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.759 2 DEBUG nova.compute.manager [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:50:47 np0005481065 kernel: tapac842ebf-4f (unregistering): left promiscuous mode
Oct 11 04:50:47 np0005481065 NetworkManager[44960]: <info>  [1760172647.8335] device (tapac842ebf-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:47Z|00217|binding|INFO|Releasing lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 from this chassis (sb_readonly=0)
Oct 11 04:50:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:47Z|00218|binding|INFO|Setting lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 down in Southbound
Oct 11 04:50:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:47Z|00219|binding|INFO|Removing iface tapac842ebf-4f ovn-installed in OVS
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.863 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:95:17 10.100.0.14'], port_security=['fa:16:3e:56:95:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac842ebf-4fca-4930-a4d1-3e8a6760d441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.865 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac842ebf-4fca-4930-a4d1-3e8a6760d441 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis#033[00m
Oct 11 04:50:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 255 op/s
Oct 11 04:50:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.869 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff13396-b787-4c6e-9112-a1c2ef57b26d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:50:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.870 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff62cdc-e72a-4910-9bf7-b004dc6f1f8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.870 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace which is not needed anymore#033[00m
Oct 11 04:50:47 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:47 np0005481065 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 11 04:50:47 np0005481065 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Consumed 16.855s CPU time.
Oct 11 04:50:47 np0005481065 systemd-machined[215705]: Machine qemu-28-instance-0000001a terminated.
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:47.999 2 INFO nova.virt.libvirt.driver [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance destroyed successfully.#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.000 2 DEBUG nova.objects.instance [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 11 04:50:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 11 04:50:48 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.021 2 DEBUG nova.virt.libvirt.vif [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.022 2 DEBUG nova.network.os_vif_util [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.024 2 DEBUG nova.network.os_vif_util [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.024 2 DEBUG os_vif [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac842ebf-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : haproxy version is 2.8.14-c23fe91
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : path to executable is /usr/sbin/haproxy
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [WARNING]  (295247) : Exiting Master process...
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.036 2 INFO os_vif [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f')#033[00m
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [ALERT]    (295247) : Current worker (295256) exited with code 143 (Terminated)
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [WARNING]  (295247) : All workers exited. Exiting... (0)
Oct 11 04:50:48 np0005481065 systemd[1]: libpod-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29.scope: Deactivated successfully.
Oct 11 04:50:48 np0005481065 podman[300000]: 2025-10-11 08:50:48.046666635 +0000 UTC m=+0.061826000 container died 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:50:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29-userdata-shm.mount: Deactivated successfully.
Oct 11 04:50:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1c54b9bd547d36ffdac517bcbe86f00a809d959f1c5904ed2903ce52567b3128-merged.mount: Deactivated successfully.
Oct 11 04:50:48 np0005481065 podman[300000]: 2025-10-11 08:50:48.109956965 +0000 UTC m=+0.125116330 container cleanup 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.113 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk@854a61dcba4c481eaeae67e6378c78dd to images/a3f24c7b-a065-417e-b63f-e5280c978ce3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:50:48 np0005481065 systemd[1]: libpod-conmon-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29.scope: Deactivated successfully.
Oct 11 04:50:48 np0005481065 podman[300062]: 2025-10-11 08:50:48.204840578 +0000 UTC m=+0.063675022 container remove 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.215 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b59bc355-95ca-4957-abcd-b7881fdd5c94]: (4, ('Sat Oct 11 08:50:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29)\n7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29\nSat Oct 11 08:50:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29)\n7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.217 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdd70f1-f16a-4a2c-aa0d-9d5e6c407dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.218 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 kernel: tapfff13396-b0: left promiscuous mode
Oct 11 04:50:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.258 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0c3f56-4119-4df9-a27b-84a8eb637a3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.279 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/a3f24c7b-a065-417e-b63f-e5280c978ce3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.282 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e1044c-a9e9-4afb-ba0f-1173b504a1d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4503d79-8797-4df3-9d51-da0c5135f3c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.299 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca88b5-72fb-4d70-98af-ca3ce678198e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443110, 'reachable_time': 18325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300124, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfff13396\x2db787\x2d4c6e\x2d9112\x2da1c2ef57b26d.mount: Deactivated successfully.
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.305 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.306 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d906c437-f411-445b-9c8e-6f89f3c8402d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.483 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.484 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.484 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.485 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.485 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.487 2 INFO nova.compute.manager [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Terminating instance#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.488 2 DEBUG nova.compute.manager [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:50:48 np0005481065 kernel: tapa3944a31-95 (unregistering): left promiscuous mode
Oct 11 04:50:48 np0005481065 NetworkManager[44960]: <info>  [1760172648.5572] device (tapa3944a31-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:50:48 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:48Z|00220|binding|INFO|Releasing lport a3944a31-9560-49ae-b2a5-caaf2736993a from this chassis (sb_readonly=0)
Oct 11 04:50:48 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:48Z|00221|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a down in Southbound
Oct 11 04:50:48 np0005481065 ovn_controller[152945]: 2025-10-11T08:50:48Z|00222|binding|INFO|Removing iface tapa3944a31-95 ovn-installed in OVS
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.578 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.580 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.585 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.586 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47c47067-c013-4520-8ec0-27f42211686b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.589 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 namespace which is not needed anymore#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 04:50:48 np0005481065 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000016.scope: Consumed 13.347s CPU time.
Oct 11 04:50:48 np0005481065 systemd-machined[215705]: Machine qemu-34-instance-00000016 terminated.
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.696 2 DEBUG nova.compute.manager [req-1b10d1ec-fa32-4218-8cee-26d629b9362a req-332f5a06-9aab-409c-a07e-2821b75bd822 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-vif-deleted-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.710 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(854a61dcba4c481eaeae67e6378c78dd) on rbd image(ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.732 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.733 2 DEBUG nova.objects.instance [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.780 2 INFO nova.virt.libvirt.driver [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deleting instance files /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d_del#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.781 2 INFO nova.virt.libvirt.driver [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deletion of /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d_del complete#033[00m
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : haproxy version is 2.8.14-c23fe91
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : path to executable is /usr/sbin/haproxy
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [WARNING]  (292196) : Exiting Master process...
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [WARNING]  (292196) : Exiting Master process...
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.786 2 DEBUG nova.virt.libvirt.vif [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:35Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.787 2 DEBUG nova.network.os_vif_util [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.787 2 DEBUG nova.network.os_vif_util [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.788 2 DEBUG os_vif [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [ALERT]    (292196) : Current worker (292200) exited with code 143 (Terminated)
Oct 11 04:50:48 np0005481065 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [WARNING]  (292196) : All workers exited. Exiting... (0)
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3944a31-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 systemd[1]: libpod-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361.scope: Deactivated successfully.
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:50:48 np0005481065 podman[300176]: 2025-10-11 08:50:48.797684924 +0000 UTC m=+0.077755050 container died f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.801 2 INFO os_vif [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')#033[00m
Oct 11 04:50:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361-userdata-shm.mount: Deactivated successfully.
Oct 11 04:50:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-58f2e53c9b14a148f14c0e6dc8556f03c44bc8562cdbc60984e7f6e68a2d640e-merged.mount: Deactivated successfully.
Oct 11 04:50:48 np0005481065 podman[300176]: 2025-10-11 08:50:48.841197194 +0000 UTC m=+0.121267290 container cleanup f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:50:48 np0005481065 systemd[1]: libpod-conmon-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361.scope: Deactivated successfully.
Oct 11 04:50:48 np0005481065 podman[300237]: 2025-10-11 08:50:48.940543714 +0000 UTC m=+0.067536901 container remove f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04b1c17c-4135-4e95-b5b3-0f1af18bfd41]: (4, ('Sat Oct 11 08:50:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 (f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361)\nf161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361\nSat Oct 11 08:50:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 (f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361)\nf161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.951 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bca166af-9c40-4af7-8852-32af010c3f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:48 np0005481065 kernel: tap09ac2cb6-30: left promiscuous mode
Oct 11 04:50:48 np0005481065 nova_compute[260935]: 2025-10-11 08:50:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8311313f-54cd-4bfa-a9bc-8ca4aecf0738]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 11 04:50:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 11 04:50:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 11 04:50:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4216b2b7-ae28-44f6-af3d-8e82c78d00f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25c158f3-f214-464a-941d-3c1024ee8312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.056 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df40849c-0f0e-4786-a547-895e2c9e8969]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439097, 'reachable_time': 31436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300258, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.059 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:50:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.060 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccca476-69c9-4433-b63d-dbeac4f4f6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.080 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(a3f24c7b-a065-417e-b63f-e5280c978ce3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:50:49 np0005481065 systemd[1]: run-netns-ovnmeta\x2d09ac2cb6\x2d3e22\x2d4dd3\x2d895f\x2d47d3e5dc3cd5.mount: Deactivated successfully.
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.258 2 INFO nova.compute.manager [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 1.50 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.259 2 DEBUG oslo.service.loopingcall [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.261 2 DEBUG nova.compute.manager [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.261 2 DEBUG nova.network.neutron [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.283 2 INFO nova.virt.libvirt.driver [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting instance files /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.285 2 INFO nova.virt.libvirt.driver [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deletion of /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del complete#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.623 2 INFO nova.compute.manager [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.623 2 DEBUG oslo.service.loopingcall [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.624 2 DEBUG nova.compute.manager [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.624 2 DEBUG nova.network.neutron [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.821 2 DEBUG nova.compute.manager [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-unplugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.822 2 DEBUG oslo_concurrency.lockutils [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.823 2 DEBUG oslo_concurrency.lockutils [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.823 2 DEBUG oslo_concurrency.lockutils [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.824 2 DEBUG nova.compute.manager [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-unplugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.825 2 DEBUG nova.compute.manager [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-unplugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:50:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 319 op/s
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.890 2 DEBUG nova.network.neutron [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.908 2 INFO nova.compute.manager [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.965 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:49 np0005481065 nova_compute[260935]: 2025-10-11 08:50:49.966 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.031 2 DEBUG oslo_concurrency.processutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 11 04:50:50 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.228 2 DEBUG nova.network.neutron [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.255 2 INFO nova.compute.manager [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 0.63 seconds to deallocate network for instance.#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.302 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310658092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.526 2 DEBUG oslo_concurrency.processutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.536 2 DEBUG nova.compute.provider_tree [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.552 2 DEBUG nova.scheduler.client.report [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.578 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.582 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.602 2 INFO nova.scheduler.client.report [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance 14711e39-46ca-4856-9c19-fa51b869064d#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.671 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:50 np0005481065 nova_compute[260935]: 2025-10-11 08:50:50.684 2 DEBUG oslo_concurrency.processutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.000 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.001 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.002 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.002 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.003 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.003 2 WARNING nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.004 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.005 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.005 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.006 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.006 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.007 2 WARNING nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.007 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-deleted-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.008 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-deleted-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331782170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.169 2 DEBUG oslo_concurrency.processutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.178 2 DEBUG nova.compute.provider_tree [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.196 2 DEBUG nova.scheduler.client.report [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.225 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.262 2 INFO nova.scheduler.client.report [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.350 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.364 2 INFO nova.virt.libvirt.driver [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Snapshot image upload complete#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.365 2 INFO nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 4.62 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 04:50:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 340 op/s
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.911 2 DEBUG nova.compute.manager [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.912 2 DEBUG oslo_concurrency.lockutils [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.913 2 DEBUG oslo_concurrency.lockutils [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.913 2 DEBUG oslo_concurrency.lockutils [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.914 2 DEBUG nova.compute.manager [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.914 2 WARNING nova.compute.manager [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.934 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172636.9303, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.934 2 INFO nova.compute.manager [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:50:51 np0005481065 nova_compute[260935]: 2025-10-11 08:50:51.953 2 DEBUG nova.compute.manager [None req-095a72b4-cc85-4845-b60a-537360bbde7b - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:52 np0005481065 nova_compute[260935]: 2025-10-11 08:50:52.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:53 np0005481065 nova_compute[260935]: 2025-10-11 08:50:53.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Oct 11 04:50:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 11 04:50:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 11 04:50:54 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:54 np0005481065 podman[300321]: 2025-10-11 08:50:54.790026087 +0000 UTC m=+0.086883268 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:50:54
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups']
Oct 11 04:50:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.256 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.257 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.257 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.258 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.258 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.260 2 INFO nova.compute.manager [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Terminating instance#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.261 2 DEBUG nova.compute.manager [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.271 2 INFO nova.virt.libvirt.driver [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Instance destroyed successfully.#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.272 2 DEBUG nova.objects.instance [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.301 2 DEBUG nova.virt.libvirt.vif [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1714775256',display_name='tempest-ImagesTestJSON-server-1714775256',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1714775256',id=31,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-wz17gid5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:51Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=ac3851d8-5df2-4f84-9b28-a5fbf1c31b62,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.302 2 DEBUG nova.network.os_vif_util [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.303 2 DEBUG nova.network.os_vif_util [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.303 2 DEBUG os_vif [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.306 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd190526e-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.314 2 INFO os_vif [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b')
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.754 2 INFO nova.virt.libvirt.driver [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deleting instance files /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_del
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.756 2 INFO nova.virt.libvirt.driver [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deletion of /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_del complete
Oct 11 04:50:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 207 op/s
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.982 2 INFO nova.compute.manager [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.984 2 DEBUG oslo.service.loopingcall [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.984 2 DEBUG nova.compute.manager [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:50:55 np0005481065 nova_compute[260935]: 2025-10-11 08:50:55.985 2 DEBUG nova.network.neutron [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.129 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.130 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.203 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.297 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.298 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.305 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.306 2 INFO nova.compute.claims [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.474 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.620 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172641.6185648, b3e20035-c079-4ad0-a085-2086be520d1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.620 2 INFO nova.compute.manager [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Stopped (Lifecycle Event)
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.651 2 DEBUG nova.compute.manager [None req-79174a50-acde-48a1-8515-a057021d2fae - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.765 2 DEBUG nova.network.neutron [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.803 2 DEBUG nova.compute.manager [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-deleted-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.804 2 INFO nova.compute.manager [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Neutron deleted interface d190526e-2bf1-4e6c-925a-2cc0c2b359a8; detaching it from the instance and deleting it from the info cache
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.805 2 DEBUG nova.network.neutron [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.908 2 INFO nova.compute.manager [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 0.92 seconds to deallocate network for instance.
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.931 2 DEBUG nova.compute.manager [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Detach interface failed, port_id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8, reason: Instance ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 04:50:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755288498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.973 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:50:56 np0005481065 nova_compute[260935]: 2025-10-11 08:50:56.982 2 DEBUG nova.compute.provider_tree [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.141 2 DEBUG nova.scheduler.client.report [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.155 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.220 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.221 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.225 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.296 2 DEBUG oslo_concurrency.processutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.376 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.379 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.428 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.487 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.555 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.556 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.557 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.638 2 DEBUG nova.policy [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb27f51b5ffd414ab5ddbea179ada690', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:50:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993305814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.752 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.754 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.755 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Creating image(s)
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.793 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.831 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.864 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.868 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 249 op/s
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.899 2 DEBUG oslo_concurrency.processutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.907 2 DEBUG nova.compute.provider_tree [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.954 2 DEBUG nova.scheduler.client.report [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.959 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.962 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.962 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.963 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.991 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:50:57 np0005481065 nova_compute[260935]: 2025-10-11 08:50:57.995 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f842544-f85a-4c24-b273-8ae74177617e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.030 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.081 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172643.0801108, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.082 2 INFO nova.compute.manager [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Stopped (Lifecycle Event)
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.087 2 INFO nova.scheduler.client.report [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance ac3851d8-5df2-4f84-9b28-a5fbf1c31b62
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.111 2 DEBUG nova.compute.manager [None req-6564532c-d5ed-484a-84bd-c03507f21e67 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.182 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.183 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.210 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.226 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:50:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:50:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 11 04:50:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 11 04:50:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.287 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.288 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.296 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.296 2 INFO nova.compute.claims [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.304 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f842544-f85a-4c24-b273-8ae74177617e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.418 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] resizing rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:50:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:58.472 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:50:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:58.474 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:50:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:50:58.475 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.494 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.589 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully created port: fc76a4bd-0e3d-426e-820f-690283cf8257 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.601 2 DEBUG nova.objects.instance [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f842544-f85a-4c24-b273-8ae74177617e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.624 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.625 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Ensure instance console log exists: /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.626 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.627 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.627 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:50:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2964181207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.970 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.978 2 DEBUG nova.compute.provider_tree [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:50:58 np0005481065 nova_compute[260935]: 2025-10-11 08:50:58.996 2 DEBUG nova.scheduler.client.report [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.023 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.024 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.102 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.102 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.129 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.156 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.262 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172644.2619982, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.263 2 INFO nova.compute.manager [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.269 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.270 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.271 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Creating image(s)#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.302 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.338 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.373 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.379 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.421 2 DEBUG nova.compute.manager [None req-ec52a2b7-3f6f-42b8-96da-6fbf8a680e49 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.473 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.474 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.475 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.476 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.510 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.516 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.828 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:50:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 249 op/s
Oct 11 04:50:59 np0005481065 nova_compute[260935]: 2025-10-11 08:50:59.898 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.007 2 DEBUG nova.policy [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.022 2 DEBUG nova.objects.instance [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 5b50d851-e482-40a2-8b7d-d3eca87e15ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.128 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.130 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Ensure instance console log exists: /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.131 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.131 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.132 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.341 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully created port: ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.390 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172645.3893924, 6224c79a-8a36-490c-863a-67251512732f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.390 2 INFO nova.compute.manager [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:51:00 np0005481065 nova_compute[260935]: 2025-10-11 08:51:00.417 2 DEBUG nova.compute.manager [None req-e847c770-9a88-43a0-9686-fea569edcf50 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:01 np0005481065 nova_compute[260935]: 2025-10-11 08:51:01.220 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Successfully created port: 965f1a38-4159-41be-ac4a-f436a8ddeeab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:01 np0005481065 nova_compute[260935]: 2025-10-11 08:51:01.340 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully created port: e7343dc3-6cda-4dfb-8098-f021900f4584 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:01 np0005481065 podman[300759]: 2025-10-11 08:51:01.351185108 +0000 UTC m=+0.091374946 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 11 04:51:01 np0005481065 nova_compute[260935]: 2025-10-11 08:51:01.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 72 op/s
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.341 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Successfully updated port: 965f1a38-4159-41be-ac4a-f436a8ddeeab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.378 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.378 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.379 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.512 2 DEBUG nova.compute.manager [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-changed-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.513 2 DEBUG nova.compute.manager [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Refreshing instance network info cache due to event network-changed-965f1a38-4159-41be-ac4a-f436a8ddeeab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.514 2 DEBUG oslo_concurrency.lockutils [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.545 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully updated port: fc76a4bd-0e3d-426e-820f-690283cf8257 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.659 2 DEBUG nova.compute.manager [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-changed-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.660 2 DEBUG nova.compute.manager [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing instance network info cache due to event network-changed-fc76a4bd-0e3d-426e-820f-690283cf8257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.661 2 DEBUG oslo_concurrency.lockutils [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.661 2 DEBUG oslo_concurrency.lockutils [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.661 2 DEBUG nova.network.neutron [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing network info cache for port fc76a4bd-0e3d-426e-820f-690283cf8257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.930 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.996 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172647.9939656, 14711e39-46ca-4856-9c19-fa51b869064d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:02 np0005481065 nova_compute[260935]: 2025-10-11 08:51:02.997 2 INFO nova.compute.manager [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.033 2 DEBUG nova.compute.manager [None req-0d74b37d-7121-4557-81ca-1844e1d92960 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.122 2 DEBUG nova.network.neutron [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.238 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully updated port: ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.605 2 DEBUG nova.network.neutron [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.624 2 DEBUG oslo_concurrency.lockutils [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.727 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172648.7260673, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.727 2 INFO nova.compute.manager [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:51:03 np0005481065 nova_compute[260935]: 2025-10-11 08:51:03.744 2 DEBUG nova.compute.manager [None req-261660aa-ca60-4ec4-863f-04c67f450651 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.041 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully updated port: e7343dc3-6cda-4dfb-8098-f021900f4584 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.056 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.056 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.056 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.164 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updating instance_info_cache with network_info: [{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.194 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.194 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance network_info: |[{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.195 2 DEBUG oslo_concurrency.lockutils [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.196 2 DEBUG nova.network.neutron [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Refreshing network info cache for port 965f1a38-4159-41be-ac4a-f436a8ddeeab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.201 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start _get_guest_xml network_info=[{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.208 2 WARNING nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.213 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.213 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.218 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.219 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.220 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.221 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.221 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.222 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.223 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.223 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.224 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.224 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.225 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.225 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.226 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.226 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.231 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.377 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:51:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:51:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268172853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.701 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.732 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.737 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.875 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-changed-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.876 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing instance network info cache due to event network-changed-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:04 np0005481065 nova_compute[260935]: 2025-10-11 08:51:04.877 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842549304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.236 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.238 2 DEBUG nova.virt.libvirt.vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1550870837',display_name='tempest-ImagesTestJSON-server-1550870837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1550870837',id=33,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-jse0ea25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagLis
t,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:59Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=5b50d851-e482-40a2-8b7d-d3eca87e15ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.239 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.240 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.242 2 DEBUG nova.objects.instance [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b50d851-e482-40a2-8b7d-d3eca87e15ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.257 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <uuid>5b50d851-e482-40a2-8b7d-d3eca87e15ab</uuid>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <name>instance-00000021</name>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesTestJSON-server-1550870837</nova:name>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:04</nova:creationTime>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <nova:port uuid="965f1a38-4159-41be-ac4a-f436a8ddeeab">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <entry name="serial">5b50d851-e482-40a2-8b7d-d3eca87e15ab</entry>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <entry name="uuid">5b50d851-e482-40a2-8b7d-d3eca87e15ab</entry>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:fe:8d:cb"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <target dev="tap965f1a38-41"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/console.log" append="off"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:05 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:05 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:05 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:05 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.259 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Preparing to wait for external event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.259 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.260 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.260 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.262 2 DEBUG nova.virt.libvirt.vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1550870837',display_name='tempest-ImagesTestJSON-server-1550870837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1550870837',id=33,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-jse0ea25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:59Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=5b50d851-e482-40a2-8b7d-d3eca87e15ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.262 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.264 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.264 2 DEBUG os_vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap965f1a38-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap965f1a38-41, col_values=(('external_ids', {'iface-id': '965f1a38-4159-41be-ac4a-f436a8ddeeab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:8d:cb', 'vm-uuid': '5b50d851-e482-40a2-8b7d-d3eca87e15ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:05 np0005481065 NetworkManager[44960]: <info>  [1760172665.2786] manager: (tap965f1a38-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.285 2 INFO os_vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41')#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.336 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.337 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.337 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:fe:8d:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.337 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Using config drive#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.357 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:05 np0005481065 nova_compute[260935]: 2025-10-11 08:51:05.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:05 np0005481065 podman[300863]: 2025-10-11 08:51:05.795286826 +0000 UTC m=+0.092387854 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:51:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Oct 11 04:51:05 np0005481065 podman[300864]: 2025-10-11 08:51:05.904237137 +0000 UTC m=+0.190528439 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.308 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Creating config drive at /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.317 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkpppzls execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.483 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkpppzls" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.524 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.530 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.640 2 DEBUG nova.network.neutron [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updated VIF entry in instance network info cache for port 965f1a38-4159-41be-ac4a-f436a8ddeeab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.642 2 DEBUG nova.network.neutron [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updating instance_info_cache with network_info: [{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.669 2 DEBUG oslo_concurrency.lockutils [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.740 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.742 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deleting local config drive /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:06 np0005481065 kernel: tap965f1a38-41: entered promiscuous mode
Oct 11 04:51:06 np0005481065 NetworkManager[44960]: <info>  [1760172666.8344] manager: (tap965f1a38-41): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 11 04:51:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:06Z|00223|binding|INFO|Claiming lport 965f1a38-4159-41be-ac4a-f436a8ddeeab for this chassis.
Oct 11 04:51:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:06Z|00224|binding|INFO|965f1a38-4159-41be-ac4a-f436a8ddeeab: Claiming fa:16:3e:fe:8d:cb 10.100.0.4
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.845 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.846 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.859 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:8d:cb 10.100.0.4'], port_security=['fa:16:3e:fe:8d:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5b50d851-e482-40a2-8b7d-d3eca87e15ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=965f1a38-4159-41be-ac4a-f436a8ddeeab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.863 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 965f1a38-4159-41be-ac4a-f436a8ddeeab in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.867 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756#033[00m
Oct 11 04:51:06 np0005481065 systemd-machined[215705]: New machine qemu-36-instance-00000021.
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.883 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.884 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.884 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.885 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.885 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.887 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[787f1b69-f375-43c4-a19f-5ece67f315f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.888 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.891 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.892 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b871840-0dd2-44a0-85de-a61029bb7b63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fec92914-6cc6-45fa-8fbf-b07947d78941]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:06 np0005481065 systemd[1]: Started Virtual Machine qemu-36-instance-00000021.
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.918 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eed645-38f4-4599-9de2-d9f1dddbead8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:06 np0005481065 systemd-udevd[300967]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.945 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e495f27d-00b3-4781-bcc8-91e8cb728e4d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:06Z|00225|binding|INFO|Setting lport 965f1a38-4159-41be-ac4a-f436a8ddeeab ovn-installed in OVS
Oct 11 04:51:06 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:06Z|00226|binding|INFO|Setting lport 965f1a38-4159-41be-ac4a-f436a8ddeeab up in Southbound
Oct 11 04:51:06 np0005481065 NetworkManager[44960]: <info>  [1760172666.9677] device (tap965f1a38-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:06 np0005481065 NetworkManager[44960]: <info>  [1760172666.9689] device (tap965f1a38-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:06 np0005481065 nova_compute[260935]: 2025-10-11 08:51:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.018 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[67629c31-57ac-4a6c-95cc-174576f13a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 NetworkManager[44960]: <info>  [1760172667.0267] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f29547cb-51af-429d-b8aa-54774c12e587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.072 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f90fed78-ac6d-4c6a-8b46-95e384ba3f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.077 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[48a110d0-07fb-44a0-96e4-417395ba55f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 NetworkManager[44960]: <info>  [1760172667.0996] device (tap9bac3530-90): carrier: link connected
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.107 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[813b24d3-868d-4b23-aa16-4c39337c297c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71b1681a-d9c9-4a5d-840f-cb5ee210c9c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301016, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.169 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca70518-aa25-4ddf-a009-be4a50d947df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451180, 'tstamp': 451180}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301017, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.195 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e6f817-f1d6-41bc-b3f4-c45673356e0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301018, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.217 2 DEBUG nova.compute.manager [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.218 2 DEBUG oslo_concurrency.lockutils [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.219 2 DEBUG oslo_concurrency.lockutils [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.219 2 DEBUG oslo_concurrency.lockutils [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.220 2 DEBUG nova.compute.manager [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Processing event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5e61dd-c6ff-4d70-8233-38f34816048c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c310fb4-bcd4-4d25-be54-714b08a1651e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.319 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.319 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.320 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:07 np0005481065 NetworkManager[44960]: <info>  [1760172667.3240] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 11 04:51:07 np0005481065 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.334 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:07Z|00227|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.338 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c05c43f7-be57-49bc-9e5e-4e788d2f48cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.341 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.343 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145660885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.423 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.504 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.505 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.796 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.798 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4283MB free_disk=59.946773529052734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.798 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.851 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172667.850748, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.852 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.856 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.867 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.875 2 INFO nova.virt.libvirt.driver [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance spawned successfully.#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.878 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:51:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.3 MiB/s wr, 76 op/s
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.892 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 9f842544-f85a-4c24-b273-8ae74177617e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.897 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 5b50d851-e482-40a2-8b7d-d3eca87e15ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.897 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.898 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:51:07 np0005481065 podman[301096]: 2025-10-11 08:51:07.901012376 +0000 UTC m=+0.083506433 container create 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.923 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.925 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.925 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.925 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.926 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.926 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.929 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.930 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172667.8523057, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.930 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:07 np0005481065 systemd[1]: Started libpod-conmon-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope.
Oct 11 04:51:07 np0005481065 podman[301096]: 2025-10-11 08:51:07.863295339 +0000 UTC m=+0.045789396 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.970 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81784f68462ca80ca6e90ba3c15edeb0a8d5f87b882ef7fdf8a487f1955dcf4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.975 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172667.8645847, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.984 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:51:07 np0005481065 nova_compute[260935]: 2025-10-11 08:51:07.987 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:07 np0005481065 podman[301096]: 2025-10-11 08:51:07.992257866 +0000 UTC m=+0.174751903 container init 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:51:08 np0005481065 podman[301096]: 2025-10-11 08:51:08.002157216 +0000 UTC m=+0.184651243 container start 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.034 2 INFO nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 8.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.035 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:08 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : New worker (301118) forked
Oct 11 04:51:08 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : Loading success.
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.055 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.060 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.079 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.106 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.120 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.121 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance network_info: |[{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.122 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.123 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing network info cache for port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.130 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start _get_guest_xml network_info=[{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.146 2 INFO nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 9.87 seconds to build instance.#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.151 2 WARNING nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.164 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.165 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.169 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.170 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.171 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.172 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.173 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.173 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.174 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.175 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.175 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.176 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.176 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.177 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.177 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.178 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.184 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.235 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773337102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.506 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.514 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.537 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.572 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.573 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450823365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.717 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.759 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:08 np0005481065 nova_compute[260935]: 2025-10-11 08:51:08.769 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478334644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.290 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.294 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.295 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.297 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.299 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.299 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.301 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.302 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.303 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.304 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.306 2 DEBUG nova.objects.instance [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f842544-f85a-4c24-b273-8ae74177617e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.313 2 DEBUG nova.compute.manager [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.313 2 DEBUG oslo_concurrency.lockutils [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.315 2 DEBUG oslo_concurrency.lockutils [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.316 2 DEBUG oslo_concurrency.lockutils [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.317 2 DEBUG nova.compute.manager [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] No waiting events found dispatching network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.317 2 WARNING nova.compute.manager [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received unexpected event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab for instance with vm_state active and task_state None.#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.331 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <uuid>9f842544-f85a-4c24-b273-8ae74177617e</uuid>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <name>instance-00000020</name>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersTestMultiNic-server-358627564</nova:name>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:08</nova:creationTime>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:user uuid="fb27f51b5ffd414ab5ddbea179ada690">tempest-ServersTestMultiNic-65661968-project-member</nova:user>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:project uuid="f84de17ba2c5470fbc4c7fe809e7d7b7">tempest-ServersTestMultiNic-65661968</nova:project>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:port uuid="fc76a4bd-0e3d-426e-820f-690283cf8257">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.51" ipVersion="4"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:port uuid="ccf53fcf-c7bd-41f4-986b-d90fd701f3e4">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.1.42" ipVersion="4"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <nova:port uuid="e7343dc3-6cda-4dfb-8098-f021900f4584">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.254" ipVersion="4"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <entry name="serial">9f842544-f85a-4c24-b273-8ae74177617e</entry>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <entry name="uuid">9f842544-f85a-4c24-b273-8ae74177617e</entry>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/9f842544-f85a-4c24-b273-8ae74177617e_disk">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/9f842544-f85a-4c24-b273-8ae74177617e_disk.config">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:91:1c:e4"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <target dev="tapfc76a4bd-0e"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:82:ff:9f"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <target dev="tapccf53fcf-c7"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:25:95:fe"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <target dev="tape7343dc3-6c"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/console.log" append="off"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:09 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:09 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:09 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:09 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.343 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Preparing to wait for external event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.344 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.344 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.344 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Preparing to wait for external event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.346 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Preparing to wait for external event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.346 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.346 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.347 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.348 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.348 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.350 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.351 2 DEBUG os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc76a4bd-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc76a4bd-0e, col_values=(('external_ids', {'iface-id': 'fc76a4bd-0e3d-426e-820f-690283cf8257', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:1c:e4', 'vm-uuid': '9f842544-f85a-4c24-b273-8ae74177617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 NetworkManager[44960]: <info>  [1760172669.3947] manager: (tapfc76a4bd-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.402 2 INFO os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e')#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.404 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.404 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.405 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.405 2 DEBUG os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccf53fcf-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccf53fcf-c7, col_values=(('external_ids', {'iface-id': 'ccf53fcf-c7bd-41f4-986b-d90fd701f3e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:ff:9f', 'vm-uuid': '9f842544-f85a-4c24-b273-8ae74177617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 NetworkManager[44960]: <info>  [1760172669.4127] manager: (tapccf53fcf-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.420 2 INFO os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7')#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.421 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.422 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.423 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.423 2 DEBUG os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7343dc3-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7343dc3-6c, col_values=(('external_ids', {'iface-id': 'e7343dc3-6cda-4dfb-8098-f021900f4584', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:95:fe', 'vm-uuid': '9f842544-f85a-4c24-b273-8ae74177617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 NetworkManager[44960]: <info>  [1760172669.4304] manager: (tape7343dc3-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.433 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.442 2 INFO os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c')#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.491 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.492 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.493 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:91:1c:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.494 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:82:ff:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.494 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:25:95:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.495 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Using config drive#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.523 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.618 2 DEBUG nova.compute.manager [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:09 np0005481065 nova_compute[260935]: 2025-10-11 08:51:09.658 2 INFO nova.compute.manager [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] instance snapshotting#033[00m
Oct 11 04:51:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 MiB/s wr, 66 op/s
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.285 2 INFO nova.virt.libvirt.driver [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Beginning live snapshot process#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.442 2 DEBUG nova.virt.libvirt.imagebackend [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.727 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Creating config drive at /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.737 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3s5gi1id execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.781 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(e9e22bb70e184ce295ed0bc013a73ecc) on rbd image(5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.823 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updated VIF entry in instance network info cache for port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.824 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.843 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.844 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-changed-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.844 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing instance network info cache due to event network-changed-e7343dc3-6cda-4dfb-8098-f021900f4584. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.845 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.845 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.846 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing network info cache for port e7343dc3-6cda-4dfb-8098-f021900f4584 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.890 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3s5gi1id" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.921 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:10 np0005481065 nova_compute[260935]: 2025-10-11 08:51:10.925 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config 9f842544-f85a-4c24-b273-8ae74177617e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.123 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config 9f842544-f85a-4c24-b273-8ae74177617e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.125 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deleting local config drive /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:11 np0005481065 kernel: tapfc76a4bd-0e: entered promiscuous mode
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2039] manager: (tapfc76a4bd-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00228|binding|INFO|Claiming lport fc76a4bd-0e3d-426e-820f-690283cf8257 for this chassis.
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00229|binding|INFO|fc76a4bd-0e3d-426e-820f-690283cf8257: Claiming fa:16:3e:91:1c:e4 10.100.0.51
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.227 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:1c:e4 10.100.0.51'], port_security=['fa:16:3e:91:1c:e4 10.100.0.51'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.51/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fc76a4bd-0e3d-426e-820f-690283cf8257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.231 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fc76a4bd-0e3d-426e-820f-690283cf8257 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 bound to our chassis#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.235 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da40451d-49f4-4bd4-b0a3-55dc537c2426#033[00m
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2437] manager: (tapccf53fcf-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct 11 04:51:11 np0005481065 systemd-udevd[301342]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:11 np0005481065 systemd-udevd[301343]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2556687f-95e4-467a-8fef-578408904368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.257 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda40451d-41 in ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.260 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda40451d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.260 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a137fd8-5286-4a46-acde-23773f86b42e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09757eba-1313-472c-b9d9-0e918b97f3e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2644] manager: (tape7343dc3-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Oct 11 04:51:11 np0005481065 systemd-udevd[301350]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2707] device (tapfc76a4bd-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2737] device (tapfc76a4bd-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:11 np0005481065 kernel: tape7343dc3-6c: entered promiscuous mode
Oct 11 04:51:11 np0005481065 kernel: tapccf53fcf-c7: entered promiscuous mode
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.286 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a8443c34-180e-4f76-bcb3-6392957ddfd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2922] device (tapccf53fcf-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.2936] device (tapccf53fcf-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00230|binding|INFO|Claiming lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for this chassis.
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00231|binding|INFO|ccf53fcf-c7bd-41f4-986b-d90fd701f3e4: Claiming fa:16:3e:82:ff:9f 10.100.1.42
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00232|binding|INFO|Claiming lport e7343dc3-6cda-4dfb-8098-f021900f4584 for this chassis.
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00233|binding|INFO|e7343dc3-6cda-4dfb-8098-f021900f4584: Claiming fa:16:3e:25:95:fe 10.100.0.254
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00234|binding|INFO|Setting lport fc76a4bd-0e3d-426e-820f-690283cf8257 ovn-installed in OVS
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.3060] device (tape7343dc3-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.3067] device (tape7343dc3-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.306 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:95:fe 10.100.0.254'], port_security=['fa:16:3e:25:95:fe 10.100.0.254'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.254/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e7343dc3-6cda-4dfb-8098-f021900f4584) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00235|binding|INFO|Setting lport fc76a4bd-0e3d-426e-820f-690283cf8257 up in Southbound
Oct 11 04:51:11 np0005481065 systemd-machined[215705]: New machine qemu-37-instance-00000020.
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.309 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:9f 10.100.1.42'], port_security=['fa:16:3e:82:ff:9f 10.100.1.42'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.42/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27019660-0844-42c7-bd58-04b2d25d3924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9317b60-e04e-4d9d-813e-91a63ec5f815, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 11 04:51:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 11 04:51:11 np0005481065 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.329 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f35ac527-b444-4778-90bc-68a5c944f4cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00236|binding|INFO|Setting lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 ovn-installed in OVS
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00237|binding|INFO|Setting lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 up in Southbound
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00238|binding|INFO|Setting lport e7343dc3-6cda-4dfb-8098-f021900f4584 ovn-installed in OVS
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00239|binding|INFO|Setting lport e7343dc3-6cda-4dfb-8098-f021900f4584 up in Southbound
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.378 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ac2fdb-42af-4a47-b507-3a8e6d5caaa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.3846] manager: (tapda40451d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0689018-518e-43f0-b60a-aade93807e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.423 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ef97727d-f9d1-4c49-a7ed-b9162a063e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.418 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk@e9e22bb70e184ce295ed0bc013a73ecc to images/70bd8f23-d068-4d06-af15-566e76e92803 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.427 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c36bb27c-3fac-45db-a947-1782b415092d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.4563] device (tapda40451d-40): carrier: link connected
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.465 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cb883f38-97df-4436-ae85-263103cbb630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.489 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[48d0c9e1-71ea-49b6-a991-d98eb40540e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301414, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.512 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e139af0-001f-416f-b6a6-dfb0a8a353d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:aa2c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451615, 'tstamp': 451615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301418, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb92fa7d-1764-471d-adf5-91a3c509b982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301422, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.559 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/70bd8f23-d068-4d06-af15-566e76e92803 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.566 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a12dcff3-c161-4d61-923a-418ca6578139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.629 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d07bc9c-23e5-4869-8c45-32262c977aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda40451d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:11 np0005481065 kernel: tapda40451d-40: entered promiscuous mode
Oct 11 04:51:11 np0005481065 NetworkManager[44960]: <info>  [1760172671.6340] manager: (tapda40451d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.640 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda40451d-40, col_values=(('external_ids', {'iface-id': 'cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:11Z|00240|binding|INFO|Releasing lport cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493 from this chassis (sb_readonly=0)
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.664 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da40451d-49f4-4bd4-b0a3-55dc537c2426.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da40451d-49f4-4bd4-b0a3-55dc537c2426.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.666 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e8453c27-b788-40f9-863e-003194fe586f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.667 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/da40451d-49f4-4bd4-b0a3-55dc537c2426.pid.haproxy
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.671 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'env', 'PROCESS_TAG=haproxy-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da40451d-49f4-4bd4-b0a3-55dc537c2426.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.826 2 DEBUG nova.compute.manager [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.827 2 DEBUG oslo_concurrency.lockutils [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.827 2 DEBUG oslo_concurrency.lockutils [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.828 2 DEBUG oslo_concurrency.lockutils [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.828 2 DEBUG nova.compute.manager [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Processing event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:11 np0005481065 nova_compute[260935]: 2025-10-11 08:51:11.848 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(e9e22bb70e184ce295ed0bc013a73ecc) on rbd image(5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:51:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.3 MiB/s wr, 76 op/s
Oct 11 04:51:12 np0005481065 podman[301534]: 2025-10-11 08:51:12.18295314 +0000 UTC m=+0.089126812 container create f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:51:12 np0005481065 podman[301534]: 2025-10-11 08:51:12.140569061 +0000 UTC m=+0.046742813 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:12 np0005481065 systemd[1]: Started libpod-conmon-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1.scope.
Oct 11 04:51:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 11 04:51:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 11 04:51:12 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 11 04:51:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6a6ed8c063f0dc224c05dfdf416baf50adf8b36b436263058ef960624c7842/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:12 np0005481065 podman[301534]: 2025-10-11 08:51:12.369018662 +0000 UTC m=+0.275192354 container init f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:51:12 np0005481065 podman[301534]: 2025-10-11 08:51:12.377156952 +0000 UTC m=+0.283330614 container start f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.394 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(70bd8f23-d068-4d06-af15-566e76e92803) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:51:12 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : New worker (301562) forked
Oct 11 04:51:12 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : Loading success.
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.439 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e7343dc3-6cda-4dfb-8098-f021900f4584 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 unbound from our chassis#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.441 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da40451d-49f4-4bd4-b0a3-55dc537c2426#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8b592b-fcf7-45a1-abc8-9c3740b9a8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.472 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172672.4690866, 9f842544-f85a-4c24-b273-8ae74177617e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.473 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.491 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a79022e0-fead-4236-9f09-46164e640d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.493 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[69bfc85b-a645-4ac1-bdec-74059b25d530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.517 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c06937dd-ba60-4111-9695-dd976bb68aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.551 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.552 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b88b38b1-e6bd-495c-a059-296d53140c9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301587, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.557 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172672.469218, 9f842544-f85a-4c24-b273-8ae74177617e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.557 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.573 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67a45851-a734-4a69-9e91-ce6d1ac5a3e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451629, 'tstamp': 451629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301588, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451632, 'tstamp': 451632}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301588, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.575 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.578 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda40451d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.578 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.579 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda40451d-40, col_values=(('external_ids', {'iface-id': 'cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.579 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.580 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 in datapath 27019660-0844-42c7-bd58-04b2d25d3924 unbound from our chassis#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.581 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27019660-0844-42c7-bd58-04b2d25d3924#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.592 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4151532-32bf-4c1f-8a6e-d1a11a1d7552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.593 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27019660-01 in ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.595 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27019660-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[580fa3e6-d680-47fb-bff6-4f9f24ab8ab1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.596 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d62af85d-ff44-4c4a-a208-d3fb8291bfd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.612 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c5598702-4183-4056-9f01-3feced8a54cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.626 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[45af3767-0a04-4307-92d6-94d9f61179b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.659 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f25ac40c-9f29-4eed-8f22-f2d4012cef7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.665 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updated VIF entry in instance network info cache for port e7343dc3-6cda-4dfb-8098-f021900f4584. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.666 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:12 np0005481065 NetworkManager[44960]: <info>  [1760172672.6728] manager: (tap27019660-00): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.673 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.675 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a38e30b4-abd0-4e23-9bb5-cfbc846c7ec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.679 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.728 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee7699d-1257-46bb-9e13-335f44ae0a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.732 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[97dae52c-35d0-45d3-a80d-a0ccbae59aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 NetworkManager[44960]: <info>  [1760172672.7707] device (tap27019660-00): carrier: link connected
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.785 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2a2e22-4e0f-4a9d-b377-eba7663ac74a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.809 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eebbcccc-1380-4ac1-8ea6-7e9fb3b927a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27019660-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:7b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451747, 'reachable_time': 29931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301599, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.817 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:12 np0005481065 nova_compute[260935]: 2025-10-11 08:51:12.825 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.847 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f57e9098-2805-4da3-a54d-6db66157f808]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:7b58'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451747, 'tstamp': 451747}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301600, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[600b0997-276f-4744-9ca5-dd91644b8e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27019660-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:7b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451747, 'reachable_time': 29931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301601, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3c524b-99ba-4599-807b-75be3ff6e917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc05be55-ffa4-4258-b3d2-c5f461fad75b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.030 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27019660-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.031 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.031 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27019660-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:13 np0005481065 kernel: tap27019660-00: entered promiscuous mode
Oct 11 04:51:13 np0005481065 NetworkManager[44960]: <info>  [1760172673.0357] manager: (tap27019660-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct 11 04:51:13 np0005481065 nova_compute[260935]: 2025-10-11 08:51:13.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:13 np0005481065 nova_compute[260935]: 2025-10-11 08:51:13.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.040 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27019660-00, col_values=(('external_ids', {'iface-id': 'a330e7c0-3dc2-4e01-abd9-1653b7179f53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:13 np0005481065 nova_compute[260935]: 2025-10-11 08:51:13.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:13 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:13Z|00241|binding|INFO|Releasing lport a330e7c0-3dc2-4e01-abd9-1653b7179f53 from this chassis (sb_readonly=0)
Oct 11 04:51:13 np0005481065 nova_compute[260935]: 2025-10-11 08:51:13.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.080 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27019660-0844-42c7-bd58-04b2d25d3924.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27019660-0844-42c7-bd58-04b2d25d3924.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.081 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c88b77-53ba-417d-8972-3d12ff190c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.082 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-27019660-0844-42c7-bd58-04b2d25d3924
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/27019660-0844-42c7-bd58-04b2d25d3924.pid.haproxy
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 27019660-0844-42c7-bd58-04b2d25d3924
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.083 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'env', 'PROCESS_TAG=haproxy-27019660-0844-42c7-bd58-04b2d25d3924', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27019660-0844-42c7-bd58-04b2d25d3924.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 11 04:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 11 04:51:13 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 11 04:51:13 np0005481065 podman[301633]: 2025-10-11 08:51:13.546042697 +0000 UTC m=+0.064831134 container create 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 04:51:13 np0005481065 podman[301633]: 2025-10-11 08:51:13.50971992 +0000 UTC m=+0.028508397 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:13 np0005481065 systemd[1]: Started libpod-conmon-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650.scope.
Oct 11 04:51:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691cf546be102ecd2a636c399d183790889e1bda6debfc8222fabd7050af86c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:13 np0005481065 podman[301633]: 2025-10-11 08:51:13.675317043 +0000 UTC m=+0.194105480 container init 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:51:13 np0005481065 podman[301633]: 2025-10-11 08:51:13.686250132 +0000 UTC m=+0.205038569 container start 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 04:51:13 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : New worker (301654) forked
Oct 11 04:51:13 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : Loading success.
Oct 11 04:51:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 181 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.302 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.303 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.303 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.304 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.304 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No event matching network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 in dict_keys([('network-vif-plugged', 'fc76a4bd-0e3d-426e-820f-690283cf8257'), ('network-vif-plugged', 'ccf53fcf-c7bd-41f4-986b-d90fd701f3e4')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.305 2 WARNING nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.305 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.306 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.306 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.307 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.307 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Processing event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.308 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.308 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.309 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.309 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.310 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No event matching network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 in dict_keys([('network-vif-plugged', 'fc76a4bd-0e3d-426e-820f-690283cf8257')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.310 2 WARNING nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for instance with vm_state building and task_state spawning.
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.892 2 INFO nova.virt.libvirt.driver [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Snapshot image upload complete
Oct 11 04:51:14 np0005481065 nova_compute[260935]: 2025-10-11 08:51:14.893 2 INFO nova.compute.manager [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 5.23 seconds to snapshot the instance on the hypervisor.
Oct 11 04:51:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:15.185 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:15.186 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:15.187 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.650 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.651 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.670 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.761 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.762 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.772 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.772 2 INFO nova.compute.claims [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:51:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 181 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Oct 11 04:51:15 np0005481065 nova_compute[260935]: 2025-10-11 08:51:15.896 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506380279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.374 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.383 2 DEBUG nova.compute.provider_tree [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.400 2 DEBUG nova.scheduler.client.report [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.423 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.423 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.478 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.479 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.505 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.527 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.652 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.654 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.654 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Creating image(s)
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.694 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.727 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.757 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.762 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.845 2 DEBUG nova.policy [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f961a579f0a74ab3a913fc3b21acea43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '80ef0690d9e94d289f05d85941ef7154', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.856 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.856 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.857 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.857 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.880 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:16 np0005481065 nova_compute[260935]: 2025-10-11 08:51:16.884 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.175 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.247 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] resizing rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.365 2 DEBUG nova.objects.instance [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'migration_context' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.380 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.381 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Ensure instance console log exists: /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.381 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.381 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.382 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.548 2 DEBUG nova.compute.manager [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.549 2 DEBUG oslo_concurrency.lockutils [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.549 2 DEBUG oslo_concurrency.lockutils [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.550 2 DEBUG oslo_concurrency.lockutils [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.550 2 DEBUG nova.compute.manager [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Processing event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.552 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.557 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172677.5567293, 9f842544-f85a-4c24-b273-8ae74177617e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.558 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Resumed (Lifecycle Event)
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.563 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.569 2 INFO nova.virt.libvirt.driver [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance spawned successfully.
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.569 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.589 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.600 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully created port: ab7592f9-1746-47d1-a702-b7be704ccabb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.610 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.616 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.617 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.618 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.619 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.620 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.621 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.632 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.685 2 INFO nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 19.93 seconds to spawn the instance on the hypervisor.
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.686 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.758 2 INFO nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 21.49 seconds to build instance.
Oct 11 04:51:17 np0005481065 nova_compute[260935]: 2025-10-11 08:51:17.776 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.5 MiB/s wr, 303 op/s
Oct 11 04:51:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.830 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully updated port: ab7592f9-1746-47d1-a702-b7be704ccabb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.848 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.848 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.848 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.973 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.973 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.977 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:18 np0005481065 nova_compute[260935]: 2025-10-11 08:51:18.988 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.063 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.064 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.070 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.070 2 INFO nova.compute.claims [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.251 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.514 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.516 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.516 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.517 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.518 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.520 2 INFO nova.compute.manager [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Terminating instance#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.522 2 DEBUG nova.compute.manager [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:51:19 np0005481065 kernel: tapfc76a4bd-0e (unregistering): left promiscuous mode
Oct 11 04:51:19 np0005481065 NetworkManager[44960]: <info>  [1760172679.5746] device (tapfc76a4bd-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00242|binding|INFO|Releasing lport fc76a4bd-0e3d-426e-820f-690283cf8257 from this chassis (sb_readonly=0)
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00243|binding|INFO|Setting lport fc76a4bd-0e3d-426e-820f-690283cf8257 down in Southbound
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00244|binding|INFO|Removing iface tapfc76a4bd-0e ovn-installed in OVS
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.606 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:1c:e4 10.100.0.51'], port_security=['fa:16:3e:91:1c:e4 10.100.0.51'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.51/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fc76a4bd-0e3d-426e-820f-690283cf8257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:19 np0005481065 kernel: tapccf53fcf-c7 (unregistering): left promiscuous mode
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.609 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fc76a4bd-0e3d-426e-820f-690283cf8257 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 unbound from our chassis#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.612 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da40451d-49f4-4bd4-b0a3-55dc537c2426#033[00m
Oct 11 04:51:19 np0005481065 NetworkManager[44960]: <info>  [1760172679.6136] device (tapccf53fcf-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.628 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.629 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.630 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.630 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.631 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.631 2 WARNING nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.632 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-changed-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.632 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing instance network info cache due to event network-changed-ab7592f9-1746-47d1-a702-b7be704ccabb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.633 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fca451-eb63-497d-9a87-e058da276a0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 kernel: tape7343dc3-6c (unregistering): left promiscuous mode
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00245|binding|INFO|Releasing lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 from this chassis (sb_readonly=0)
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00246|binding|INFO|Setting lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 down in Southbound
Oct 11 04:51:19 np0005481065 NetworkManager[44960]: <info>  [1760172679.6559] device (tape7343dc3-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00247|binding|INFO|Removing iface tapccf53fcf-c7 ovn-installed in OVS
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.666 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:9f 10.100.1.42'], port_security=['fa:16:3e:82:ff:9f 10.100.1.42'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.42/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27019660-0844-42c7-bd58-04b2d25d3924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9317b60-e04e-4d9d-813e-91a63ec5f815, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.694 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb38bcc-3a96-441a-882a-99b99561bdb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00248|binding|INFO|Releasing lport e7343dc3-6cda-4dfb-8098-f021900f4584 from this chassis (sb_readonly=0)
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00249|binding|INFO|Setting lport e7343dc3-6cda-4dfb-8098-f021900f4584 down in Southbound
Oct 11 04:51:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:19Z|00250|binding|INFO|Removing iface tape7343dc3-6c ovn-installed in OVS
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.741 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[477eba71-19c7-4bae-903d-b494dfb7e0d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.748 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:95:fe 10.100.0.254'], port_security=['fa:16:3e:25:95:fe 10.100.0.254'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.254/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e7343dc3-6cda-4dfb-8098-f021900f4584) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:19 np0005481065 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 11 04:51:19 np0005481065 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 2.954s CPU time.
Oct 11 04:51:19 np0005481065 systemd-machined[215705]: Machine qemu-37-instance-00000020 terminated.
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.763 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882468972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.790 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.790 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance network_info: |[{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.791 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.791 2 DEBUG nova.network.neutron [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing network info cache for port ab7592f9-1746-47d1-a702-b7be704ccabb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.794 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start _get_guest_xml network_info=[{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.794 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dd199b23-9fef-4ff7-a523-1a5d7345b13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.800 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.808 2 WARNING nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2420e570-82e8-48f0-9bc5-315da4b0a9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301893, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.819 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.820 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.822 2 DEBUG nova.compute.provider_tree [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.828 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.829 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.831 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.831 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.832 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.834 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.835 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.835 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.836 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.837 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.838 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.839 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.839 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.840 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.839 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[975c810d-f9f5-40c4-96a9-23d9ab3a32a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451629, 'tstamp': 451629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301894, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451632, 'tstamp': 451632}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301894, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.843 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.844 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.860 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda40451d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.860 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.861 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda40451d-40, col_values=(('external_ids', {'iface-id': 'cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.861 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.862 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 in datapath 27019660-0844-42c7-bd58-04b2d25d3924 unbound from our chassis#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.863 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27019660-0844-42c7-bd58-04b2d25d3924, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.866 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d409476e-b62c-46cb-ae74-92271b8f5ffc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.867 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 namespace which is not needed anymore#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.880 2 DEBUG nova.scheduler.client.report [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.3 MiB/s wr, 248 op/s
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.901 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.903 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.947 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.949 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:51:19 np0005481065 NetworkManager[44960]: <info>  [1760172679.9563] manager: (tapccf53fcf-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.972 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:51:19 np0005481065 NetworkManager[44960]: <info>  [1760172679.9734] manager: (tape7343dc3-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Oct 11 04:51:19 np0005481065 nova_compute[260935]: 2025-10-11 08:51:19.988 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.009 2 INFO nova.virt.libvirt.driver [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance destroyed successfully.#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.013 2 DEBUG nova.objects.instance [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'resources' on Instance uuid 9f842544-f85a-4c24-b273-8ae74177617e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.030 2 DEBUG nova.virt.libvirt.vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:17Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.031 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.033 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.034 2 DEBUG os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : haproxy version is 2.8.14-c23fe91
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : path to executable is /usr/sbin/haproxy
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [WARNING]  (301652) : Exiting Master process...
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.039 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc76a4bd-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [ALERT]    (301652) : Current worker (301654) exited with code 143 (Terminated)
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [WARNING]  (301652) : All workers exited. Exiting... (0)
Oct 11 04:51:20 np0005481065 systemd[1]: libpod-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650.scope: Deactivated successfully.
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.055 2 INFO os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e')#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.056 2 DEBUG nova.virt.libvirt.vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:17Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.056 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.057 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.057 2 DEBUG os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:20 np0005481065 podman[301939]: 2025-10-11 08:51:20.058109739 +0000 UTC m=+0.066092120 container died 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccf53fcf-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.069 2 INFO os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7')#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.070 2 DEBUG nova.virt.libvirt.vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:17Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.070 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.071 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.071 2 DEBUG os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7343dc3-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.076 2 INFO os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c')#033[00m
Oct 11 04:51:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650-userdata-shm.mount: Deactivated successfully.
Oct 11 04:51:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c691cf546be102ecd2a636c399d183790889e1bda6debfc8222fabd7050af86c-merged.mount: Deactivated successfully.
Oct 11 04:51:20 np0005481065 podman[301939]: 2025-10-11 08:51:20.09741425 +0000 UTC m=+0.105396621 container cleanup 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.100 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.101 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.101 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Creating image(s)#033[00m
Oct 11 04:51:20 np0005481065 systemd[1]: libpod-conmon-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650.scope: Deactivated successfully.
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.125 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.158 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:20 np0005481065 podman[302015]: 2025-10-11 08:51:20.162062209 +0000 UTC m=+0.043128801 container remove 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.169 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e41fda03-78ea-4336-8721-0026693e3015]: (4, ('Sat Oct 11 08:51:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 (0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650)\n0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650\nSat Oct 11 08:51:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 (0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650)\n0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.170 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ec2be8-e4d2-452e-8a47-31e087cbcf95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.171 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27019660-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:20 np0005481065 kernel: tap27019660-00: left promiscuous mode
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[614e03be-27ad-4a71-a713-c02783e793a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.207 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.214 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "e581753e2032aab19679355af058b2466b4ef11c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.215 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "e581753e2032aab19679355af058b2466b4ef11c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4464784-1e7e-4f77-98d2-0ea8ee434b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.215 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62d57491-240c-4cc9-bc36-1ae06d15ac8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG nova.compute.manager [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG oslo_concurrency.lockutils [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG oslo_concurrency.lockutils [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG oslo_concurrency.lockutils [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.219 2 DEBUG nova.compute.manager [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-unplugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.219 2 DEBUG nova.compute.manager [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.220 2 DEBUG nova.policy [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.236 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7da8976-24f3-4bcb-b535-1e24666064d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451735, 'reachable_time': 44714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302082, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.241 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.241 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b74ef0-4881-49c7-bdf7-820327f6fdc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e7343dc3-6cda-4dfb-8098-f021900f4584 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 unbound from our chassis#033[00m
Oct 11 04:51:20 np0005481065 systemd[1]: run-netns-ovnmeta\x2d27019660\x2d0844\x2d42c7\x2dbd58\x2d04b2d25d3924.mount: Deactivated successfully.
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.243 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da40451d-49f4-4bd4-b0a3-55dc537c2426, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47a6a009-da62-4e33-8b2d-90123723bacc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 namespace which is not needed anymore#033[00m
Oct 11 04:51:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063958681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.340 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.358 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.362 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : haproxy version is 2.8.14-c23fe91
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : path to executable is /usr/sbin/haproxy
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [WARNING]  (301553) : Exiting Master process...
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [ALERT]    (301553) : Current worker (301562) exited with code 143 (Terminated)
Oct 11 04:51:20 np0005481065 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [WARNING]  (301553) : All workers exited. Exiting... (0)
Oct 11 04:51:20 np0005481065 systemd[1]: libpod-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1.scope: Deactivated successfully.
Oct 11 04:51:20 np0005481065 podman[302105]: 2025-10-11 08:51:20.425350225 +0000 UTC m=+0.069115896 container died f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:51:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1-userdata-shm.mount: Deactivated successfully.
Oct 11 04:51:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0f6a6ed8c063f0dc224c05dfdf416baf50adf8b36b436263058ef960624c7842-merged.mount: Deactivated successfully.
Oct 11 04:51:20 np0005481065 podman[302105]: 2025-10-11 08:51:20.465227572 +0000 UTC m=+0.108993233 container cleanup f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:51:20 np0005481065 systemd[1]: libpod-conmon-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1.scope: Deactivated successfully.
Oct 11 04:51:20 np0005481065 podman[302157]: 2025-10-11 08:51:20.533920715 +0000 UTC m=+0.042033530 container remove f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.541 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e2dba9-6e4d-4f03-92ec-a31813f4d593]: (4, ('Sat Oct 11 08:51:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 (f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1)\nf8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1\nSat Oct 11 08:51:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 (f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1)\nf8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.543 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5047cc-e8e9-4792-8517-dde5b86b0558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.543 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 kernel: tapda40451d-40: left promiscuous mode
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85221e1a-074c-4d27-9fed-678bc0277023]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.594 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f66d1ac-af39-461d-a859-98171f6dfab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f2dfc9-0374-43ba-adba-113a8adeef2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.613 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6822467-8b73-4530-966b-aeb1b3e6adcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451607, 'reachable_time': 17145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302191, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.615 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:51:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.615 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f154bf-62e7-442e-be0d-da651e959e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.625 2 INFO nova.virt.libvirt.driver [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deleting instance files /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e_del#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.626 2 INFO nova.virt.libvirt.driver [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deletion of /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e_del complete#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.704 2 DEBUG nova.virt.libvirt.imagebackend [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/70bd8f23-d068-4d06-af15-566e76e92803/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/70bd8f23-d068-4d06-af15-566e76e92803/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.774 2 INFO nova.compute.manager [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.775 2 DEBUG oslo.service.loopingcall [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.776 2 DEBUG nova.compute.manager [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.776 2 DEBUG nova.network.neutron [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.783 2 DEBUG nova.virt.libvirt.imagebackend [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/70bd8f23-d068-4d06-af15-566e76e92803/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.784 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning images/70bd8f23-d068-4d06-af15-566e76e92803@snap to None/3283d482-4ea1-400a-9a1b-486479801813_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:51:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:20Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:8d:cb 10.100.0.4
Oct 11 04:51:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:20Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:8d:cb 10.100.0.4
Oct 11 04:51:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083942975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.881 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.883 2 DEBUG nova.virt.libvirt.vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:16Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.884 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.885 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.887 2 DEBUG nova.objects.instance [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.907 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "e581753e2032aab19679355af058b2466b4ef11c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.963 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Successfully created port: 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.974 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <uuid>88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a</uuid>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <name>instance-00000022</name>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:name>tempest-AttachInterfacesV270Test-server-2081426915</nova:name>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:19</nova:creationTime>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:user uuid="f961a579f0a74ab3a913fc3b21acea43">tempest-AttachInterfacesV270Test-1301456020-project-member</nova:user>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:project uuid="80ef0690d9e94d289f05d85941ef7154">tempest-AttachInterfacesV270Test-1301456020</nova:project>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <nova:port uuid="ab7592f9-1746-47d1-a702-b7be704ccabb">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <entry name="serial">88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a</entry>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <entry name="uuid">88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a</entry>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:af:3d:5c"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <target dev="tapab7592f9-17"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/console.log" append="off"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:20 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:20 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:20 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:20 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.975 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Preparing to wait for external event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.975 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.976 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.976 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.977 2 DEBUG nova.virt.libvirt.vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:16Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.978 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.979 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.979 2 DEBUG os_vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:20 np0005481065 nova_compute[260935]: 2025-10-11 08:51:20.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab7592f9-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab7592f9-17, col_values=(('external_ids', {'iface-id': 'ab7592f9-1746-47d1-a702-b7be704ccabb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:3d:5c', 'vm-uuid': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:21 np0005481065 NetworkManager[44960]: <info>  [1760172681.0477] manager: (tapab7592f9-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 11 04:51:21 np0005481065 systemd[1]: run-netns-ovnmeta\x2dda40451d\x2d49f4\x2d4bd4\x2db0a3\x2d55dc537c2426.mount: Deactivated successfully.
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.113 2 INFO os_vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17')#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.126 2 DEBUG nova.objects.instance [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 3283d482-4ea1-400a-9a1b-486479801813 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.167 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.168 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Ensure instance console log exists: /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.169 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.170 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.170 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.195 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.195 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.196 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No VIF found with MAC fa:16:3e:af:3d:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.197 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Using config drive#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.229 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.497 2 DEBUG nova.network.neutron [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updated VIF entry in instance network info cache for port ab7592f9-1746-47d1-a702-b7be704ccabb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.498 2 DEBUG nova.network.neutron [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.516 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.773 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Creating config drive at /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.784 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe3cai2ld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.839 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.839 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.840 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.840 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.841 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-unplugged-fc76a4bd-0e3d-426e-820f-690283cf8257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.841 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-fc76a4bd-0e3d-426e-820f-690283cf8257 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.841 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.842 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.842 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.843 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.843 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.843 2 WARNING nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:51:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.8 MiB/s wr, 172 op/s
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.945 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe3cai2ld" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.985 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:21 np0005481065 nova_compute[260935]: 2025-10-11 08:51:21.990 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.171 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Successfully updated port: 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.179 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.180 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deleting local config drive /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.186 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.186 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.187 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.187 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.188 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.188 2 WARNING nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.189 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.189 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.189 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.190 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.190 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-unplugged-e7343dc3-6cda-4dfb-8098-f021900f4584 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.191 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-e7343dc3-6cda-4dfb-8098-f021900f4584 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.191 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.192 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.192 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.192 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.193 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.193 2 WARNING nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.195 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.195 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.195 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:22 np0005481065 kernel: tapab7592f9-17: entered promiscuous mode
Oct 11 04:51:22 np0005481065 systemd-udevd[301937]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:22 np0005481065 NetworkManager[44960]: <info>  [1760172682.2626] manager: (tapab7592f9-17): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct 11 04:51:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:22Z|00251|binding|INFO|Claiming lport ab7592f9-1746-47d1-a702-b7be704ccabb for this chassis.
Oct 11 04:51:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:22Z|00252|binding|INFO|ab7592f9-1746-47d1-a702-b7be704ccabb: Claiming fa:16:3e:af:3d:5c 10.100.0.9
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 NetworkManager[44960]: <info>  [1760172682.2727] device (tapab7592f9-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:22 np0005481065 NetworkManager[44960]: <info>  [1760172682.2738] device (tapab7592f9-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.279 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3d:5c 10.100.0.9'], port_security=['fa:16:3e:af:3d:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ab7592f9-1746-47d1-a702-b7be704ccabb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.280 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ab7592f9-1746-47d1-a702-b7be704ccabb in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 bound to our chassis#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.282 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce478624-f4e7-4bd7-81b2-172d9f364a89#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.298 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3e5ed9-a08f-4f06-add2-ec78a1c1a3c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.299 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce478624-f1 in ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.303 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce478624-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cf822a37-d1bc-45aa-95e8-c8fdbfa1a070]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.305 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0b8b16-c2c3-4707-8aee-c6c79ffbc15a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 systemd-machined[215705]: New machine qemu-38-instance-00000022.
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.317 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6110a5-4f27-45a2-9519-29585da8a6e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.348 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2592b64d-2c6b-47a5-ba6e-d63e5d297ea2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.378 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:22Z|00253|binding|INFO|Setting lport ab7592f9-1746-47d1-a702-b7be704ccabb ovn-installed in OVS
Oct 11 04:51:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:22Z|00254|binding|INFO|Setting lport ab7592f9-1746-47d1-a702-b7be704ccabb up in Southbound
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.394 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[668ebeed-aba8-40aa-a9e5-c9bb113e6351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.403 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afe951f6-b1c1-4588-8293-52e16c8f4ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 NetworkManager[44960]: <info>  [1760172682.4054] manager: (tapce478624-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Oct 11 04:51:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:22Z|00255|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.456 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4eda7e-98cf-4b19-b4b7-dacb5794f537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.462 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f43d17b-b8bb-45c3-84c3-d365cf136270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 NetworkManager[44960]: <info>  [1760172682.4981] device (tapce478624-f0): carrier: link connected
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.509 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0675d18-e009-4d5c-8640-9c16dabe46fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1fb8e4-b24d-4168-98a3-35ed63599474]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302419, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.556 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23ac0926-a0a8-4adc-ae36-97c065498c8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:5c24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452720, 'tstamp': 452720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302420, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a38cce-ccbd-45ce-a49c-91afda5c538c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302421, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.635 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[316b0420-e46c-40ca-ba79-006ee4902357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2207d38c-a743-4c77-a9ab-a02bb0b073fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.730 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce478624-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 kernel: tapce478624-f0: entered promiscuous mode
Oct 11 04:51:22 np0005481065 NetworkManager[44960]: <info>  [1760172682.7336] manager: (tapce478624-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.739 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce478624-f0, col_values=(('external_ids', {'iface-id': '875825ad-1b50-485c-91da-f53ce1ebd5e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:22Z|00256|binding|INFO|Releasing lport 875825ad-1b50-485c-91da-f53ce1ebd5e3 from this chassis (sb_readonly=0)
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.743 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce478624-f4e7-4bd7-81b2-172d9f364a89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce478624-f4e7-4bd7-81b2-172d9f364a89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3657bb5a-25a5-4a8e-ad51-4bbdc19237d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.745 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/ce478624-f4e7-4bd7-81b2-172d9f364a89.pid.haproxy
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.746 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'env', 'PROCESS_TAG=haproxy-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce478624-f4e7-4bd7-81b2-172d9f364a89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:22 np0005481065 nova_compute[260935]: 2025-10-11 08:51:22.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:23 np0005481065 podman[302496]: 2025-10-11 08:51:23.122659945 +0000 UTC m=+0.056166959 container create b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:51:23 np0005481065 systemd[1]: Started libpod-conmon-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758.scope.
Oct 11 04:51:23 np0005481065 podman[302496]: 2025-10-11 08:51:23.09171788 +0000 UTC m=+0.025224884 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.215 2 DEBUG nova.network.neutron [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a08a783b5b6f32ef7dcf7d88e116aafc097e2f6da2c1ab4657101d9a11a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:23 np0005481065 podman[302496]: 2025-10-11 08:51:23.231254605 +0000 UTC m=+0.164761629 container init b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:51:23 np0005481065 podman[302496]: 2025-10-11 08:51:23.23812995 +0000 UTC m=+0.171636934 container start b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.253 2 INFO nova.compute.manager [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 2.48 seconds to deallocate network for instance.#033[00m
Oct 11 04:51:23 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : New worker (302517) forked
Oct 11 04:51:23 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : Loading success.
Oct 11 04:51:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 11 04:51:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 11 04:51:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.318 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.319 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.380 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172683.3802261, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.380 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.402 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.408 2 DEBUG oslo_concurrency.processutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.433 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172683.3834124, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.434 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.452 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updating instance_info_cache with network_info: [{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.491 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.496 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.690 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.691 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance network_info: |[{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.695 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start _get_guest_xml network_info=[{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T08:51:09Z,direct_url=<?>,disk_format='raw',id=70bd8f23-d068-4d06-af15-566e76e92803,min_disk=1,min_ram=0,name='tempest-test-snap-1090653870',owner='f8c7604961214c6d9d49657535d799a5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T08:51:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '70bd8f23-d068-4d06-af15-566e76e92803'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.702 2 WARNING nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.707 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.708 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.712 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.713 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.713 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.714 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T08:51:09Z,direct_url=<?>,disk_format='raw',id=70bd8f23-d068-4d06-af15-566e76e92803,min_disk=1,min_ram=0,name='tempest-test-snap-1090653870',owner='f8c7604961214c6d9d49657535d799a5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T08:51:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.714 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.715 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.715 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.715 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.716 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.716 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.716 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.717 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.717 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.718 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.723 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.771 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790349999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.816 2 DEBUG oslo_concurrency.processutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.824 2 DEBUG nova.compute.provider_tree [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:23 np0005481065 nova_compute[260935]: 2025-10-11 08:51:23.858 2 DEBUG nova.scheduler.client.report [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 213 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 274 op/s
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.118 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-deleted-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.119 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-changed-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.124 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Refreshing instance network info cache due to event network-changed-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.124 2 DEBUG oslo_concurrency.lockutils [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.125 2 DEBUG oslo_concurrency.lockutils [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.125 2 DEBUG nova.network.neutron [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Refreshing network info cache for port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543535219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.177 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.201 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.260 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.268 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.326 2 INFO nova.scheduler.client.report [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Deleted allocations for instance 9f842544-f85a-4c24-b273-8ae74177617e#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.337 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.338 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.338 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.339 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.339 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Processing event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.339 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.340 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.340 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.341 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.341 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.341 2 WARNING nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.344 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.353 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172684.3519356, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.354 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.357 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.370 2 INFO nova.virt.libvirt.driver [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance spawned successfully.#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.377 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.440 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.459 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.472 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.472 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.473 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.474 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.475 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.475 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.485 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.488 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.550 2 INFO nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 7.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.551 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.609 2 INFO nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 8.89 seconds to build instance.#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.634 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555119279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.785 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.786 2 DEBUG nova.virt.libvirt.vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-849464772',display_name='tempest-ImagesTestJSON-server-849464772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-849464772',id=35,image_ref='70bd8f23-d068-4d06-af15-566e76e92803',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-lbwd8nss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5b50d851-e482-40a2-8b7d-d3eca87e15ab',image_min_disk='1',image_min_ram='0',image_owner_id='f8c7604961214c6d9d49657535d799a5',image_owner_project_name='tempest-ImagesTestJSON-694493184',image_owner_user_name='tempest-ImagesTestJSON-694493184-project-member',image_user_id='1bab12893b9d49aabcb5ca19c9b951de',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:20Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=3283d482-4ea1-400a-9a1b-486479801813,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.787 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.788 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.789 2 DEBUG nova.objects.instance [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3283d482-4ea1-400a-9a1b-486479801813 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.841 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <uuid>3283d482-4ea1-400a-9a1b-486479801813</uuid>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <name>instance-00000023</name>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesTestJSON-server-849464772</nova:name>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:23</nova:creationTime>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="70bd8f23-d068-4d06-af15-566e76e92803"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <nova:port uuid="2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <entry name="serial">3283d482-4ea1-400a-9a1b-486479801813</entry>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <entry name="uuid">3283d482-4ea1-400a-9a1b-486479801813</entry>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/3283d482-4ea1-400a-9a1b-486479801813_disk">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/3283d482-4ea1-400a-9a1b-486479801813_disk.config">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:2d:84:1d"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <target dev="tap2b54c9c3-c2"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/console.log" append="off"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <input type="keyboard" bus="usb"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:24 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:24 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:24 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:24 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.842 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Preparing to wait for external event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.842 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.843 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.843 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.844 2 DEBUG nova.virt.libvirt.vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-849464772',display_name='tempest-ImagesTestJSON-server-849464772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-849464772',id=35,image_ref='70bd8f23-d068-4d06-af15-566e76e92803',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-lbwd8nss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5b50d851-e482-40a2-8b7d-d3eca87e15ab',image_min_disk='1',image_min_ram='0',image_owner_id='f8c7604961214c6d9d49657535d799a5',image_owner_project_name='tempest-ImagesTestJSON-694493184',image_owner_user_name='tempest-ImagesTestJSON-694493184-project-member',image_user_id='1bab12893b9d49aabcb5ca19c9b951de',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:20Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=3283d482-4ea1-400a-9a1b-486479801813,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.844 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.845 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.845 2 DEBUG os_vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b54c9c3-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b54c9c3-c2, col_values=(('external_ids', {'iface-id': '2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:84:1d', 'vm-uuid': '3283d482-4ea1-400a-9a1b-486479801813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:24 np0005481065 NetworkManager[44960]: <info>  [1760172684.8597] manager: (tap2b54c9c3-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.869 2 INFO os_vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2')#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.967 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.968 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.968 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:2d:84:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:24 np0005481065 nova_compute[260935]: 2025-10-11 08:51:24.969 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Using config drive#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.002 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:25 np0005481065 podman[302614]: 2025-10-11 08:51:25.026184566 +0000 UTC m=+0.099713000 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.334 2 DEBUG nova.network.neutron [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updated VIF entry in instance network info cache for port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.340 2 DEBUG nova.network.neutron [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updating instance_info_cache with network_info: [{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.361 2 DEBUG oslo_concurrency.lockutils [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.361 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-deleted-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.362 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-deleted-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.399 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Creating config drive at /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.409 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4aw16o0u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.565 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4aw16o0u" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.600 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.608 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config 3283d482-4ea1-400a-9a1b-486479801813_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.647 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "interface-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.648 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "interface-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.649 2 DEBUG nova.objects.instance [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'flavor' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.681 2 DEBUG nova.objects.instance [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'pci_requests' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.726 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.794 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config 3283d482-4ea1-400a-9a1b-486479801813_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.795 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deleting local config drive /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:25 np0005481065 kernel: tap2b54c9c3-c2: entered promiscuous mode
Oct 11 04:51:25 np0005481065 NetworkManager[44960]: <info>  [1760172685.8755] manager: (tap2b54c9c3-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:25Z|00257|binding|INFO|Claiming lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 for this chassis.
Oct 11 04:51:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:25Z|00258|binding|INFO|2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37: Claiming fa:16:3e:2d:84:1d 10.100.0.11
Oct 11 04:51:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 213 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 274 op/s
Oct 11 04:51:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.897 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:84:1d 10.100.0.11'], port_security=['fa:16:3e:2d:84:1d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3283d482-4ea1-400a-9a1b-486479801813', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.898 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis#033[00m
Oct 11 04:51:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756#033[00m
Oct 11 04:51:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:25Z|00259|binding|INFO|Setting lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 ovn-installed in OVS
Oct 11 04:51:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:25Z|00260|binding|INFO|Setting lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 up in Southbound
Oct 11 04:51:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.928 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09e3cb9a-c82c-4b3f-998e-7a24a04b9ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:25 np0005481065 nova_compute[260935]: 2025-10-11 08:51:25.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:25 np0005481065 systemd-machined[215705]: New machine qemu-39-instance-00000023.
Oct 11 04:51:25 np0005481065 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Oct 11 04:51:26 np0005481065 systemd-udevd[302710]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.022 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[912a40d6-6402-46ff-90b5-03526555ea7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.026 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8a86c3cc-f2bb-4f61-b23d-a8b63f282970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:26 np0005481065 NetworkManager[44960]: <info>  [1760172686.0426] device (tap2b54c9c3-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:26 np0005481065 NetworkManager[44960]: <info>  [1760172686.0439] device (tap2b54c9c3-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.105 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[06942bb9-41de-42b7-8a52-cba8c171912b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.131 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c0a21d-221c-4919-b06b-af512d0057d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302720, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.156 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[776600ac-1be1-4657-87de-6013cf8c726e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451197, 'tstamp': 451197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302722, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451201, 'tstamp': 451201}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302722, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.158 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:26 np0005481065 nova_compute[260935]: 2025-10-11 08:51:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:26 np0005481065 nova_compute[260935]: 2025-10-11 08:51:26.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.164 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.165 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.166 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:26 np0005481065 nova_compute[260935]: 2025-10-11 08:51:26.279 2 DEBUG nova.policy [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f961a579f0a74ab3a913fc3b21acea43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '80ef0690d9e94d289f05d85941ef7154', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.054 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully created port: e758e6dc-cadd-4687-a634-519fb2ecace8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.669 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172687.6690502, 3283d482-4ea1-400a-9a1b-486479801813 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.670 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.792 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.798 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172687.6691945, 3283d482-4ea1-400a-9a1b-486479801813 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.799 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.818 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.821 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:27 np0005481065 nova_compute[260935]: 2025-10-11 08:51:27.849 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 04:51:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.715 2 DEBUG nova.compute.manager [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.715 2 DEBUG oslo_concurrency.lockutils [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.716 2 DEBUG oslo_concurrency.lockutils [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.716 2 DEBUG oslo_concurrency.lockutils [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.716 2 DEBUG nova.compute.manager [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Processing event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.717 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.729 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172688.7294729, 3283d482-4ea1-400a-9a1b-486479801813 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.730 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.731 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.736 2 INFO nova.virt.libvirt.driver [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance spawned successfully.#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.736 2 INFO nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 8.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.736 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.753 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.757 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.785 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.832 2 INFO nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 9.79 seconds to build instance.#033[00m
Oct 11 04:51:28 np0005481065 nova_compute[260935]: 2025-10-11 08:51:28.983 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 04:51:29 np0005481065 nova_compute[260935]: 2025-10-11 08:51:29.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:30 np0005481065 nova_compute[260935]: 2025-10-11 08:51:30.995 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully updated port: e758e6dc-cadd-4687-a634-519fb2ecace8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.017 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.018 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.018 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.157 2 DEBUG nova.compute.manager [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-changed-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.157 2 DEBUG nova.compute.manager [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing instance network info cache due to event network-changed-e758e6dc-cadd-4687-a634-519fb2ecace8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.157 2 DEBUG oslo_concurrency.lockutils [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.217 2 WARNING nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] ce478624-f4e7-4bd7-81b2-172d9f364a89 already exists in list: networks containing: ['ce478624-f4e7-4bd7-81b2-172d9f364a89']. ignoring it#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.414 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.414 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.416 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.416 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.417 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.418 2 INFO nova.compute.manager [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Terminating instance#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.420 2 DEBUG nova.compute.manager [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:51:31 np0005481065 kernel: tap2b54c9c3-c2 (unregistering): left promiscuous mode
Oct 11 04:51:31 np0005481065 NetworkManager[44960]: <info>  [1760172691.4865] device (tap2b54c9c3-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:31Z|00261|binding|INFO|Releasing lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 from this chassis (sb_readonly=0)
Oct 11 04:51:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:31Z|00262|binding|INFO|Setting lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 down in Southbound
Oct 11 04:51:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:31Z|00263|binding|INFO|Removing iface tap2b54c9c3-c2 ovn-installed in OVS
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:84:1d 10.100.0.11'], port_security=['fa:16:3e:2d:84:1d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3283d482-4ea1-400a-9a1b-486479801813', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.504 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.505 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.531 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c77da4c-0bd3-484e-af90-9e9932f7c575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:31 np0005481065 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 11 04:51:31 np0005481065 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 4.475s CPU time.
Oct 11 04:51:31 np0005481065 systemd-machined[215705]: Machine qemu-39-instance-00000023 terminated.
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.571 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2f316f-ba0c-4f56-9283-ddce3ad59a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.575 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3af2be35-79fc-4ade-b576-0293cde7a40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:31 np0005481065 podman[302765]: 2025-10-11 08:51:31.590602208 +0000 UTC m=+0.095546783 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.614 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7db42535-3fd4-4e93-ad0f-87ed1ee85b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.644 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea8b3e-710f-4a4e-81c6-c116d2982b21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302793, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.661 2 INFO nova.virt.libvirt.driver [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance destroyed successfully.#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.662 2 DEBUG nova.objects.instance [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 3283d482-4ea1-400a-9a1b-486479801813 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.665 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9badc75d-90c9-4439-bb36-2058ba226d0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451197, 'tstamp': 451197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302799, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451201, 'tstamp': 451201}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302799, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.666 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.677 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.680 2 DEBUG nova.virt.libvirt.vif [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-849464772',display_name='tempest-ImagesTestJSON-server-849464772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-849464772',id=35,image_ref='70bd8f23-d068-4d06-af15-566e76e92803',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-lbwd8nss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5b50d851-e482-40a2-8b7d-d3eca87e15ab',image_min_disk='1',image_min_ram='0',image_owner_id='f8c7604961214c6d9d49657535d799a5',image_owner_project_name='tempest-ImagesTestJSON-694493184',image_owner_user_name='tempest-ImagesTestJSON-694493184-project-member',image_user_id='1bab12893b9d49aabcb5ca19c9b951de',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:28Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=3283d482-4ea1-400a-9a1b-486479801813,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.680 2 DEBUG nova.network.os_vif_util [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.681 2 DEBUG nova.network.os_vif_util [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.681 2 DEBUG os_vif [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b54c9c3-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:31 np0005481065 nova_compute[260935]: 2025-10-11 08:51:31.688 2 INFO os_vif [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2')#033[00m
Oct 11 04:51:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.054 2 INFO nova.virt.libvirt.driver [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deleting instance files /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813_del#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.055 2 INFO nova.virt.libvirt.driver [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deletion of /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813_del complete#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.130 2 INFO nova.compute.manager [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.131 2 DEBUG oslo.service.loopingcall [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.131 2 DEBUG nova.compute.manager [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.131 2 DEBUG nova.network.neutron [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.935 2 DEBUG nova.network.neutron [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.951 2 INFO nova.compute.manager [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 0.82 seconds to deallocate network for instance.#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.995 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:32 np0005481065 nova_compute[260935]: 2025-10-11 08:51:32.996 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.079 2 DEBUG oslo_concurrency.processutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988541132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.567 2 DEBUG oslo_concurrency.processutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.578 2 DEBUG nova.compute.provider_tree [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.595 2 DEBUG nova.scheduler.client.report [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.620 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.646 2 INFO nova.scheduler.client.report [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 3283d482-4ea1-400a-9a1b-486479801813#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.714 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.731 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.758 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.761 2 DEBUG oslo_concurrency.lockutils [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.761 2 DEBUG nova.network.neutron [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing network info cache for port e758e6dc-cadd-4687-a634-519fb2ecace8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.767 2 DEBUG nova.virt.libvirt.vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.768 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.770 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.771 2 DEBUG os_vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.779 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape758e6dc-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape758e6dc-ca, col_values=(('external_ids', {'iface-id': 'e758e6dc-cadd-4687-a634-519fb2ecace8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:01:c2', 'vm-uuid': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 NetworkManager[44960]: <info>  [1760172693.7838] manager: (tape758e6dc-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.793 2 INFO os_vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca')#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.795 2 DEBUG nova.virt.libvirt.vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.795 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.797 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.801 2 DEBUG nova.virt.libvirt.guest [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] attach device xml: <interface type="ethernet">
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:4b:01:c2"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <target dev="tape758e6dc-ca"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]: </interface>
Oct 11 04:51:33 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 04:51:33 np0005481065 systemd-udevd[302775]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:33 np0005481065 kernel: tape758e6dc-ca: entered promiscuous mode
Oct 11 04:51:33 np0005481065 NetworkManager[44960]: <info>  [1760172693.8255] manager: (tape758e6dc-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Oct 11 04:51:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:33Z|00264|binding|INFO|Claiming lport e758e6dc-cadd-4687-a634-519fb2ecace8 for this chassis.
Oct 11 04:51:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:33Z|00265|binding|INFO|e758e6dc-cadd-4687-a634-519fb2ecace8: Claiming fa:16:3e:4b:01:c2 10.100.0.4
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.835 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:01:c2 10.100.0.4'], port_security=['fa:16:3e:4b:01:c2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e758e6dc-cadd-4687-a634-519fb2ecace8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.837 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e758e6dc-cadd-4687-a634-519fb2ecace8 in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 bound to our chassis#033[00m
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce478624-f4e7-4bd7-81b2-172d9f364a89#033[00m
Oct 11 04:51:33 np0005481065 NetworkManager[44960]: <info>  [1760172693.8496] device (tape758e6dc-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:33 np0005481065 NetworkManager[44960]: <info>  [1760172693.8520] device (tape758e6dc-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:33Z|00266|binding|INFO|Setting lport e758e6dc-cadd-4687-a634-519fb2ecace8 up in Southbound
Oct 11 04:51:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:33Z|00267|binding|INFO|Setting lport e758e6dc-cadd-4687-a634-519fb2ecace8 ovn-installed in OVS
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.867 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27265177-e573-40a6-9505-175c6bbe26ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 213 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 29 KiB/s wr, 192 op/s
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.921 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0f49c7-fb25-4f3e-88ee-3cd7e59425ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d99b8cca-17d0-448e-83b6-2c3002def38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.927 2 DEBUG nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.928 2 DEBUG oslo_concurrency.lockutils [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.928 2 DEBUG oslo_concurrency.lockutils [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.929 2 DEBUG oslo_concurrency.lockutils [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.930 2 DEBUG nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] No waiting events found dispatching network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.930 2 WARNING nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received unexpected event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.931 2 DEBUG nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-vif-deleted-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.960 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.961 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.962 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No VIF found with MAC fa:16:3e:af:3d:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.962 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No VIF found with MAC fa:16:3e:4b:01:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.980 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6abe7c6-7e40-40bd-9479-ec8536930f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:33 np0005481065 nova_compute[260935]: 2025-10-11 08:51:33.989 2 DEBUG nova.virt.libvirt.guest [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:name>tempest-AttachInterfacesV270Test-server-2081426915</nova:name>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 08:51:33</nova:creationTime>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:user uuid="f961a579f0a74ab3a913fc3b21acea43">tempest-AttachInterfacesV270Test-1301456020-project-member</nova:user>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:project uuid="80ef0690d9e94d289f05d85941ef7154">tempest-AttachInterfacesV270Test-1301456020</nova:project>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:port uuid="ab7592f9-1746-47d1-a702-b7be704ccabb">
Oct 11 04:51:33 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    <nova:port uuid="e758e6dc-cadd-4687-a634-519fb2ecace8">
Oct 11 04:51:33 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 04:51:33 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 04:51:33 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 04:51:33 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22c2a1e1-d5b3-4556-bd5e-0fb2a1a50857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302862, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:34 np0005481065 nova_compute[260935]: 2025-10-11 08:51:34.015 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "interface-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.039 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67ef0e55-232a-406e-9b87-58700c36e9f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452738, 'tstamp': 452738}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302863, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452742, 'tstamp': 452742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302863, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.042 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:34 np0005481065 nova_compute[260935]: 2025-10-11 08:51:34.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:34 np0005481065 nova_compute[260935]: 2025-10-11 08:51:34.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.046 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce478624-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce478624-f0, col_values=(('external_ids', {'iface-id': '875825ad-1b50-485c-91da-f53ce1ebd5e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:34 np0005481065 nova_compute[260935]: 2025-10-11 08:51:34.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172679.985902, 9f842544-f85a-4c24-b273-8ae74177617e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:34 np0005481065 nova_compute[260935]: 2025-10-11 08:51:34.988 2 INFO nova.compute.manager [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.025 2 DEBUG nova.compute.manager [None req-300f184f-b1d8-4036-bc07-c6e5189a912d - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.165 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.165 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.182 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.246 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.247 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.255 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.255 2 INFO nova.compute.claims [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.389 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 11 04:51:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 11 04:51:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 11 04:51:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978932494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.862 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.870 2 DEBUG nova.compute.provider_tree [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.875 2 DEBUG nova.network.neutron [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updated VIF entry in instance network info cache for port e758e6dc-cadd-4687-a634-519fb2ecace8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.876 2 DEBUG nova.network.neutron [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 213 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 31 KiB/s wr, 204 op/s
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.900 2 DEBUG nova.scheduler.client.report [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.907 2 DEBUG oslo_concurrency.lockutils [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.923 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.925 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.965 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.966 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:51:35 np0005481065 nova_compute[260935]: 2025-10-11 08:51:35.986 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.009 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.138 2 DEBUG nova.policy [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f37cd0ba15f412b88192a506c5cec79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.144 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.147 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.148 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Creating image(s)#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.181 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.216 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.251 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.256 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.353 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.355 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.356 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.356 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.390 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:36 np0005481065 nova_compute[260935]: 2025-10-11 08:51:36.395 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 872b1c1d-bc87-4123-a599-4d64b89018aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:36 np0005481065 podman[302977]: 2025-10-11 08:51:36.804146708 +0000 UTC m=+0.101072049 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 04:51:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:36Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:01:c2 10.100.0.4
Oct 11 04:51:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:36Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:01:c2 10.100.0.4
Oct 11 04:51:36 np0005481065 podman[302978]: 2025-10-11 08:51:36.87671241 +0000 UTC m=+0.169978458 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 04:51:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:37Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:3d:5c 10.100.0.9
Oct 11 04:51:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:37Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:3d:5c 10.100.0.9
Oct 11 04:51:37 np0005481065 nova_compute[260935]: 2025-10-11 08:51:37.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:51:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/232623428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:51:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:51:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/232623428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:51:37 np0005481065 nova_compute[260935]: 2025-10-11 08:51:37.597 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Successfully created port: 108be440-11bf-41f6-a628-86fbac597b7d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 04:51:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:38 np0005481065 nova_compute[260935]: 2025-10-11 08:51:38.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.156 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 872b1c1d-bc87-4123-a599-4d64b89018aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.248 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Successfully updated port: 108be440-11bf-41f6-a628-86fbac597b7d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.260 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] resizing rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.318 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.318 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.318 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.419 2 DEBUG nova.objects.instance [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'migration_context' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.443 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.444 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Ensure instance console log exists: /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.444 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.445 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.445 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.551 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.596 2 DEBUG nova.compute.manager [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.597 2 DEBUG oslo_concurrency.lockutils [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.597 2 DEBUG oslo_concurrency.lockutils [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.598 2 DEBUG oslo_concurrency.lockutils [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.598 2 DEBUG nova.compute.manager [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:39 np0005481065 nova_compute[260935]: 2025-10-11 08:51:39.598 2 WARNING nova.compute.manager [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:51:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.367 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.367 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.383 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.383 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.384 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.384 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.384 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.387 2 INFO nova.compute.manager [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Terminating instance#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.390 2 DEBUG nova.compute.manager [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.406 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:51:40 np0005481065 kernel: tap965f1a38-41 (unregistering): left promiscuous mode
Oct 11 04:51:40 np0005481065 NetworkManager[44960]: <info>  [1760172700.4555] device (tap965f1a38-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:40Z|00268|binding|INFO|Releasing lport 965f1a38-4159-41be-ac4a-f436a8ddeeab from this chassis (sb_readonly=0)
Oct 11 04:51:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:40Z|00269|binding|INFO|Setting lport 965f1a38-4159-41be-ac4a-f436a8ddeeab down in Southbound
Oct 11 04:51:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:40Z|00270|binding|INFO|Removing iface tap965f1a38-41 ovn-installed in OVS
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:8d:cb 10.100.0.4'], port_security=['fa:16:3e:fe:8d:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5b50d851-e482-40a2-8b7d-d3eca87e15ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=965f1a38-4159-41be-ac4a-f436a8ddeeab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.504 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 965f1a38-4159-41be-ac4a-f436a8ddeeab in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.507 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.508 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d466436c-7117-4525-bf4f-d022d6e38f3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.509 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.540 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.541 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.549 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.550 2 INFO nova.compute.claims [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:51:40 np0005481065 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 11 04:51:40 np0005481065 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Consumed 13.409s CPU time.
Oct 11 04:51:40 np0005481065 systemd-machined[215705]: Machine qemu-36-instance-00000021 terminated.
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.634 2 INFO nova.virt.libvirt.driver [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance destroyed successfully.#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.636 2 DEBUG nova.objects.instance [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 5b50d851-e482-40a2-8b7d-d3eca87e15ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.652 2 DEBUG nova.virt.libvirt.vif [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1550870837',display_name='tempest-ImagesTestJSON-server-1550870837',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1550870837',id=33,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-jse0ea25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:14Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=5b50d851-e482-40a2-8b7d-d3eca87e15ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.653 2 DEBUG nova.network.os_vif_util [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.654 2 DEBUG nova.network.os_vif_util [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.654 2 DEBUG os_vif [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap965f1a38-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.667 2 INFO os_vif [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41')#033[00m
Oct 11 04:51:40 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : haproxy version is 2.8.14-c23fe91
Oct 11 04:51:40 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : path to executable is /usr/sbin/haproxy
Oct 11 04:51:40 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [WARNING]  (301116) : Exiting Master process...
Oct 11 04:51:40 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [ALERT]    (301116) : Current worker (301118) exited with code 143 (Terminated)
Oct 11 04:51:40 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [WARNING]  (301116) : All workers exited. Exiting... (0)
Oct 11 04:51:40 np0005481065 systemd[1]: libpod-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope: Deactivated successfully.
Oct 11 04:51:40 np0005481065 conmon[301111]: conmon 4c69755b2a5a85b11664 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope/container/memory.events
Oct 11 04:51:40 np0005481065 podman[303123]: 2025-10-11 08:51:40.689144866 +0000 UTC m=+0.056633232 container died 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219-userdata-shm.mount: Deactivated successfully.
Oct 11 04:51:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-81784f68462ca80ca6e90ba3c15edeb0a8d5f87b882ef7fdf8a487f1955dcf4c-merged.mount: Deactivated successfully.
Oct 11 04:51:40 np0005481065 podman[303123]: 2025-10-11 08:51:40.745783308 +0000 UTC m=+0.113271694 container cleanup 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.747 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:40 np0005481065 systemd[1]: libpod-conmon-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope: Deactivated successfully.
Oct 11 04:51:40 np0005481065 podman[303178]: 2025-10-11 08:51:40.853499584 +0000 UTC m=+0.072952484 container remove 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.865 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3136498-436f-4387-acdc-b4b582612210]: (4, ('Sat Oct 11 08:51:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219)\n4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219\nSat Oct 11 08:51:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219)\n4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.869 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[830cf377-d9e1-4810-a9e9-4cda842fdd8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.870 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:40 np0005481065 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:40 np0005481065 nova_compute[260935]: 2025-10-11 08:51:40.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.906 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1177f2-e235-4702-a3e4-f8ef2feffd2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.937 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16b449ee-ab0e-4711-9b0e-21e0aaa4a31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.939 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c050ee7a-077d-42df-8ca1-1ab1461a00bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.963 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[08ac3927-208b-4905-b56f-a3a9ba4208e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451171, 'reachable_time': 17467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303214, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.967 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:51:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.967 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[323d3617-f96d-4027-b2c0-865ab8a4cc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:40 np0005481065 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.130 2 INFO nova.virt.libvirt.driver [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deleting instance files /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab_del#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.132 2 INFO nova.virt.libvirt.driver [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deletion of /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab_del complete#033[00m
Oct 11 04:51:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644219216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.211 2 INFO nova.compute.manager [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.212 2 DEBUG oslo.service.loopingcall [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.213 2 DEBUG nova.compute.manager [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.213 2 DEBUG nova.network.neutron [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.218 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.229 2 DEBUG nova.compute.provider_tree [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.246 2 DEBUG nova.scheduler.client.report [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.288 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.383 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.384 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.389 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.390 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance network_info: |[{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.394 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start _get_guest_xml network_info=[{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.401 2 WARNING nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.407 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.408 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.414 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.415 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.416 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.416 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.417 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.417 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.418 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.418 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.419 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.419 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.419 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.420 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.420 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.421 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.425 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.570 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.571 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.750 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.806 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:51:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862886025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.888 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.920 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:41 np0005481065 nova_compute[260935]: 2025-10-11 08:51:41.931 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.070 2 DEBUG nova.policy [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb27f51b5ffd414ab5ddbea179ada690', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.224 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.227 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.227 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Creating image(s)#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.261 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.297 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.333 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.340 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124764432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.426 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.427 2 DEBUG nova.virt.libvirt.vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-652156110',display_name='tempest-FloatingIPsAssociationTestJSON-server-652156110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-652156110',id=36,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-dxyrerkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name=
'tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:36Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=872b1c1d-bc87-4123-a599-4d64b89018aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.428 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.429 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.431 2 DEBUG nova.objects.instance [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.436 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.436 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.437 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.438 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.466 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.469 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 205bee5e-165a-468c-87d3-db44e03ace3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.516 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <uuid>872b1c1d-bc87-4123-a599-4d64b89018aa</uuid>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <name>instance-00000024</name>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-652156110</nova:name>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:41</nova:creationTime>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:user uuid="7f37cd0ba15f412b88192a506c5cec79">tempest-FloatingIPsAssociationTestJSON-1277815089-project-member</nova:user>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:project uuid="b23ff73ca27245eeb1b46f51326a5568">tempest-FloatingIPsAssociationTestJSON-1277815089</nova:project>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <nova:port uuid="108be440-11bf-41f6-a628-86fbac597b7d">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <entry name="serial">872b1c1d-bc87-4123-a599-4d64b89018aa</entry>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <entry name="uuid">872b1c1d-bc87-4123-a599-4d64b89018aa</entry>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/872b1c1d-bc87-4123-a599-4d64b89018aa_disk">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c7:33:2e"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <target dev="tap108be440-11"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/console.log" append="off"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:42 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:42 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:42 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:42 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.518 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Preparing to wait for external event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.519 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.520 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.520 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.522 2 DEBUG nova.virt.libvirt.vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-652156110',display_name='tempest-FloatingIPsAssociationTestJSON-server-652156110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-652156110',id=36,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-dxyrerkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_
user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:36Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=872b1c1d-bc87-4123-a599-4d64b89018aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.522 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.524 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.524 2 DEBUG os_vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap108be440-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap108be440-11, col_values=(('external_ids', {'iface-id': '108be440-11bf-41f6-a628-86fbac597b7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:33:2e', 'vm-uuid': '872b1c1d-bc87-4123-a599-4d64b89018aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:42 np0005481065 NetworkManager[44960]: <info>  [1760172702.5366] manager: (tap108be440-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.546 2 INFO os_vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11')#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.677 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.678 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.678 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No VIF found with MAC fa:16:3e:c7:33:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.679 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Using config drive#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.717 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.829 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 205bee5e-165a-468c-87d3-db44e03ace3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.855 2 DEBUG nova.network.neutron [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:42 np0005481065 nova_compute[260935]: 2025-10-11 08:51:42.893 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] resizing rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.004 2 INFO nova.compute.manager [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 1.79 seconds to deallocate network for instance.#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.015 2 DEBUG nova.objects.instance [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 205bee5e-165a-468c-87d3-db44e03ace3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.068 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully created port: 30f88ebc-fba6-476c-8953-ad64358029bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.082 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.082 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Ensure instance console log exists: /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.083 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.083 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.084 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.126 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Creating config drive at /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.134 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxkq7fgt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.210 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.213 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.214 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.214 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.215 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.219 2 INFO nova.compute.manager [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Terminating instance#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.221 2 DEBUG nova.compute.manager [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.236 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.236 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.237 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.237 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.237 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.238 2 WARNING nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.238 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.238 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.239 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.239 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.240 2 DEBUG nova.network.neutron [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.244 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.245 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 11 04:51:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 11 04:51:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 11 04:51:43 np0005481065 kernel: tapab7592f9-17 (unregistering): left promiscuous mode
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.2897] device (tapab7592f9-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.295 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxkq7fgt" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00271|binding|INFO|Releasing lport ab7592f9-1746-47d1-a702-b7be704ccabb from this chassis (sb_readonly=0)
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00272|binding|INFO|Setting lport ab7592f9-1746-47d1-a702-b7be704ccabb down in Southbound
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00273|binding|INFO|Removing iface tapab7592f9-17 ovn-installed in OVS
Oct 11 04:51:43 np0005481065 kernel: tape758e6dc-ca (unregistering): left promiscuous mode
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.3527] device (tape758e6dc-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.370 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3d:5c 10.100.0.9'], port_security=['fa:16:3e:af:3d:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ab7592f9-1746-47d1-a702-b7be704ccabb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.373 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ab7592f9-1746-47d1-a702-b7be704ccabb in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 unbound from our chassis#033[00m
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00274|binding|INFO|Releasing lport e758e6dc-cadd-4687-a634-519fb2ecace8 from this chassis (sb_readonly=0)
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00275|binding|INFO|Setting lport e758e6dc-cadd-4687-a634-519fb2ecace8 down in Southbound
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00276|binding|INFO|Removing iface tape758e6dc-ca ovn-installed in OVS
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.380 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce478624-f4e7-4bd7-81b2-172d9f364a89#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.384 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.391 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:43 np0005481065 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.404 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b3c5c4-57de-4519-8aa4-791a336e02d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 12.881s CPU time.
Oct 11 04:51:43 np0005481065 systemd-machined[215705]: Machine qemu-38-instance-00000022 terminated.
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.4458] manager: (tapab7592f9-17): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.449 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1dce8ba0-6f7e-4f0a-943a-e6b0a4c75cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.455 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5838b0-2891-4298-892e-7533c6414e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.4657] manager: (tape758e6dc-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.485 2 INFO nova.virt.libvirt.driver [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance destroyed successfully.#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.487 2 DEBUG nova.objects.instance [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'resources' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.515 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7b59336c-6b1d-49a1-b0b7-120f447813bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.544 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[db2303a4-41c8-43d5-952d-724e62621a21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303546, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.563 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:01:c2 10.100.0.4'], port_security=['fa:16:3e:4b:01:c2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e758e6dc-cadd-4687-a634-519fb2ecace8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5056ae66-e0ad-4a57-a542-55902baca6d8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452738, 'tstamp': 452738}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303550, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452742, 'tstamp': 452742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303550, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.566 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.583 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce478624-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.583 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.584 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce478624-f0, col_values=(('external_ids', {'iface-id': '875825ad-1b50-485c-91da-f53ce1ebd5e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.584 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.585 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e758e6dc-cadd-4687-a634-519fb2ecace8 in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 unbound from our chassis#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.587 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce478624-f4e7-4bd7-81b2-172d9f364a89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.588 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bc1cb6-98c8-4610-b3f0-9d64c4e66971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.588 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 namespace which is not needed anymore#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.610 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.611 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deleting local config drive /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.632 2 DEBUG nova.virt.libvirt.vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.633 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.634 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.634 2 DEBUG os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab7592f9-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.655 2 INFO os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17')#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.656 2 DEBUG nova.virt.libvirt.vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.657 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.658 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.658 2 DEBUG os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape758e6dc-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.676 2 INFO os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca')#033[00m
Oct 11 04:51:43 np0005481065 kernel: tap108be440-11: entered promiscuous mode
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.6974] manager: (tap108be440-11): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00277|binding|INFO|Claiming lport 108be440-11bf-41f6-a628-86fbac597b7d for this chassis.
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00278|binding|INFO|108be440-11bf-41f6-a628-86fbac597b7d: Claiming fa:16:3e:c7:33:2e 10.100.0.6
Oct 11 04:51:43 np0005481065 systemd-udevd[303523]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.7243] device (tap108be440-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:43 np0005481065 NetworkManager[44960]: <info>  [1760172703.7256] device (tap108be440-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.733 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:33:2e 10.100.0.6'], port_security=['fa:16:3e:c7:33:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '872b1c1d-bc87-4123-a599-4d64b89018aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=108be440-11bf-41f6-a628-86fbac597b7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.737 2 DEBUG oslo_concurrency.processutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:43 np0005481065 systemd-machined[215705]: New machine qemu-40-instance-00000024.
Oct 11 04:51:43 np0005481065 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Oct 11 04:51:43 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : haproxy version is 2.8.14-c23fe91
Oct 11 04:51:43 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : path to executable is /usr/sbin/haproxy
Oct 11 04:51:43 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [WARNING]  (302515) : Exiting Master process...
Oct 11 04:51:43 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [ALERT]    (302515) : Current worker (302517) exited with code 143 (Terminated)
Oct 11 04:51:43 np0005481065 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [WARNING]  (302515) : All workers exited. Exiting... (0)
Oct 11 04:51:43 np0005481065 systemd[1]: libpod-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758.scope: Deactivated successfully.
Oct 11 04:51:43 np0005481065 podman[303591]: 2025-10-11 08:51:43.789374112 +0000 UTC m=+0.064618218 container died b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00279|binding|INFO|Setting lport 108be440-11bf-41f6-a628-86fbac597b7d ovn-installed in OVS
Oct 11 04:51:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:43Z|00280|binding|INFO|Setting lport 108be440-11bf-41f6-a628-86fbac597b7d up in Southbound
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758-userdata-shm.mount: Deactivated successfully.
Oct 11 04:51:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0135a08a783b5b6f32ef7dcf7d88e116aafc097e2f6da2c1ab4657101d9a11a8-merged.mount: Deactivated successfully.
Oct 11 04:51:43 np0005481065 podman[303591]: 2025-10-11 08:51:43.854294028 +0000 UTC m=+0.129538134 container cleanup b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 04:51:43 np0005481065 systemd[1]: libpod-conmon-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758.scope: Deactivated successfully.
Oct 11 04:51:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 185 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 653 KiB/s rd, 6.9 MiB/s wr, 213 op/s
Oct 11 04:51:43 np0005481065 podman[303644]: 2025-10-11 08:51:43.945156567 +0000 UTC m=+0.055676305 container remove b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.953 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[956bfe20-2dd5-480a-91f5-3d7dd39b93bb]: (4, ('Sat Oct 11 08:51:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 (b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758)\nb02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758\nSat Oct 11 08:51:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 (b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758)\nb02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.956 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd6c884-4641-4f47-b554-cab2c8e5a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.958 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 kernel: tapce478624-f0: left promiscuous mode
Oct 11 04:51:43 np0005481065 nova_compute[260935]: 2025-10-11 08:51:43.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19dece38-afd7-47d1-9a94-826ad4740d38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b802a071-6d74-4e9f-8ebe-43201ac9ad1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[136caae8-ce49-436e-b7f1-9bf45619f417]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.035 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25a7357a-db45-4b03-ab5d-6dd8b9e6b766]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452708, 'reachable_time': 20166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303678, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 systemd[1]: run-netns-ovnmeta\x2dce478624\x2df4e7\x2d4bd7\x2d81b2\x2d172d9f364a89.mount: Deactivated successfully.
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.042 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.042 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[60f8ab5a-5461-4d52-934f-043a8b72137a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.043 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 108be440-11bf-41f6-a628-86fbac597b7d in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 unbound from our chassis#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.046 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 882d76f4-8cc7-44c7-ad90-277f4f92e044#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f6784c69-a9ec-4f0f-8485-ed539501d149]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.070 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap882d76f4-81 in ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.072 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap882d76f4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[675f3747-76d9-4a4d-96a7-afd1f85627f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.074 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d924befd-d420-474a-9e9d-4f3be3b65f81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.095 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ce9cd0-0dbb-4197-bffe-292d7b442484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.130 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09a4a4e0-a9a4-4616-a4fa-52f940aa1269]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.174 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fda481-ed10-486a-94be-e0b7e20d3ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 NetworkManager[44960]: <info>  [1760172704.1887] manager: (tap882d76f4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[494537c1-fb88-4a7e-b6ee-ab14960ee54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102064606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.242 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully created port: e389558a-ec7e-4610-aef7-2cfb76da6814 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.245 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fc0ccf-cc66-4ed4-8242-28921f8fc65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.250 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86892cf2-157f-4798-9454-2e34d9e00c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.249 2 DEBUG oslo_concurrency.processutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.273 2 DEBUG nova.compute.provider_tree [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:44 np0005481065 NetworkManager[44960]: <info>  [1760172704.2771] device (tap882d76f4-80): carrier: link connected
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.287 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c7afa085-d203-4fba-8ed5-fca7b8c3fb77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.298 2 DEBUG nova.scheduler.client.report [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abd9a429-113c-4ebd-b83d-6e76bedcd126]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 19017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303753, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.322 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.322 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.323 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b730624b-03f6-4940-a88e-6871ecc69f15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:af7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454897, 'tstamp': 454897}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303754, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.344 2 INFO nova.virt.libvirt.driver [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deleting instance files /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_del#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.345 2 INFO nova.virt.libvirt.driver [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deletion of /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_del complete#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.349 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.362 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9941343f-cdc0-4e9f-8aff-c5f9a1a71e3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 19017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303757, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.365 2 INFO nova.scheduler.client.report [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 5b50d851-e482-40a2-8b7d-d3eca87e15ab#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.412 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[48851c5b-4ab1-4628-9779-bc52bae58a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.449 2 INFO nova.compute.manager [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.450 2 DEBUG oslo.service.loopingcall [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.450 2 DEBUG nova.compute.manager [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.451 2 DEBUG nova.network.neutron [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.466 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.466 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.473 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.474 2 INFO nova.compute.claims [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.477 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95e40e0c-e330-4a01-aff4-3819666ab166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.494 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.495 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.495 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882d76f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:44 np0005481065 NetworkManager[44960]: <info>  [1760172704.5398] manager: (tap882d76f4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:44 np0005481065 kernel: tap882d76f4-80: entered promiscuous mode
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.549 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap882d76f4-80, col_values=(('external_ids', {'iface-id': 'abe57b96-aedd-418b-be8f-7ad9fb9218ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:44 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:44Z|00281|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.581 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/882d76f4-8cc7-44c7-ad90-277f4f92e044.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/882d76f4-8cc7-44c7-ad90-277f4f92e044.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9509c161-fcb0-4220-8eaa-49c9132bbab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.584 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/882d76f4-8cc7-44c7-ad90-277f4f92e044.pid.haproxy
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.585 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'env', 'PROCESS_TAG=haproxy-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/882d76f4-8cc7-44c7-ad90-277f4f92e044.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.591 2 DEBUG nova.compute.manager [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.592 2 DEBUG oslo_concurrency.lockutils [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.592 2 DEBUG oslo_concurrency.lockutils [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.593 2 DEBUG oslo_concurrency.lockutils [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.593 2 DEBUG nova.compute.manager [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-unplugged-ab7592f9-1746-47d1-a702-b7be704ccabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.594 2 DEBUG nova.compute.manager [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-ab7592f9-1746-47d1-a702-b7be704ccabb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.654 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.868 2 DEBUG nova.network.neutron [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.872 2 DEBUG nova.network.neutron [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.893 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.894 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-unplugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.895 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.896 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.897 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.897 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] No waiting events found dispatching network-vif-unplugged-965f1a38-4159-41be-ac4a-f436a8ddeeab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:44 np0005481065 nova_compute[260935]: 2025-10-11 08:51:44.898 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-unplugged-965f1a38-4159-41be-ac4a-f436a8ddeeab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:51:45 np0005481065 podman[303917]: 2025-10-11 08:51:45.006999316 +0000 UTC m=+0.056516829 container create c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:51:45 np0005481065 systemd[1]: Started libpod-conmon-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7.scope.
Oct 11 04:51:45 np0005481065 podman[303917]: 2025-10-11 08:51:44.974762594 +0000 UTC m=+0.024280137 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398392368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.102 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.109 2 DEBUG nova.compute.provider_tree [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43007129255deff7aeaffb2ae4dcdedaaf076ce8807efca09774d5aee6cf146c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.118 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172705.1180646, 872b1c1d-bc87-4123-a599-4d64b89018aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.118 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:45 np0005481065 podman[303917]: 2025-10-11 08:51:45.131807186 +0000 UTC m=+0.181324719 container init c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:51:45 np0005481065 podman[303917]: 2025-10-11 08:51:45.136486538 +0000 UTC m=+0.186004051 container start c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:51:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b0f93bc6-6f60-49d1-92f5-7d07a4149a47 does not exist
Oct 11 04:51:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6377aee4-a59c-4cfd-a1d7-b87f2eb13adc does not exist
Oct 11 04:51:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d5c9b4ef-1bc7-420a-ac48-c3bd69fde34e does not exist
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.157 2 DEBUG nova.scheduler.client.report [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.161 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:45 np0005481065 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : New worker (303954) forked
Oct 11 04:51:45 np0005481065 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : Loading success.
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.165 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172705.1184494, 872b1c1d-bc87-4123-a599-4d64b89018aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.165 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.186 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.189 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.192 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.192 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.233 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.270 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.271 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.293 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:51:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.315 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.396 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.397 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.397 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.397 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.398 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] No waiting events found dispatching network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.398 2 WARNING nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received unexpected event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.398 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-deleted-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-unplugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.401 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.401 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.401 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.401 2 WARNING nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with vm_state active and task_state deleting.
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.403 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Processing event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.403 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.411 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.411 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172705.4113019, 872b1c1d-bc87-4123-a599-4d64b89018aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Resumed (Lifecycle Event)
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.415 2 INFO nova.virt.libvirt.driver [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance spawned successfully.
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.416 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.442 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.445 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.452 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.452 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.453 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.453 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.453 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.454 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.457 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.457 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.458 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Creating image(s)
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.485 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.520 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.554 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.558 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.611 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.617 2 INFO nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 9.47 seconds to spawn the instance on the hypervisor.
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.618 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.622 2 DEBUG nova.policy [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.656 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.658 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.659 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.659 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.695 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.703 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1cecd438-75a3-4140-ad35-7439630b1be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.751 2 INFO nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 10.53 seconds to build instance.
Oct 11 04:51:45 np0005481065 nova_compute[260935]: 2025-10-11 08:51:45.769 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:45 np0005481065 podman[304187]: 2025-10-11 08:51:45.841205858 +0000 UTC m=+0.052591149 container create c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:51:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 185 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 5.5 MiB/s wr, 172 op/s
Oct 11 04:51:45 np0005481065 systemd[1]: Started libpod-conmon-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope.
Oct 11 04:51:45 np0005481065 podman[304187]: 2025-10-11 08:51:45.817437536 +0000 UTC m=+0.028822877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:45 np0005481065 podman[304187]: 2025-10-11 08:51:45.99050381 +0000 UTC m=+0.201889161 container init c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 04:51:46 np0005481065 podman[304187]: 2025-10-11 08:51:46.003039184 +0000 UTC m=+0.214424475 container start c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:51:46 np0005481065 podman[304187]: 2025-10-11 08:51:46.007426068 +0000 UTC m=+0.218811389 container attach c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:51:46 np0005481065 modest_brattain[304215]: 167 167
Oct 11 04:51:46 np0005481065 systemd[1]: libpod-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope: Deactivated successfully.
Oct 11 04:51:46 np0005481065 conmon[304215]: conmon c163337c28db586d0e79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope/container/memory.events
Oct 11 04:51:46 np0005481065 podman[304187]: 2025-10-11 08:51:46.012890663 +0000 UTC m=+0.224275964 container died c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 04:51:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a66f90427bb1c7fb6fca6a262ff65ead13b0b25be553489eee9625c70b749e56-merged.mount: Deactivated successfully.
Oct 11 04:51:46 np0005481065 podman[304187]: 2025-10-11 08:51:46.062521457 +0000 UTC m=+0.273906748 container remove c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.066 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1cecd438-75a3-4140-ad35-7439630b1be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:46 np0005481065 systemd[1]: libpod-conmon-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope: Deactivated successfully.
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.196 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:51:46 np0005481065 podman[304292]: 2025-10-11 08:51:46.339832059 +0000 UTC m=+0.084329496 container create f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.340 2 DEBUG nova.objects.instance [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 1cecd438-75a3-4140-ad35-7439630b1be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.346 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Successfully created port: 7290684c-3823-4e1e-84f5-826316aa4548 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.356 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully updated port: 30f88ebc-fba6-476c-8953-ad64358029bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.377 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.378 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Ensure instance console log exists: /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.378 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.379 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.379 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:46 np0005481065 systemd[1]: Started libpod-conmon-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope.
Oct 11 04:51:46 np0005481065 podman[304292]: 2025-10-11 08:51:46.305722664 +0000 UTC m=+0.050220161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:46 np0005481065 podman[304292]: 2025-10-11 08:51:46.486296661 +0000 UTC m=+0.230794138 container init f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:51:46 np0005481065 podman[304292]: 2025-10-11 08:51:46.497347874 +0000 UTC m=+0.241845291 container start f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:51:46 np0005481065 podman[304292]: 2025-10-11 08:51:46.501514301 +0000 UTC m=+0.246011788 container attach f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.547 2 DEBUG nova.network.neutron [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.567 2 INFO nova.compute.manager [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 2.12 seconds to deallocate network for instance.
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.629 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.629 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.659 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172691.658396, 3283d482-4ea1-400a-9a1b-486479801813 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.659 2 INFO nova.compute.manager [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Stopped (Lifecycle Event)
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.691 2 DEBUG nova.compute.manager [None req-7a2867e9-1006-401c-8ea7-ac3756b1cf7e - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:51:46 np0005481065 nova_compute[260935]: 2025-10-11 08:51:46.751 2 DEBUG oslo_concurrency.processutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.006 2 DEBUG nova.compute.manager [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.007 2 DEBUG oslo_concurrency.lockutils [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.007 2 DEBUG oslo_concurrency.lockutils [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.007 2 DEBUG oslo_concurrency.lockutils [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.008 2 DEBUG nova.compute.manager [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.008 2 WARNING nova.compute.manager [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb for instance with vm_state deleted and task_state None.
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.143 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Successfully updated port: 7290684c-3823-4e1e-84f5-826316aa4548 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.158 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.158 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.159 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:51:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661709572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.212 2 DEBUG oslo_concurrency.processutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.218 2 DEBUG nova.compute.provider_tree [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.236 2 DEBUG nova.scheduler.client.report [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.258 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.298 2 INFO nova.scheduler.client.report [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Deleted allocations for instance 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.359 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.381 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:47 np0005481065 nova_compute[260935]: 2025-10-11 08:51:47.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:47 np0005481065 charming_agnesi[304326]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:51:47 np0005481065 charming_agnesi[304326]: --> relative data size: 1.0
Oct 11 04:51:47 np0005481065 charming_agnesi[304326]: --> All data devices are unavailable
Oct 11 04:51:47 np0005481065 systemd[1]: libpod-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope: Deactivated successfully.
Oct 11 04:51:47 np0005481065 podman[304292]: 2025-10-11 08:51:47.675803801 +0000 UTC m=+1.420301238 container died f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 04:51:47 np0005481065 systemd[1]: libpod-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope: Consumed 1.091s CPU time.
Oct 11 04:51:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f-merged.mount: Deactivated successfully.
Oct 11 04:51:47 np0005481065 podman[304292]: 2025-10-11 08:51:47.756378699 +0000 UTC m=+1.500876146 container remove f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 04:51:47 np0005481065 systemd[1]: libpod-conmon-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope: Deactivated successfully.
Oct 11 04:51:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.198 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully updated port: e389558a-ec7e-4610-aef7-2cfb76da6814 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.214 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.215 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquired lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.215 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:51:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.387 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.442 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updating instance_info_cache with network_info: [{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.465 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.465 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance network_info: |[{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.468 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start _get_guest_xml network_info=[{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.474 2 WARNING nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.480 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.481 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.484 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.485 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.486 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.486 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.487 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.487 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.488 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.488 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.489 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.489 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.490 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.490 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.490 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.491 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.495 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.518956734 +0000 UTC m=+0.059945676 container create 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.539 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.540 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.541 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.541 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.542 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] No waiting events found dispatching network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.542 2 WARNING nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received unexpected event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d for instance with vm_state active and task_state None.#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.542 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-deleted-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.543 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-changed-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.543 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing instance network info cache due to event network-changed-30f88ebc-fba6-476c-8953-ad64358029bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.543 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:48 np0005481065 systemd[1]: Started libpod-conmon-53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109.scope.
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.488067761 +0000 UTC m=+0.029056793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.612016866 +0000 UTC m=+0.153005908 container init 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.62490642 +0000 UTC m=+0.165895362 container start 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.628464921 +0000 UTC m=+0.169453903 container attach 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:51:48 np0005481065 gracious_murdock[304550]: 167 167
Oct 11 04:51:48 np0005481065 systemd[1]: libpod-53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109.scope: Deactivated successfully.
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.635789458 +0000 UTC m=+0.176778430 container died 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-837d24aa7ec2c3db428f68c6c4fa0bfe653a46d2efdda2e9aa9d2925ea6889ee-merged.mount: Deactivated successfully.
Oct 11 04:51:48 np0005481065 podman[304533]: 2025-10-11 08:51:48.681683766 +0000 UTC m=+0.222672718 container remove 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:48 np0005481065 systemd[1]: libpod-conmon-53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109.scope: Deactivated successfully.
Oct 11 04:51:48 np0005481065 podman[304592]: 2025-10-11 08:51:48.934165966 +0000 UTC m=+0.075564958 container create 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:51:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221305782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:48 np0005481065 nova_compute[260935]: 2025-10-11 08:51:48.977 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:48 np0005481065 podman[304592]: 2025-10-11 08:51:48.902205323 +0000 UTC m=+0.043604375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:48 np0005481065 systemd[1]: Started libpod-conmon-470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8.scope.
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.023 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.032 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:49 np0005481065 podman[304592]: 2025-10-11 08:51:49.068170286 +0000 UTC m=+0.209569278 container init 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:51:49 np0005481065 podman[304592]: 2025-10-11 08:51:49.079299401 +0000 UTC m=+0.220698393 container start 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 04:51:49 np0005481065 podman[304592]: 2025-10-11 08:51:49.082716158 +0000 UTC m=+0.224115150 container attach 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.297 2 DEBUG nova.compute.manager [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-changed-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.300 2 DEBUG nova.compute.manager [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Refreshing instance network info cache due to event network-changed-7290684c-3823-4e1e-84f5-826316aa4548. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.300 2 DEBUG oslo_concurrency.lockutils [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.300 2 DEBUG oslo_concurrency.lockutils [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.301 2 DEBUG nova.network.neutron [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Refreshing network info cache for port 7290684c-3823-4e1e-84f5-826316aa4548 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005440805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.478 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.480 2 DEBUG nova.virt.libvirt.vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-45605011',display_name='tempest-ImagesTestJSON-server-45605011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-45605011',id=38,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-bbdhfx2h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task
_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:45Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=1cecd438-75a3-4140-ad35-7439630b1be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.480 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.481 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.482 2 DEBUG nova.objects.instance [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cecd438-75a3-4140-ad35-7439630b1be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.551 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <uuid>1cecd438-75a3-4140-ad35-7439630b1be2</uuid>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <name>instance-00000026</name>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesTestJSON-server-45605011</nova:name>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:48</nova:creationTime>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <nova:port uuid="7290684c-3823-4e1e-84f5-826316aa4548">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <entry name="serial">1cecd438-75a3-4140-ad35-7439630b1be2</entry>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <entry name="uuid">1cecd438-75a3-4140-ad35-7439630b1be2</entry>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1cecd438-75a3-4140-ad35-7439630b1be2_disk">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1cecd438-75a3-4140-ad35-7439630b1be2_disk.config">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:7f:41:62"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <target dev="tap7290684c-38"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/console.log" append="off"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:49 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:49 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:49 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:49 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.563 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Preparing to wait for external event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.563 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.564 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.564 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.565 2 DEBUG nova.virt.libvirt.vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-45605011',display_name='tempest-ImagesTestJSON-server-45605011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-45605011',id=38,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-bbdhfx2h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:45Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=1cecd438-75a3-4140-ad35-7439630b1be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.566 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.568 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.569 2 DEBUG os_vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7290684c-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7290684c-38, col_values=(('external_ids', {'iface-id': '7290684c-3823-4e1e-84f5-826316aa4548', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:41:62', 'vm-uuid': '1cecd438-75a3-4140-ad35-7439630b1be2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:49 np0005481065 NetworkManager[44960]: <info>  [1760172709.5820] manager: (tap7290684c-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.591 2 INFO os_vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38')
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.689 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.690 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.691 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:7f:41:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.692 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Using config drive
Oct 11 04:51:49 np0005481065 nova_compute[260935]: 2025-10-11 08:51:49.730 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:51:49 np0005481065 epic_robinson[304624]: {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:    "0": [
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:        {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "devices": [
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "/dev/loop3"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            ],
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_name": "ceph_lv0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_size": "21470642176",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "name": "ceph_lv0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "tags": {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cluster_name": "ceph",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.crush_device_class": "",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.encrypted": "0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osd_id": "0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.type": "block",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.vdo": "0"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            },
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "type": "block",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "vg_name": "ceph_vg0"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:        }
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:    ],
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:    "1": [
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:        {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "devices": [
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "/dev/loop4"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            ],
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_name": "ceph_lv1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_size": "21470642176",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "name": "ceph_lv1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "tags": {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cluster_name": "ceph",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.crush_device_class": "",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.encrypted": "0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osd_id": "1",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.type": "block",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.vdo": "0"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            },
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "type": "block",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "vg_name": "ceph_vg1"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:        }
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:    ],
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:    "2": [
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:        {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "devices": [
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "/dev/loop5"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            ],
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_name": "ceph_lv2",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_size": "21470642176",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "name": "ceph_lv2",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "tags": {
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.cluster_name": "ceph",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.crush_device_class": "",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.encrypted": "0",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osd_id": "2",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.type": "block",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:                "ceph.vdo": "0"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            },
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "type": "block",
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:            "vg_name": "ceph_vg2"
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:        }
Oct 11 04:51:49 np0005481065 epic_robinson[304624]:    ]
Oct 11 04:51:49 np0005481065 epic_robinson[304624]: }
Oct 11 04:51:49 np0005481065 systemd[1]: libpod-470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8.scope: Deactivated successfully.
Oct 11 04:51:49 np0005481065 podman[304592]: 2025-10-11 08:51:49.894709091 +0000 UTC m=+1.036108083 container died 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 04:51:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784-merged.mount: Deactivated successfully.
Oct 11 04:51:49 np0005481065 podman[304592]: 2025-10-11 08:51:49.98307986 +0000 UTC m=+1.124478852 container remove 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:51:49 np0005481065 systemd[1]: libpod-conmon-470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8.scope: Deactivated successfully.
Oct 11 04:51:50 np0005481065 podman[304833]: 2025-10-11 08:51:50.892575871 +0000 UTC m=+0.077968186 container create d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:51:50 np0005481065 podman[304833]: 2025-10-11 08:51:50.86143408 +0000 UTC m=+0.046826445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:50 np0005481065 systemd[1]: Started libpod-conmon-d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed.scope.
Oct 11 04:51:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:51 np0005481065 podman[304833]: 2025-10-11 08:51:51.014443788 +0000 UTC m=+0.199836153 container init d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:51:51 np0005481065 podman[304833]: 2025-10-11 08:51:51.027091745 +0000 UTC m=+0.212484050 container start d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 04:51:51 np0005481065 podman[304833]: 2025-10-11 08:51:51.031609243 +0000 UTC m=+0.217001558 container attach d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 04:51:51 np0005481065 happy_bell[304850]: 167 167
Oct 11 04:51:51 np0005481065 systemd[1]: libpod-d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed.scope: Deactivated successfully.
Oct 11 04:51:51 np0005481065 podman[304833]: 2025-10-11 08:51:51.038063575 +0000 UTC m=+0.223455890 container died d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:51:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a1ba4542a2f3450a932d705eebd67e3f05d2d82dfa276d52f2cc6f6d67f243e2-merged.mount: Deactivated successfully.
Oct 11 04:51:51 np0005481065 podman[304833]: 2025-10-11 08:51:51.094197193 +0000 UTC m=+0.279589508 container remove d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:51:51 np0005481065 systemd[1]: libpod-conmon-d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed.scope: Deactivated successfully.
Oct 11 04:51:51 np0005481065 podman[304873]: 2025-10-11 08:51:51.367900223 +0000 UTC m=+0.065095622 container create 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.387 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Creating config drive at /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.393 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ehh3bmo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:51 np0005481065 systemd[1]: Started libpod-conmon-1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7.scope.
Oct 11 04:51:51 np0005481065 podman[304873]: 2025-10-11 08:51:51.342225027 +0000 UTC m=+0.039420486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:51:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:51 np0005481065 podman[304873]: 2025-10-11 08:51:51.485752386 +0000 UTC m=+0.182947775 container init 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 04:51:51 np0005481065 podman[304873]: 2025-10-11 08:51:51.492583169 +0000 UTC m=+0.189778568 container start 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:51:51 np0005481065 podman[304873]: 2025-10-11 08:51:51.497358494 +0000 UTC m=+0.194553893 container attach 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.551 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ehh3bmo" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.576 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.579 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.655 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.729 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.730 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deleting local config drive /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.739 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Releasing lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.739 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance network_info: |[{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.740 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.740 2 DEBUG nova.network.neutron [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing network info cache for port 30f88ebc-fba6-476c-8953-ad64358029bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.743 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start _get_guest_xml network_info=[{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.758 2 WARNING nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.765 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.765 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.769 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.770 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.770 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.770 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.775 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:51 np0005481065 kernel: tap7290684c-38: entered promiscuous mode
Oct 11 04:51:51 np0005481065 NetworkManager[44960]: <info>  [1760172711.8021] manager: (tap7290684c-38): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct 11 04:51:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:51Z|00282|binding|INFO|Claiming lport 7290684c-3823-4e1e-84f5-826316aa4548 for this chassis.
Oct 11 04:51:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:51Z|00283|binding|INFO|7290684c-3823-4e1e-84f5-826316aa4548: Claiming fa:16:3e:7f:41:62 10.100.0.9
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.814 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:41:62 10.100.0.9'], port_security=['fa:16:3e:7f:41:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1cecd438-75a3-4140-ad35-7439630b1be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7290684c-3823-4e1e-84f5-826316aa4548) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.815 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7290684c-3823-4e1e-84f5-826316aa4548 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.816 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.827 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bb36d8-ef14-4ee0-b657-c91634f730e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.829 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.830 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[086ee292-f6a6-417a-b588-2d4d1860eb88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.832 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[782b1fa5-c5ef-4738-ae22-181b3e8e59e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 systemd-machined[215705]: New machine qemu-41-instance-00000026.
Oct 11 04:51:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:51Z|00284|binding|INFO|Setting lport 7290684c-3823-4e1e-84f5-826316aa4548 ovn-installed in OVS
Oct 11 04:51:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:51Z|00285|binding|INFO|Setting lport 7290684c-3823-4e1e-84f5-826316aa4548 up in Southbound
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.846 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[15df475c-59fa-458d-91a2-48f1bd6666c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:51 np0005481065 nova_compute[260935]: 2025-10-11 08:51:51.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:51 np0005481065 systemd[1]: Started Virtual Machine qemu-41-instance-00000026.
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.862 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7074563-c576-46af-9bcc-5d96a51fe2d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 systemd-udevd[304951]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:51 np0005481065 NetworkManager[44960]: <info>  [1760172711.8896] device (tap7290684c-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:51 np0005481065 NetworkManager[44960]: <info>  [1760172711.8905] device (tap7290684c-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.917 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[65e2eec6-01a2-4c6d-b858-8baa522519bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 NetworkManager[44960]: <info>  [1760172711.9238] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Oct 11 04:51:51 np0005481065 systemd-udevd[304956]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.924 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20e5f3f2-05bd-4dce-a26e-02bcbf385c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.960 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b41f58-1574-4e99-add4-518a87f1de85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[863c745a-d003-4e32-853f-bfa3ef74a6f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:51 np0005481065 NetworkManager[44960]: <info>  [1760172711.9881] device (tap9bac3530-90): carrier: link connected
Oct 11 04:51:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.994 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6b8dbb-95e4-4507-98d7-7488e6c3a534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fd28f4-4baf-4bb1-87fb-683468c12a1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455669, 'reachable_time': 40500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305000, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[769dea0c-1432-458f-ab80-2549ec683a59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455669, 'tstamp': 455669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305001, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.051 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3529771c-396a-43c9-8784-51676e708a2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455669, 'reachable_time': 40500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305002, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.086 2 DEBUG nova.compute.manager [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-changed-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.087 2 DEBUG nova.compute.manager [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing instance network info cache due to event network-changed-e389558a-ec7e-4610-aef7-2cfb76da6814. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.087 2 DEBUG oslo_concurrency.lockutils [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.099 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17704a91-af3c-4d55-bd66-33ab5b556e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.179 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b471b6-a95e-4644-a9cd-01193a7b8f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.181 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.182 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.182 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 NetworkManager[44960]: <info>  [1760172712.2245] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 11 04:51:52 np0005481065 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.230 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[13649bc5-23e6-4d29-baee-07ca80189d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:52 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:52Z|00286|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.234 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.236 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.236 2 DEBUG nova.compute.manager [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.236 2 DEBUG oslo_concurrency.lockutils [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.237 2 DEBUG oslo_concurrency.lockutils [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.237 2 DEBUG oslo_concurrency.lockutils [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.237 2 DEBUG nova.compute.manager [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Processing event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147531894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.284 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.311 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.322 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]: {
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "osd_id": 2,
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "type": "bluestore"
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:    },
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "osd_id": 0,
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "type": "bluestore"
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:    },
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "osd_id": 1,
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:        "type": "bluestore"
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]:    }
Oct 11 04:51:52 np0005481065 intelligent_perlman[304890]: }
Oct 11 04:51:52 np0005481065 systemd[1]: libpod-1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7.scope: Deactivated successfully.
Oct 11 04:51:52 np0005481065 podman[304873]: 2025-10-11 08:51:52.516585798 +0000 UTC m=+1.213781197 container died 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.555 2 DEBUG nova.network.neutron [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updated VIF entry in instance network info cache for port 7290684c-3823-4e1e-84f5-826316aa4548. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.557 2 DEBUG nova.network.neutron [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updating instance_info_cache with network_info: [{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994-merged.mount: Deactivated successfully.
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.576 2 DEBUG oslo_concurrency.lockutils [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:52 np0005481065 podman[304873]: 2025-10-11 08:51:52.594219103 +0000 UTC m=+1.291414472 container remove 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:51:52 np0005481065 systemd[1]: libpod-conmon-1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7.scope: Deactivated successfully.
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:51:52 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cab240f1-85c2-484a-9dc1-041b5e368c4b does not exist
Oct 11 04:51:52 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8382618e-0e89-478b-917d-938ebf5d3d85 does not exist
Oct 11 04:51:52 np0005481065 podman[305153]: 2025-10-11 08:51:52.665010065 +0000 UTC m=+0.066139291 container create f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:51:52 np0005481065 podman[305153]: 2025-10-11 08:51:52.622628146 +0000 UTC m=+0.023757372 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:52 np0005481065 systemd[1]: Started libpod-conmon-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e.scope.
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:51:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241342263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:51:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddfd5ce54df1d1aac4c1e654420057838a802df3d512bb77cf4087a954ac2fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.770 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.771 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.772 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.773 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.774 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.775 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.775 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.777 2 DEBUG nova.objects.instance [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 205bee5e-165a-468c-87d3-db44e03ace3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:52 np0005481065 podman[305153]: 2025-10-11 08:51:52.786253304 +0000 UTC m=+0.187382540 container init f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:51:52 np0005481065 podman[305153]: 2025-10-11 08:51:52.792098429 +0000 UTC m=+0.193227625 container start f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:51:52 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : New worker (305224) forked
Oct 11 04:51:52 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : Loading success.
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.897 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <uuid>205bee5e-165a-468c-87d3-db44e03ace3e</uuid>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <name>instance-00000025</name>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersTestMultiNic-server-318864694</nova:name>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:51:51</nova:creationTime>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:user uuid="fb27f51b5ffd414ab5ddbea179ada690">tempest-ServersTestMultiNic-65661968-project-member</nova:user>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:project uuid="f84de17ba2c5470fbc4c7fe809e7d7b7">tempest-ServersTestMultiNic-65661968</nova:project>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:port uuid="30f88ebc-fba6-476c-8953-ad64358029bb">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.192" ipVersion="4"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <nova:port uuid="e389558a-ec7e-4610-aef7-2cfb76da6814">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.1.173" ipVersion="4"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <entry name="serial">205bee5e-165a-468c-87d3-db44e03ace3e</entry>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <entry name="uuid">205bee5e-165a-468c-87d3-db44e03ace3e</entry>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/205bee5e-165a-468c-87d3-db44e03ace3e_disk">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/205bee5e-165a-468c-87d3-db44e03ace3e_disk.config">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:84:a9:6f"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <target dev="tap30f88ebc-fb"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:67:bd:db"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <target dev="tape389558a-ec"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/console.log" append="off"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:51:52 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:51:52 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:51:52 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:51:52 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.911 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Preparing to wait for external event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.911 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.912 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.912 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.912 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Preparing to wait for external event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.913 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.913 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.913 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.915 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.915 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.916 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.918 2 DEBUG os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30f88ebc-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30f88ebc-fb, col_values=(('external_ids', {'iface-id': '30f88ebc-fba6-476c-8953-ad64358029bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:a9:6f', 'vm-uuid': '205bee5e-165a-468c-87d3-db44e03ace3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 NetworkManager[44960]: <info>  [1760172712.9298] manager: (tap30f88ebc-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.935 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172712.9346688, 1cecd438-75a3-4140-ad35-7439630b1be2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.940 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.941 2 INFO os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb')#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.943 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.954 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.955 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.955 2 DEBUG os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.959 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.963 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape389558a-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape389558a-ec, col_values=(('external_ids', {'iface-id': 'e389558a-ec7e-4610-aef7-2cfb76da6814', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:bd:db', 'vm-uuid': '205bee5e-165a-468c-87d3-db44e03ace3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 NetworkManager[44960]: <info>  [1760172712.9681] manager: (tape389558a-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.972 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.974 2 INFO nova.virt.libvirt.driver [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance spawned successfully.#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.975 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.977 2 INFO os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec')#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.996 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.997 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172712.9360156, 1cecd438-75a3-4140-ad35-7439630b1be2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:52 np0005481065 nova_compute[260935]: 2025-10-11 08:51:52.998 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.004 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.004 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.005 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.005 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.006 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.006 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.013 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.020 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172712.951538, 1cecd438-75a3-4140-ad35-7439630b1be2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.020 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.040 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.043 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.177 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.177 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.178 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:84:a9:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.178 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:67:bd:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.179 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Using config drive#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.212 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.222 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.224 2 INFO nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.225 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.305 2 INFO nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 8.85 seconds to build instance.#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.326 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:51:53 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.520 2 DEBUG nova.network.neutron [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updated VIF entry in instance network info cache for port 30f88ebc-fba6-476c-8953-ad64358029bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.520 2 DEBUG nova.network.neutron [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.549 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.550 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-deleted-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.550 2 DEBUG oslo_concurrency.lockutils [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.550 2 DEBUG nova.network.neutron [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing network info cache for port e389558a-ec7e-4610-aef7-2cfb76da6814 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.634 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Creating config drive at /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.639 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxw46wcj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.778 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxw46wcj" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.807 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.813 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 181 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 185 op/s
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.917 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.918 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:53 np0005481065 nova_compute[260935]: 2025-10-11 08:51:53.940 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:51:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:53Z|00287|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 04:51:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:53Z|00288|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.024 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.025 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deleting local config drive /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config because it was imported into RBD.#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.040 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.041 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.055 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.056 2 INFO nova.compute.claims [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.1018] manager: (tap30f88ebc-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 11 04:51:54 np0005481065 kernel: tap30f88ebc-fb: entered promiscuous mode
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00289|binding|INFO|Claiming lport 30f88ebc-fba6-476c-8953-ad64358029bb for this chassis.
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00290|binding|INFO|30f88ebc-fba6-476c-8953-ad64358029bb: Claiming fa:16:3e:84:a9:6f 10.100.0.192
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.1275] device (tap30f88ebc-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.1286] device (tap30f88ebc-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.124 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:a9:6f 10.100.0.192'], port_security=['fa:16:3e:84:a9:6f 10.100.0.192'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.192/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=310a160b-9164-4639-a339-2ba6c58b1709, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30f88ebc-fba6-476c-8953-ad64358029bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.129 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30f88ebc-fba6-476c-8953-ad64358029bb in datapath 98ec9321-44f2-4cc2-a0a2-531598e629d2 bound to our chassis#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.131 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98ec9321-44f2-4cc2-a0a2-531598e629d2#033[00m
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.1365] manager: (tape389558a-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf7788c-79f5-458e-9c70-2ecf7bc9ce04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.149 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap98ec9321-41 in ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.154 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap98ec9321-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.154 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f0e22-a67a-47e5-b14a-41662afb7271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.155 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7da3804e-905b-4d3d-ad92-89b006ab1fb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 kernel: tape389558a-ec: entered promiscuous mode
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.1610] device (tape389558a-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.1637] device (tape389558a-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00291|binding|INFO|Claiming lport e389558a-ec7e-4610-aef7-2cfb76da6814 for this chassis.
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00292|binding|INFO|e389558a-ec7e-4610-aef7-2cfb76da6814: Claiming fa:16:3e:67:bd:db 10.100.1.173
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00293|binding|INFO|Setting lport 30f88ebc-fba6-476c-8953-ad64358029bb ovn-installed in OVS
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00294|binding|INFO|Setting lport 30f88ebc-fba6-476c-8953-ad64358029bb up in Southbound
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.171 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1b3cd5-d4b2-42ff-a102-4e12752e9039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.174 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bd:db 10.100.1.173'], port_security=['fa:16:3e:67:bd:db 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bdcaf9f8-32cf-46cd-8760-089ea8b1e312, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e389558a-ec7e-4610-aef7-2cfb76da6814) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:51:54 np0005481065 systemd-machined[215705]: New machine qemu-42-instance-00000025.
Oct 11 04:51:54 np0005481065 systemd[1]: Started Virtual Machine qemu-42-instance-00000025.
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.207 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[08936cab-dd05-419d-80b9-c7a7f1ef1367]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00295|binding|INFO|Setting lport e389558a-ec7e-4610-aef7-2cfb76da6814 ovn-installed in OVS
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00296|binding|INFO|Setting lport e389558a-ec7e-4610-aef7-2cfb76da6814 up in Southbound
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.251 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd545d3-4979-41a1-a900-2d437d6f3691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f57db02e-685a-4fda-a245-e9641e00b824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.2604] manager: (tap98ec9321-40): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.322 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b81eeb8b-6197-4b20-9126-7fd99895f2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG nova.compute.manager [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG oslo_concurrency.lockutils [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG oslo_concurrency.lockutils [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG oslo_concurrency.lockutils [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.327 2 DEBUG nova.compute.manager [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] No waiting events found dispatching network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.327 2 WARNING nova.compute.manager [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received unexpected event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.330 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eff8777e-fe57-4f2e-af92-d0c6de414856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.3634] device (tap98ec9321-40): carrier: link connected
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.375 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4cc39-5a07-4932-b03d-0a4252e31326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.404 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[667131ad-2c6d-48e7-a288-c0406465a7a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98ec9321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:82:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455906, 'reachable_time': 18858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305332, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.425 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5475861-723d-4f65-bac8-aae0b21394b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:821b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455906, 'tstamp': 455906}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305333, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.431 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.453 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6155aa3c-5bec-450b-99af-71fe94956a04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98ec9321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:82:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455906, 'reachable_time': 18858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305334, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.498 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2fc27f-7ed5-4a20-9e70-fdf972877daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.522 2 DEBUG nova.compute.manager [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.523 2 DEBUG oslo_concurrency.lockutils [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.523 2 DEBUG oslo_concurrency.lockutils [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.523 2 DEBUG oslo_concurrency.lockutils [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.524 2 DEBUG nova.compute.manager [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Processing event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.592 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4f8e13-894e-4c5a-a359-267e6788f69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.594 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98ec9321-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.594 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.595 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98ec9321-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 NetworkManager[44960]: <info>  [1760172714.5977] manager: (tap98ec9321-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 11 04:51:54 np0005481065 kernel: tap98ec9321-40: entered promiscuous mode
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.602 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98ec9321-40, col_values=(('external_ids', {'iface-id': 'e3dc9ca3-46c1-4422-a2ef-638d9e041fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:54Z|00297|binding|INFO|Releasing lport e3dc9ca3-46c1-4422-a2ef-638d9e041fda from this chassis (sb_readonly=0)
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.605 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/98ec9321-44f2-4cc2-a0a2-531598e629d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/98ec9321-44f2-4cc2-a0a2-531598e629d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.605 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1509a69e-7943-4c18-93df-69bc959fc88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.606 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-98ec9321-44f2-4cc2-a0a2-531598e629d2
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/98ec9321-44f2-4cc2-a0a2-531598e629d2.pid.haproxy
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 98ec9321-44f2-4cc2-a0a2-531598e629d2
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.606 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'env', 'PROCESS_TAG=haproxy-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/98ec9321-44f2-4cc2-a0a2-531598e629d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:51:54
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control']
Oct 11 04:51:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:51:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:51:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264999212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.909 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.917 2 DEBUG nova.compute.provider_tree [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.932 2 DEBUG nova.scheduler.client.report [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.950 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:54 np0005481065 nova_compute[260935]: 2025-10-11 08:51:54.951 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:51:54 np0005481065 podman[305427]: 2025-10-11 08:51:54.990935143 +0000 UTC m=+0.061851760 container create cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.002 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.002 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.029 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:51:55 np0005481065 systemd[1]: Started libpod-conmon-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053.scope.
Oct 11 04:51:55 np0005481065 podman[305427]: 2025-10-11 08:51:54.961191082 +0000 UTC m=+0.032107709 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.051 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:51:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:51:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 16K writes, 65K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 5101 syncs, 3.19 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.21 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4028 syncs, 2.53 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:51:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3306a34e27badc402a040902ece0ef4a86df5e07e08134f4bb9517ea1ccf4987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:55 np0005481065 podman[305427]: 2025-10-11 08:51:55.082678827 +0000 UTC m=+0.153595464 container init cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:51:55 np0005481065 podman[305427]: 2025-10-11 08:51:55.09372355 +0000 UTC m=+0.164640167 container start cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:51:55 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : New worker (305459) forked
Oct 11 04:51:55 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : Loading success.
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:51:55 np0005481065 podman[305447]: 2025-10-11 08:51:55.153301545 +0000 UTC m=+0.079933522 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.158 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e389558a-ec7e-4610-aef7-2cfb76da6814 in datapath 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 unbound from our chassis#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.162 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.173 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[124725f0-d70e-4e65-ad6c-19e901a10474]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.174 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aa0dec9-51 in ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.176 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aa0dec9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74d1edab-b4b2-4b63-94a5-899d4d25eae7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.177 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3e8c21-fa2b-4067-9d89-023436bc679f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.185 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.186 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.186 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Creating image(s)#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.200 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[563a7dc0-efa8-43f8-9a91-7b0556be591f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.213 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e29c8a-40f0-4d51-b1af-1f3e97433701]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.292 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.305 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c6ba7e-cf37-4115-8aef-de4bd5aa716a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 systemd-udevd[305424]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:51:55 np0005481065 NetworkManager[44960]: <info>  [1760172715.3131] manager: (tap7aa0dec9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f85c6b09-89de-421c-a586-e013c389ed17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.348 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.348 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d45ca3-d2ac-40c7-a2b4-0fdbd3ac48c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.352 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb21b5-9031-407d-be65-69c493566bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.368 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:55 np0005481065 NetworkManager[44960]: <info>  [1760172715.3823] device (tap7aa0dec9-50): carrier: link connected
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.393 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cca89a-8110-4d93-8706-8f9b1bd4e7e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.397 2 DEBUG nova.policy [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f37cd0ba15f412b88192a506c5cec79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.401 2 DEBUG nova.network.neutron [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updated VIF entry in instance network info cache for port e389558a-ec7e-4610-aef7-2cfb76da6814. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.402 2 DEBUG nova.network.neutron [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.415 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14d11d3f-1d35-47da-9589-4845bffadb89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aa0dec9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:23:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456008, 'reachable_time': 40647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305555, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.434 2 DEBUG oslo_concurrency.lockutils [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.435 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.435 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.436 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.436 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c90e32ab-0389-4690-9eab-6875e3cfac9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:230a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456008, 'tstamp': 456008}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305556, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.461 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.468 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c717bb68-4154-49a2-ad65-1176a86d2f66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aa0dec9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:23:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456008, 'reachable_time': 40647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305574, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.508 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca05aea-5e96-4613-8dc8-d2341c928c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.621 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172715.6194649, 205bee5e-165a-468c-87d3-db44e03ace3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.623 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Started (Lifecycle Event)#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.629 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb411c8-33ab-47df-937e-afc9f9b66b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aa0dec9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.632 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.632 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172700.6303284, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.632 2 INFO nova.compute.manager [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.633 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aa0dec9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:55 np0005481065 NetworkManager[44960]: <info>  [1760172715.6363] manager: (tap7aa0dec9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Oct 11 04:51:55 np0005481065 kernel: tap7aa0dec9-50: entered promiscuous mode
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.643 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aa0dec9-50, col_values=(('external_ids', {'iface-id': '90d43dcf-2ebb-421e-9ea5-e973adcf2f96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:51:55 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:55Z|00298|binding|INFO|Releasing lport 90d43dcf-2ebb-421e-9ea5-e973adcf2f96 from this chassis (sb_readonly=0)
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.664 2 DEBUG nova.compute.manager [None req-28749f6d-c2e4-4b3b-b5b9-867e3c5d28f0 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.667 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.678 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172715.6203399, 205bee5e-165a-468c-87d3-db44e03ace3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.679 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.681 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.685 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9edda1-f67c-4a8e-9644-75f9a00f3ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.686 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.pid.haproxy
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:51:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.687 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'env', 'PROCESS_TAG=haproxy-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.704 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.707 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.734 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.785 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:51:55 np0005481065 nova_compute[260935]: 2025-10-11 08:51:55.890 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] resizing rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:51:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 181 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 163 op/s
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.030 2 DEBUG nova.objects.instance [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'migration_context' on Instance uuid 840d2a1b-48bb-42ec-aa12-067e1b65ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.053 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.054 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Ensure instance console log exists: /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.054 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.055 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.055 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.127 2 DEBUG nova.compute.manager [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.138 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Successfully created port: bf26054f-43d5-471a-bca4-e7948b4409d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:51:56 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.166 2 INFO nova.compute.manager [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] instance snapshotting#033[00m
Oct 11 04:51:56 np0005481065 podman[305701]: 2025-10-11 08:51:56.172713883 +0000 UTC m=+0.112134262 container create 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 04:51:56 np0005481065 podman[305701]: 2025-10-11 08:51:56.106558652 +0000 UTC m=+0.045979021 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:51:56 np0005481065 systemd[1]: Started libpod-conmon-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c.scope.
Oct 11 04:51:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:51:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a65300d208cb9158423dc57616ca08fb30633e13196b7c893ce5c6964e246ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:51:56 np0005481065 podman[305701]: 2025-10-11 08:51:56.285503183 +0000 UTC m=+0.224923572 container init 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 04:51:56 np0005481065 podman[305701]: 2025-10-11 08:51:56.296521784 +0000 UTC m=+0.235942153 container start 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:51:56 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:51:56 np0005481065 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : New worker (305722) forked
Oct 11 04:51:56 np0005481065 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : Loading success.
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.544 2 INFO nova.virt.libvirt.driver [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Beginning live snapshot process#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.742 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.751 2 DEBUG nova.virt.libvirt.imagebackend [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.807 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.808 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.809 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.809 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.810 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Processing event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.810 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.811 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.811 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.812 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.812 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.813 2 WARNING nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.814 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.831 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172716.8305564, 205bee5e-165a-468c-87d3-db44e03ace3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.831 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.834 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.839 2 INFO nova.virt.libvirt.driver [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance spawned successfully.#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.840 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:51:56 np0005481065 nova_compute[260935]: 2025-10-11 08:51:56.972 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(44f23a098021423198ece7a5a6ca7e99) on rbd image(1cecd438-75a3-4140-ad35-7439630b1be2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.012 2 DEBUG nova.compute.manager [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.012 2 DEBUG oslo_concurrency.lockutils [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.012 2 DEBUG oslo_concurrency.lockutils [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.013 2 DEBUG oslo_concurrency.lockutils [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.013 2 DEBUG nova.compute.manager [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.013 2 WARNING nova.compute.manager [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.031 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.035 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.035 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.036 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.036 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.036 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.037 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.041 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:51:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:57Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:33:2e 10.100.0.6
Oct 11 04:51:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:51:57Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:33:2e 10.100.0.6
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.149 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.253 2 INFO nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 15.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.254 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.372 2 INFO nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 16.87 seconds to build instance.#033[00m
Oct 11 04:51:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 11 04:51:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.448 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/1cecd438-75a3-4140-ad35-7439630b1be2_disk@44f23a098021423198ece7a5a6ca7e99 to images/1eaca523-07cd-4bcb-9efb-5df816d086ce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.508 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.597 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/1eaca523-07cd-4bcb-9efb-5df816d086ce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.671 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Successfully updated port: bf26054f-43d5-471a-bca4-e7948b4409d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.884 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(44f23a098021423198ece7a5a6ca7e99) on rbd image(1cecd438-75a3-4140-ad35-7439630b1be2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:51:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 229 op/s
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.931 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.931 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.931 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:51:57 np0005481065 nova_compute[260935]: 2025-10-11 08:51:57.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:51:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:51:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 11 04:51:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 11 04:51:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.422 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(1eaca523-07cd-4bcb-9efb-5df816d086ce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.483 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172703.4819636, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.484 2 INFO nova.compute.manager [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.659 2 DEBUG nova.compute.manager [None req-29c1ab44-e3a5-4342-bb6d-04a7486cdb71 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:51:58 np0005481065 nova_compute[260935]: 2025-10-11 08:51:58.940 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.361 2 DEBUG nova.compute.manager [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.362 2 DEBUG nova.compute.manager [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing instance network info cache due to event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.362 2 DEBUG oslo_concurrency.lockutils [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:51:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 11 04:51:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 11 04:51:59 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 1eaca523-07cd-4bcb-9efb-5df816d086ce could not be found.
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 1eaca523-07cd-4bcb-9efb-5df816d086ce
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver 
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver 
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 1eaca523-07cd-4bcb-9efb-5df816d086ce could not be found.
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver #033[00m
Oct 11 04:51:59 np0005481065 nova_compute[260935]: 2025-10-11 08:51:59.671 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(snap) on rbd image(1eaca523-07cd-4bcb-9efb-5df816d086ce) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:51:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 8.5 MiB/s wr, 363 op/s
Oct 11 04:52:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:52:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 17K writes, 66K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 17K writes, 5508 syncs, 3.14 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.63 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4092 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:52:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 11 04:52:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 11 04:52:00 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 11 04:52:00 np0005481065 nova_compute[260935]: 2025-10-11 08:52:00.955 2 WARNING nova.compute.manager [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Image not found during snapshot: nova.exception.ImageNotFound: Image 1eaca523-07cd-4bcb-9efb-5df816d086ce could not be found.#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.386 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.389 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.390 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.391 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.391 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.391 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.391 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.392 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.393 2 INFO nova.compute.manager [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Terminating instance#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.395 2 DEBUG nova.compute.manager [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:52:01 np0005481065 kernel: tap30f88ebc-fb (unregistering): left promiscuous mode
Oct 11 04:52:01 np0005481065 NetworkManager[44960]: <info>  [1760172721.4466] device (tap30f88ebc-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:01Z|00299|binding|INFO|Releasing lport 30f88ebc-fba6-476c-8953-ad64358029bb from this chassis (sb_readonly=0)
Oct 11 04:52:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:01Z|00300|binding|INFO|Setting lport 30f88ebc-fba6-476c-8953-ad64358029bb down in Southbound
Oct 11 04:52:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:01Z|00301|binding|INFO|Removing iface tap30f88ebc-fb ovn-installed in OVS
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 kernel: tape389558a-ec (unregistering): left promiscuous mode
Oct 11 04:52:01 np0005481065 NetworkManager[44960]: <info>  [1760172721.4825] device (tape389558a-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.505 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:a9:6f 10.100.0.192'], port_security=['fa:16:3e:84:a9:6f 10.100.0.192'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.192/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=310a160b-9164-4639-a339-2ba6c58b1709, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30f88ebc-fba6-476c-8953-ad64358029bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.507 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30f88ebc-fba6-476c-8953-ad64358029bb in datapath 98ec9321-44f2-4cc2-a0a2-531598e629d2 unbound from our chassis#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.510 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 98ec9321-44f2-4cc2-a0a2-531598e629d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.511 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6853e8-8b44-4fea-93a8-ef8497486c4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.512 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 namespace which is not needed anymore#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:01Z|00302|binding|INFO|Releasing lport e389558a-ec7e-4610-aef7-2cfb76da6814 from this chassis (sb_readonly=0)
Oct 11 04:52:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:01Z|00303|binding|INFO|Setting lport e389558a-ec7e-4610-aef7-2cfb76da6814 down in Southbound
Oct 11 04:52:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:01Z|00304|binding|INFO|Removing iface tape389558a-ec ovn-installed in OVS
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 11 04:52:01 np0005481065 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Consumed 5.669s CPU time.
Oct 11 04:52:01 np0005481065 systemd-machined[215705]: Machine qemu-42-instance-00000025 terminated.
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 NetworkManager[44960]: <info>  [1760172721.6302] manager: (tape389558a-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.653 2 INFO nova.virt.libvirt.driver [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance destroyed successfully.#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.654 2 DEBUG nova.objects.instance [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'resources' on Instance uuid 205bee5e-165a-468c-87d3-db44e03ace3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.676 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bd:db 10.100.1.173'], port_security=['fa:16:3e:67:bd:db 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bdcaf9f8-32cf-46cd-8760-089ea8b1e312, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e389558a-ec7e-4610-aef7-2cfb76da6814) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.672 2 DEBUG nova.virt.libvirt.vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.673 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.674 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.674 2 DEBUG os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30f88ebc-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:01 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:01 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:01 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [WARNING]  (305457) : Exiting Master process...
Oct 11 04:52:01 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [ALERT]    (305457) : Current worker (305459) exited with code 143 (Terminated)
Oct 11 04:52:01 np0005481065 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [WARNING]  (305457) : All workers exited. Exiting... (0)
Oct 11 04:52:01 np0005481065 systemd[1]: libpod-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053.scope: Deactivated successfully.
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.691 2 INFO os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb')#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.692 2 DEBUG nova.virt.libvirt.vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.692 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.693 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.693 2 DEBUG os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape389558a-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 podman[305935]: 2025-10-11 08:52:01.700280593 +0000 UTC m=+0.078118600 container died cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.700 2 INFO os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec')#033[00m
Oct 11 04:52:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3306a34e27badc402a040902ece0ef4a86df5e07e08134f4bb9517ea1ccf4987-merged.mount: Deactivated successfully.
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.752 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:01 np0005481065 podman[305935]: 2025-10-11 08:52:01.754495127 +0000 UTC m=+0.132333104 container cleanup cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:52:01 np0005481065 podman[305968]: 2025-10-11 08:52:01.760765004 +0000 UTC m=+0.079949012 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 04:52:01 np0005481065 systemd[1]: libpod-conmon-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053.scope: Deactivated successfully.
Oct 11 04:52:01 np0005481065 podman[306025]: 2025-10-11 08:52:01.819539376 +0000 UTC m=+0.041055572 container remove cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c0bb66-6844-4c36-ae4d-868930823ab4]: (4, ('Sat Oct 11 08:52:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 (cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053)\ncf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053\nSat Oct 11 08:52:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 (cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053)\ncf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.828 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[86ad350c-cd03-4de7-80d8-43cf612c397c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.830 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98ec9321-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 kernel: tap98ec9321-40: left promiscuous mode
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a784092-e1f2-4d71-a577-84a9529610c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.872 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.873 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance network_info: |[{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.873 2 DEBUG oslo_concurrency.lockutils [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.873 2 DEBUG nova.network.neutron [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.877 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start _get_guest_xml network_info=[{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.885 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[148f4e6e-caee-4e9f-a52d-11b6ae03ca31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.885 2 WARNING nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.895 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.896 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.900 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f9e100-8e41-4682-ba69-3ac8f0036531]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:52:01 np0005481065 nova_compute[260935]: 2025-10-11 08:52:01.904 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:52:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.0 MiB/s wr, 340 op/s
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e42c255-9cbb-4aa4-b0a4-33a725327359]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455894, 'reachable_time': 39596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306040, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.933 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.933 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[60496f57-139d-4496-a156-fdf6b1a9a986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.934 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e389558a-ec7e-4610-aef7-2cfb76da6814 in datapath 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 unbound from our chassis
Oct 11 04:52:01 np0005481065 systemd[1]: run-netns-ovnmeta\x2d98ec9321\x2d44f2\x2d4cc2\x2da0a2\x2d531598e629d2.mount: Deactivated successfully.
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.936 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.937 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05a3d55a-3609-4787-95de-7b41c2c2378f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.938 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 namespace which is not needed anymore
Oct 11 04:52:02 np0005481065 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:02 np0005481065 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:02 np0005481065 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [ALERT]    (305720) : Current worker (305722) exited with code 143 (Terminated)
Oct 11 04:52:02 np0005481065 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [WARNING]  (305720) : All workers exited. Exiting... (0)
Oct 11 04:52:02 np0005481065 systemd[1]: libpod-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c.scope: Deactivated successfully.
Oct 11 04:52:02 np0005481065 podman[306060]: 2025-10-11 08:52:02.1174189 +0000 UTC m=+0.065135663 container died 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:52:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4a65300d208cb9158423dc57616ca08fb30633e13196b7c893ce5c6964e246ba-merged.mount: Deactivated successfully.
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.160 2 INFO nova.virt.libvirt.driver [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deleting instance files /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e_del
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.162 2 INFO nova.virt.libvirt.driver [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deletion of /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e_del complete
Oct 11 04:52:02 np0005481065 podman[306060]: 2025-10-11 08:52:02.164309316 +0000 UTC m=+0.112026069 container cleanup 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:52:02 np0005481065 systemd[1]: libpod-conmon-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c.scope: Deactivated successfully.
Oct 11 04:52:02 np0005481065 podman[306107]: 2025-10-11 08:52:02.259947051 +0000 UTC m=+0.061143010 container remove 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.267 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7ff873-2f64-4c15-81b2-ba52a3a57d04]: (4, ('Sat Oct 11 08:52:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 (5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c)\n5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c\nSat Oct 11 08:52:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 (5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c)\n5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.269 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[470260d3-f1c9-4c70-88f1-8e49d96bac47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.270 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aa0dec9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:02 np0005481065 kernel: tap7aa0dec9-50: left promiscuous mode
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.314 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00f3afe4-9b34-409e-b936-121ac6470552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53c3a28e-416e-494c-a47a-d1d0cca4735b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.361 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[58a9b940-3a04-4af6-843c-47919976367a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87ff5784-78a5-4e66-9fcc-7b9365a6c69b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456000, 'reachable_time': 31417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306121, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.386 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 04:52:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.386 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fd76124f-3623-4d65-bb07-a095da24d5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230801645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.427 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.459 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.463 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.499 2 INFO nova.compute.manager [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 1.10 seconds to destroy the instance on the hypervisor.
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.500 2 DEBUG oslo.service.loopingcall [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.500 2 DEBUG nova.compute.manager [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.500 2 DEBUG nova.network.neutron [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.716 2 DEBUG nova.compute.manager [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.717 2 DEBUG oslo_concurrency.lockutils [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.717 2 DEBUG oslo_concurrency.lockutils [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.718 2 DEBUG oslo_concurrency.lockutils [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.718 2 DEBUG nova.compute.manager [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-unplugged-30f88ebc-fba6-476c-8953-ad64358029bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.718 2 DEBUG nova.compute.manager [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-30f88ebc-fba6-476c-8953-ad64358029bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:52:02 np0005481065 systemd[1]: run-netns-ovnmeta\x2d7aa0dec9\x2d58b4\x2d4b16\x2db265\x2df3c8cbedd3a7.mount: Deactivated successfully.
Oct 11 04:52:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150167715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.912 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.914 2 DEBUG nova.virt.libvirt.vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-397044721',display_name='tempest-FloatingIPsAssociationTestJSON-server-397044721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-397044721',id=39,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-5vhnbeg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name=
'tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:55Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=840d2a1b-48bb-42ec-aa12-067e1b65ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.914 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.914 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:02 np0005481065 nova_compute[260935]: 2025-10-11 08:52:02.915 2 DEBUG nova.objects.instance [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 840d2a1b-48bb-42ec-aa12-067e1b65ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.029 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <uuid>840d2a1b-48bb-42ec-aa12-067e1b65ea39</uuid>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <name>instance-00000027</name>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-397044721</nova:name>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:52:01</nova:creationTime>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:user uuid="7f37cd0ba15f412b88192a506c5cec79">tempest-FloatingIPsAssociationTestJSON-1277815089-project-member</nova:user>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:project uuid="b23ff73ca27245eeb1b46f51326a5568">tempest-FloatingIPsAssociationTestJSON-1277815089</nova:project>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <nova:port uuid="bf26054f-43d5-471a-bca4-e7948b4409d0">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <entry name="serial">840d2a1b-48bb-42ec-aa12-067e1b65ea39</entry>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <entry name="uuid">840d2a1b-48bb-42ec-aa12-067e1b65ea39</entry>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:6d:9c:27"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <target dev="tapbf26054f-43"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/console.log" append="off"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:52:03 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:52:03 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:52:03 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:52:03 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.029 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Preparing to wait for external event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.044 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.044 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.044 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.045 2 DEBUG nova.virt.libvirt.vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-397044721',display_name='tempest-FloatingIPsAssociationTestJSON-server-397044721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-397044721',id=39,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-5vhnbeg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_
user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:55Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=840d2a1b-48bb-42ec-aa12-067e1b65ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.045 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.046 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.046 2 DEBUG os_vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf26054f-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf26054f-43, col_values=(('external_ids', {'iface-id': 'bf26054f-43d5-471a-bca4-e7948b4409d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:9c:27', 'vm-uuid': '840d2a1b-48bb-42ec-aa12-067e1b65ea39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:03 np0005481065 NetworkManager[44960]: <info>  [1760172723.0897] manager: (tapbf26054f-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.098 2 INFO os_vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43')#033[00m
Oct 11 04:52:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.334 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.339 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.339 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No VIF found with MAC fa:16:3e:6d:9c:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.340 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Using config drive#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.371 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:03 np0005481065 nova_compute[260935]: 2025-10-11 08:52:03.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 213 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 298 op/s
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.185 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Creating config drive at /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.195 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84ibcmva execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.335 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84ibcmva" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.366 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.370 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.405 2 DEBUG nova.compute.manager [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.406 2 DEBUG oslo_concurrency.lockutils [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.407 2 DEBUG oslo_concurrency.lockutils [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.407 2 DEBUG oslo_concurrency.lockutils [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.407 2 DEBUG nova.compute.manager [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-unplugged-e389558a-ec7e-4610-aef7-2cfb76da6814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.408 2 DEBUG nova.compute.manager [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-e389558a-ec7e-4610-aef7-2cfb76da6814 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.552 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.553 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deleting local config drive /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config because it was imported into RBD.#033[00m
Oct 11 04:52:04 np0005481065 NetworkManager[44960]: <info>  [1760172724.6233] manager: (tapbf26054f-43): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Oct 11 04:52:04 np0005481065 kernel: tapbf26054f-43: entered promiscuous mode
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00305|binding|INFO|Claiming lport bf26054f-43d5-471a-bca4-e7948b4409d0 for this chassis.
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00306|binding|INFO|bf26054f-43d5-471a-bca4-e7948b4409d0: Claiming fa:16:3e:6d:9c:27 10.100.0.12
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.640 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:9c:27 10.100.0.12'], port_security=['fa:16:3e:6d:9c:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '840d2a1b-48bb-42ec-aa12-067e1b65ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bf26054f-43d5-471a-bca4-e7948b4409d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.642 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bf26054f-43d5-471a-bca4-e7948b4409d0 in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 bound to our chassis#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.652 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 882d76f4-8cc7-44c7-ad90-277f4f92e044#033[00m
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:41:62 10.100.0.9
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00307|binding|INFO|Setting lport bf26054f-43d5-471a-bca4-e7948b4409d0 ovn-installed in OVS
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:41:62 10.100.0.9
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00308|binding|INFO|Setting lport bf26054f-43d5-471a-bca4-e7948b4409d0 up in Southbound
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014502496825551867 of space, bias 1.0, pg target 0.43507490476655597 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:52:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.673 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96eeddb1-e22b-4ce1-8ab5-91daaa2fa88a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 systemd-machined[215705]: New machine qemu-43-instance-00000027.
Oct 11 04:52:04 np0005481065 systemd-udevd[306241]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:04 np0005481065 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.700 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.701 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.701 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.702 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.703 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.706 2 INFO nova.compute.manager [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Terminating instance#033[00m
Oct 11 04:52:04 np0005481065 NetworkManager[44960]: <info>  [1760172724.7110] device (tapbf26054f-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:52:04 np0005481065 NetworkManager[44960]: <info>  [1760172724.7140] device (tapbf26054f-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.717 2 DEBUG nova.compute.manager [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.718 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.735 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f28b403-13d7-4601-9380-b8cf96ebe79c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.740 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de48f0ca-0f9a-4c03-8ac3-e5266fd2e2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 kernel: tap7290684c-38 (unregistering): left promiscuous mode
Oct 11 04:52:04 np0005481065 NetworkManager[44960]: <info>  [1760172724.7806] device (tap7290684c-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.789 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[69a0b73f-7624-44dd-ae50-8bb42d281053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00309|binding|INFO|Releasing lport 7290684c-3823-4e1e-84f5-826316aa4548 from this chassis (sb_readonly=0)
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00310|binding|INFO|Setting lport 7290684c-3823-4e1e-84f5-826316aa4548 down in Southbound
Oct 11 04:52:04 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:04Z|00311|binding|INFO|Removing iface tap7290684c-38 ovn-installed in OVS
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.807 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da3d92fc-0526-4a4c-9907-57317eafdfeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 19017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306256, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec85d09-9a06-48d2-8bdd-40c7f19a79bd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454914, 'tstamp': 454914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306257, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454918, 'tstamp': 454918}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306257, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.833 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882d76f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.835 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.836 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap882d76f4-80, col_values=(('external_ids', {'iface-id': 'abe57b96-aedd-418b-be8f-7ad9fb9218ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.837 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.842 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:41:62 10.100.0.9'], port_security=['fa:16:3e:7f:41:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1cecd438-75a3-4140-ad35-7439630b1be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7290684c-3823-4e1e-84f5-826316aa4548) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.844 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7290684c-3823-4e1e-84f5-826316aa4548 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.848 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.849 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21a888da-a24c-471d-81c8-768288aa39de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.850 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG nova.compute.manager [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG oslo_concurrency.lockutils [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG oslo_concurrency.lockutils [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG oslo_concurrency.lockutils [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.856 2 DEBUG nova.compute.manager [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:04 np0005481065 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.856 2 WARNING nova.compute.manager [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:52:04 np0005481065 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000026.scope: Consumed 12.027s CPU time.
Oct 11 04:52:04 np0005481065 systemd-machined[215705]: Machine qemu-41-instance-00000026 terminated.
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.962 2 DEBUG nova.network.neutron [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updated VIF entry in instance network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.962 2 DEBUG nova.network.neutron [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.971 2 INFO nova.virt.libvirt.driver [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance destroyed successfully.#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.971 2 DEBUG nova.objects.instance [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 1cecd438-75a3-4140-ad35-7439630b1be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.980 2 DEBUG oslo_concurrency.lockutils [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.992 2 DEBUG nova.virt.libvirt.vif [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-45605011',display_name='tempest-ImagesTestJSON-server-45605011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-45605011',id=38,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-bbdhfx2h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',o
wner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:00Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=1cecd438-75a3-4140-ad35-7439630b1be2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.992 2 DEBUG nova.network.os_vif_util [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.993 2 DEBUG nova.network.os_vif_util [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.993 2 DEBUG os_vif [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.995 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7290684c-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:04 np0005481065 nova_compute[260935]: 2025-10-11 08:52:04.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.001 2 INFO os_vif [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38')#033[00m
Oct 11 04:52:05 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:05 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:05 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [WARNING]  (305217) : Exiting Master process...
Oct 11 04:52:05 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [ALERT]    (305217) : Current worker (305224) exited with code 143 (Terminated)
Oct 11 04:52:05 np0005481065 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [WARNING]  (305217) : All workers exited. Exiting... (0)
Oct 11 04:52:05 np0005481065 systemd[1]: libpod-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e.scope: Deactivated successfully.
Oct 11 04:52:05 np0005481065 podman[306286]: 2025-10-11 08:52:05.068440285 +0000 UTC m=+0.073365486 container died f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 04:52:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7ddfd5ce54df1d1aac4c1e654420057838a802df3d512bb77cf4087a954ac2fb-merged.mount: Deactivated successfully.
Oct 11 04:52:05 np0005481065 podman[306286]: 2025-10-11 08:52:05.128781701 +0000 UTC m=+0.133706902 container cleanup f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:52:05 np0005481065 systemd[1]: libpod-conmon-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e.scope: Deactivated successfully.
Oct 11 04:52:05 np0005481065 podman[306332]: 2025-10-11 08:52:05.229231462 +0000 UTC m=+0.061527161 container remove f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[01b723ee-f8bb-46dc-9088-8cada6d22852]: (4, ('Sat Oct 11 08:52:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e)\nf81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e\nSat Oct 11 08:52:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e)\nf81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9065fd-8d8d-40b1-8d18-c9d613ed2027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:05 np0005481065 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[922e5c0a-ef0e-433c-93b9-71741641dae0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6459e7d5-ef14-48d9-a6f1-b5b639878d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.305 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22df6530-ccc2-41c9-8824-9f577e9315a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.341 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4d1e17-613b-4866-bf29-2c5d374dea5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455661, 'reachable_time': 18743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306348, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.351 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.351 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d9490544-4044-4e51-893a-aa62ea5c3a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.470 2 INFO nova.virt.libvirt.driver [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deleting instance files /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2_del#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.470 2 INFO nova.virt.libvirt.driver [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deletion of /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2_del complete#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.587 2 INFO nova.compute.manager [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.588 2 DEBUG oslo.service.loopingcall [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.588 2 DEBUG nova.compute.manager [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.588 2 DEBUG nova.network.neutron [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.664 2 DEBUG nova.network.neutron [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.708 2 INFO nova.compute.manager [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 3.21 seconds to deallocate network for instance.#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.853 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.854 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 213 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 238 op/s
Oct 11 04:52:05 np0005481065 nova_compute[260935]: 2025-10-11 08:52:05.984 2 DEBUG oslo_concurrency.processutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.245 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172726.2438536, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Started (Lifecycle Event)#033[00m
Oct 11 04:52:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 04:52:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.3 total, 600.0 interval
    Cumulative writes: 13K writes, 55K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
    Cumulative WAL: 13K writes, 4294 syncs, 3.25 writes per sync, written: 0.05 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 7888 writes, 30K keys, 7888 commit groups, 1.0 writes per commit group, ingest: 31.66 MB, 0.05 MB/s
    Interval WAL: 7888 writes, 3237 syncs, 2.44 writes per sync, written: 0.03 GB, 0.05 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.309 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.314 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172726.2441976, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.315 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:52:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292514972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.433 2 DEBUG oslo_concurrency.processutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.440 2 DEBUG nova.compute.provider_tree [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.466 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.467 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.468 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.468 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.468 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.469 2 WARNING nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.469 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-deleted-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.469 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.469 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.470 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.470 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Processing event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-deleted-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.472 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.472 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.473 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] No waiting events found dispatching network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.474 2 WARNING nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received unexpected event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.475 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.481 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.485 2 DEBUG nova.scheduler.client.report [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.491 2 DEBUG nova.network.neutron [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.499 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.505 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172726.4821656, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.509 2 INFO nova.virt.libvirt.driver [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance spawned successfully.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.509 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.636 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.706 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.707 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.708 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.709 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.710 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.710 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.718 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.720 2 INFO nova.compute.manager [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.722 2 INFO nova.scheduler.client.report [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Deleted allocations for instance 205bee5e-165a-468c-87d3-db44e03ace3e#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.733 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.792 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.921 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.921 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.927 2 INFO nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 11.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.927 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.937 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.959 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-unplugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.959 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.960 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.960 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.960 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] No waiting events found dispatching network-vif-unplugged-7290684c-3823-4e1e-84f5-826316aa4548 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.961 2 WARNING nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received unexpected event network-vif-unplugged-7290684c-3823-4e1e-84f5-826316aa4548 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.961 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.961 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.961 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.962 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.962 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] No waiting events found dispatching network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.962 2 WARNING nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received unexpected event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:06 np0005481065 nova_compute[260935]: 2025-10-11 08:52:06.963 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-deleted-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.000 2 INFO nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 13.00 seconds to build instance.#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.021 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.031 2 DEBUG oslo_concurrency.processutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85545623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.488 2 DEBUG oslo_concurrency.processutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.494 2 DEBUG nova.compute.provider_tree [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.515 2 DEBUG nova.scheduler.client.report [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.538 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.573 2 INFO nova.scheduler.client.report [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 1cecd438-75a3-4140-ad35-7439630b1be2#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.680 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:07 np0005481065 nova_compute[260935]: 2025-10-11 08:52:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:07 np0005481065 podman[306435]: 2025-10-11 08:52:07.790669049 +0000 UTC m=+0.084442459 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:07 np0005481065 podman[306436]: 2025-10-11 08:52:07.815416329 +0000 UTC m=+0.112548444 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 04:52:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 362 op/s
Oct 11 04:52:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 04:52:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 11 04:52:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 11 04:52:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.996 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.996 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.997 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 04:52:08 np0005481065 nova_compute[260935]: 2025-10-11 08:52:08.997 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:09Z|00312|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:52:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.4 MiB/s wr, 324 op/s
Oct 11 04:52:09 np0005481065 nova_compute[260935]: 2025-10-11 08:52:09.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:09 np0005481065 nova_compute[260935]: 2025-10-11 08:52:09.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:09 np0005481065 NetworkManager[44960]: <info>  [1760172729.9749] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 11 04:52:09 np0005481065 NetworkManager[44960]: <info>  [1760172729.9770] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Oct 11 04:52:09 np0005481065 nova_compute[260935]: 2025-10-11 08:52:09.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:10 np0005481065 nova_compute[260935]: 2025-10-11 08:52:10.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:10 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:10Z|00313|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:52:10 np0005481065 nova_compute[260935]: 2025-10-11 08:52:10.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:10 np0005481065 nova_compute[260935]: 2025-10-11 08:52:10.568 2 DEBUG nova.compute.manager [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:10 np0005481065 nova_compute[260935]: 2025-10-11 08:52:10.569 2 DEBUG nova.compute.manager [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:10 np0005481065 nova_compute[260935]: 2025-10-11 08:52:10.569 2 DEBUG oslo_concurrency.lockutils [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.544 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.732 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.732 2 DEBUG oslo_concurrency.lockutils [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.733 2 DEBUG nova.network.neutron [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.734 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.791 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.791 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.791 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.792 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:52:11 np0005481065 nova_compute[260935]: 2025-10-11 08:52:11.792 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 307 op/s
Oct 11 04:52:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654906437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.276 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.402 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.403 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.409 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.409 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.689 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.692 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=59.921897888183594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.693 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.779 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 872b1c1d-bc87-4123-a599-4d64b89018aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 840d2a1b-48bb-42ec-aa12-067e1b65ea39 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:52:12 np0005481065 nova_compute[260935]: 2025-10-11 08:52:12.855 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936869615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:13 np0005481065 nova_compute[260935]: 2025-10-11 08:52:13.297 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:13 np0005481065 nova_compute[260935]: 2025-10-11 08:52:13.305 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:13 np0005481065 nova_compute[260935]: 2025-10-11 08:52:13.328 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:13 np0005481065 nova_compute[260935]: 2025-10-11 08:52:13.363 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:52:13 np0005481065 nova_compute[260935]: 2025-10-11 08:52:13.364 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct 11 04:52:14 np0005481065 nova_compute[260935]: 2025-10-11 08:52:14.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.049 2 DEBUG nova.network.neutron [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.050 2 DEBUG nova.network.neutron [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.086 2 DEBUG oslo_concurrency.lockutils [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:15.185 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:15.186 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:15.187 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:15Z|00314|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:52:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:15Z|00315|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.754 2 DEBUG nova.compute.manager [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.755 2 DEBUG nova.compute.manager [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.756 2 DEBUG oslo_concurrency.lockutils [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.756 2 DEBUG oslo_concurrency.lockutils [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:15 np0005481065 nova_compute[260935]: 2025-10-11 08:52:15.756 2 DEBUG nova.network.neutron [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct 11 04:52:16 np0005481065 nova_compute[260935]: 2025-10-11 08:52:16.648 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172721.6469274, 205bee5e-165a-468c-87d3-db44e03ace3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:16 np0005481065 nova_compute[260935]: 2025-10-11 08:52:16.649 2 INFO nova.compute.manager [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:52:16 np0005481065 nova_compute[260935]: 2025-10-11 08:52:16.671 2 DEBUG nova.compute.manager [None req-b67f88a5-a729-4920-b216-0db5d9c8f882 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:16 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:16Z|00316|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.009 2 DEBUG nova.network.neutron [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.010 2 DEBUG nova.network.neutron [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.039 2 DEBUG oslo_concurrency.lockutils [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 404 KiB/s wr, 61 op/s
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.918 2 DEBUG nova.compute.manager [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.919 2 DEBUG nova.compute.manager [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing instance network info cache due to event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.919 2 DEBUG oslo_concurrency.lockutils [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.919 2 DEBUG oslo_concurrency.lockutils [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:17 np0005481065 nova_compute[260935]: 2025-10-11 08:52:17.920 2 DEBUG nova.network.neutron [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:18 np0005481065 nova_compute[260935]: 2025-10-11 08:52:18.905 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:18 np0005481065 nova_compute[260935]: 2025-10-11 08:52:18.905 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:18 np0005481065 nova_compute[260935]: 2025-10-11 08:52:18.927 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.024 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.025 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.031 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.031 2 INFO nova.compute.claims [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.179 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:19Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:9c:27 10.100.0.12
Oct 11 04:52:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:19Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:9c:27 10.100.0.12
Oct 11 04:52:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273607470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.628 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.634 2 DEBUG nova.compute.provider_tree [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.656 2 DEBUG nova.scheduler.client.report [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.686 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.687 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.755 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.755 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.784 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.806 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:52:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 347 KiB/s wr, 52 op/s
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.954 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.955 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.955 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating image(s)#033[00m
Oct 11 04:52:19 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.978 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:19.999 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.066 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.069 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.106 2 DEBUG nova.policy [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a171de1f79843e0b048393cabfee77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9af27fad6b5a4783b66213343f27f0a1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.111 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172724.9593604, 1cecd438-75a3-4140-ad35-7439630b1be2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.111 2 INFO nova.compute.manager [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.138 2 DEBUG nova.compute.manager [None req-125062ce-c246-42fd-8643-79b48cc61826 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.153 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.153 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.154 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.154 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.173 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.177 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.493 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.584 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.719 2 DEBUG nova.objects.instance [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.737 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.737 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Ensure instance console log exists: /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.738 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.739 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:20 np0005481065 nova_compute[260935]: 2025-10-11 08:52:20.739 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.178 2 DEBUG nova.network.neutron [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updated VIF entry in instance network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.179 2 DEBUG nova.network.neutron [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.200 2 DEBUG oslo_concurrency.lockutils [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.247 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Successfully created port: 20c8164c-0779-4589-b3d3-afc10a47631f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:52:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 337 KiB/s wr, 51 op/s
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.974 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Successfully updated port: 20c8164c-0779-4589-b3d3-afc10a47631f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.990 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.991 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquired lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:21 np0005481065 nova_compute[260935]: 2025-10-11 08:52:21.991 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:52:22 np0005481065 nova_compute[260935]: 2025-10-11 08:52:22.106 2 DEBUG nova.compute.manager [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-changed-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:22 np0005481065 nova_compute[260935]: 2025-10-11 08:52:22.107 2 DEBUG nova.compute.manager [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Refreshing instance network info cache due to event network-changed-20c8164c-0779-4589-b3d3-afc10a47631f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:22 np0005481065 nova_compute[260935]: 2025-10-11 08:52:22.108 2 DEBUG oslo_concurrency.lockutils [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:22 np0005481065 nova_compute[260935]: 2025-10-11 08:52:22.186 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:52:22 np0005481065 nova_compute[260935]: 2025-10-11 08:52:22.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.252 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updating instance_info_cache with network_info: [{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.276 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Releasing lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.277 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance network_info: |[{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.278 2 DEBUG oslo_concurrency.lockutils [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.278 2 DEBUG nova.network.neutron [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Refreshing network info cache for port 20c8164c-0779-4589-b3d3-afc10a47631f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.285 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start _get_guest_xml network_info=[{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.292 2 WARNING nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.302 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.303 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.307 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.308 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.309 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.310 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.311 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.311 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.312 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.312 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.313 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.313 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.314 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.314 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.315 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.315 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.320 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16960398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.798 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.833 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:23 np0005481065 nova_compute[260935]: 2025-10-11 08:52:23.838 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 11 04:52:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228053641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.316 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.319 2 DEBUG nova.virt.libvirt.vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:19Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.320 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.322 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.323 2 DEBUG nova.objects.instance [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.329 2 DEBUG nova.compute.manager [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.329 2 DEBUG nova.compute.manager [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing instance network info cache due to event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.330 2 DEBUG oslo_concurrency.lockutils [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.330 2 DEBUG oslo_concurrency.lockutils [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.331 2 DEBUG nova.network.neutron [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.363 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <uuid>98fabab3-6b4a-44f3-b232-f23f34f4e19f</uuid>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <name>instance-00000028</name>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1265647930</nova:name>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:52:23</nova:creationTime>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <nova:port uuid="20c8164c-0779-4589-b3d3-afc10a47631f">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <entry name="serial">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <entry name="uuid">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:bf:90:73"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <target dev="tap20c8164c-07"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log" append="off"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:52:24 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:52:24 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:52:24 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:52:24 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.365 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Preparing to wait for external event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.366 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.367 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.368 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.370 2 DEBUG nova.virt.libvirt.vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:19Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.370 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.372 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.373 2 DEBUG os_vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20c8164c-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20c8164c-07, col_values=(('external_ids', {'iface-id': '20c8164c-0779-4589-b3d3-afc10a47631f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:90:73', 'vm-uuid': '98fabab3-6b4a-44f3-b232-f23f34f4e19f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:24 np0005481065 NetworkManager[44960]: <info>  [1760172744.3878] manager: (tap20c8164c-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.397 2 INFO os_vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.468 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.468 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.468 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:bf:90:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.469 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Using config drive#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.502 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.975 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating config drive at /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config#033[00m
Oct 11 04:52:24 np0005481065 nova_compute[260935]: 2025-10-11 08:52:24.985 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv73rd79n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.134 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.135 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.135 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.136 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.137 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.139 2 INFO nova.compute.manager [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Terminating instance
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.141 2 DEBUG nova.compute.manager [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.143 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv73rd79n" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.180 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.185 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:52:25 np0005481065 kernel: tapbf26054f-43 (unregistering): left promiscuous mode
Oct 11 04:52:25 np0005481065 NetworkManager[44960]: <info>  [1760172745.2977] device (tapbf26054f-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00317|binding|INFO|Releasing lport bf26054f-43d5-471a-bca4-e7948b4409d0 from this chassis (sb_readonly=0)
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00318|binding|INFO|Setting lport bf26054f-43d5-471a-bca4-e7948b4409d0 down in Southbound
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00319|binding|INFO|Removing iface tapbf26054f-43 ovn-installed in OVS
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.322 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:9c:27 10.100.0.12'], port_security=['fa:16:3e:6d:9c:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '840d2a1b-48bb-42ec-aa12-067e1b65ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bf26054f-43d5-471a-bca4-e7948b4409d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.323 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bf26054f-43d5-471a-bca4-e7948b4409d0 in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 unbound from our chassis
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.325 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caeada04-f98a-4f11-8996-15d6433c49ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 11 04:52:25 np0005481065 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 13.690s CPU time.
Oct 11 04:52:25 np0005481065 systemd-machined[215705]: Machine qemu-43-instance-00000027 terminated.
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.398 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7afd06bd-22eb-4c45-a780-7efc1edb158f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.404 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc9a78a-4a7e-4fe5-801e-93ad284efb1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.433 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.434 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting local config drive /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config because it was imported into RBD.
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.456 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0f37fd-4606-4bcf-bd85-3731c8cadca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 podman[306838]: 2025-10-11 08:52:25.462653088 +0000 UTC m=+0.123548595 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.486 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dddff302-f724-4c2c-8cfd-6a13d5190aca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 23401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306877, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.492 2 INFO nova.virt.libvirt.driver [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance destroyed successfully.
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.493 2 DEBUG nova.objects.instance [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'resources' on Instance uuid 840d2a1b-48bb-42ec-aa12-067e1b65ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.510 2 DEBUG nova.virt.libvirt.vif [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-397044721',display_name='tempest-FloatingIPsAssociationTestJSON-server-397044721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-397044721',id=39,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-5vhnbeg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:06Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=840d2a1b-48bb-42ec-aa12-067e1b65ea39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.511 2 DEBUG nova.network.os_vif_util [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.511 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3dcf03-14dc-4ef0-924c-038581e75edd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454914, 'tstamp': 454914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306887, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454918, 'tstamp': 454918}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306887, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.512 2 DEBUG nova.network.os_vif_util [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.513 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.513 2 DEBUG os_vif [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf26054f-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 systemd-udevd[306848]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:25 np0005481065 kernel: tap20c8164c-07: entered promiscuous mode
Oct 11 04:52:25 np0005481065 NetworkManager[44960]: <info>  [1760172745.5328] manager: (tap20c8164c-07): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882d76f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.534 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap882d76f4-80, col_values=(('external_ids', {'iface-id': 'abe57b96-aedd-418b-be8f-7ad9fb9218ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.535 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00320|binding|INFO|Claiming lport 20c8164c-0779-4589-b3d3-afc10a47631f for this chassis.
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00321|binding|INFO|20c8164c-0779-4589-b3d3-afc10a47631f: Claiming fa:16:3e:bf:90:73 10.100.0.12
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.543 2 INFO os_vif [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43')
Oct 11 04:52:25 np0005481065 NetworkManager[44960]: <info>  [1760172745.5529] device (tap20c8164c-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.553 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:25 np0005481065 NetworkManager[44960]: <info>  [1760172745.5547] device (tap20c8164c-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.555 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.558 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:52:25 np0005481065 systemd-machined[215705]: New machine qemu-44-instance-00000028.
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00322|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f ovn-installed in OVS
Oct 11 04:52:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:25Z|00323|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f up in Southbound
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.577 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ae4e52-ba94-43eb-951e-4f1fd215bfe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.578 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.581 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be134c99-122e-400b-b179-2d8b8cf94b48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.583 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79ff8c07-5818-46bb-8344-c5ec6d331ce5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 systemd[1]: Started Virtual Machine qemu-44-instance-00000028.
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.602 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[59fff58b-37b8-476d-a4bb-4406ff149226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.631 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71a81077-fdc5-41c2-8de0-e95344102628]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.678 2 DEBUG nova.network.neutron [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updated VIF entry in instance network info cache for port 20c8164c-0779-4589-b3d3-afc10a47631f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.679 2 DEBUG nova.network.neutron [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updating instance_info_cache with network_info: [{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.679 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[45ea9783-27bb-4d65-882a-8df7eea026fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 NetworkManager[44960]: <info>  [1760172745.6893] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.688 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7453ed-d113-44ea-af73-089a1aab0392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.700 2 DEBUG oslo_concurrency.lockutils [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.729 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bb97663f-d1ee-49fd-bae3-245f12a60f9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.734 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c55b58ed-356e-4df7-b668-1bd0beb16f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 NetworkManager[44960]: <info>  [1760172745.7644] device (tape5d4fc7a-10): carrier: link connected
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.770 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fedcb07b-fd0e-4eec-88bd-3fe6bc448463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 nova_compute[260935]: 2025-10-11 08:52:25.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.843 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6050e627-4e36-470a-9edd-ff0b74feae85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459046, 'reachable_time': 26652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306949, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.865 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4471d1-ec8f-43ea-bb06-12a71a78b7b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459046, 'tstamp': 459046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306950, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.895 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[851a9698-9360-46a1-860d-c78e05b1df38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459046, 'reachable_time': 26652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306951, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Oct 11 04:52:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f06ba7-ad16-490b-b868-1da6d663c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28fba919-3a20-44c7-a4e4-9e1380488c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.060 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.061 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.063 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:26 np0005481065 NetworkManager[44960]: <info>  [1760172746.0664] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Oct 11 04:52:26 np0005481065 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.070 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:26 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:26Z|00324|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.090 2 INFO nova.virt.libvirt.driver [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deleting instance files /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39_del#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.091 2 INFO nova.virt.libvirt.driver [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deletion of /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39_del complete#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.096 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.097 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3af2f9-93f5-4b2f-9ecd-3fe4263dee78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.099 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:52:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.100 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.176 2 INFO nova.compute.manager [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.178 2 DEBUG oslo.service.loopingcall [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.178 2 DEBUG nova.compute.manager [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.178 2 DEBUG nova.network.neutron [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.424 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-unplugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.425 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.426 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.426 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.426 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] No waiting events found dispatching network-vif-unplugged-bf26054f-43d5-471a-bca4-e7948b4409d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.427 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-unplugged-bf26054f-43d5-471a-bca4-e7948b4409d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.427 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.428 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.428 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.428 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.429 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] No waiting events found dispatching network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.429 2 WARNING nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received unexpected event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.547 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172746.5470092, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.549 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:52:26 np0005481065 podman[307026]: 2025-10-11 08:52:26.550455771 +0000 UTC m=+0.075688381 container create cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.572 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.580 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172746.5472462, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.582 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:52:26 np0005481065 systemd[1]: Started libpod-conmon-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3.scope.
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.603 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.607 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:26 np0005481065 podman[307026]: 2025-10-11 08:52:26.518807626 +0000 UTC m=+0.044040266 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.634 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:52:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e0d34e1c9b5cb00bbf8b4589ff9eade911de7e540b11ead99c1576f4f3ca7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:26 np0005481065 podman[307026]: 2025-10-11 08:52:26.658314682 +0000 UTC m=+0.183547392 container init cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 04:52:26 np0005481065 podman[307026]: 2025-10-11 08:52:26.667930574 +0000 UTC m=+0.193163204 container start cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:26 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : New worker (307047) forked
Oct 11 04:52:26 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : Loading success.
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.780 2 DEBUG nova.network.neutron [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updated VIF entry in instance network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.781 2 DEBUG nova.network.neutron [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:52:26 np0005481065 nova_compute[260935]: 2025-10-11 08:52:26.800 2 DEBUG oslo_concurrency.lockutils [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.446 2 DEBUG nova.network.neutron [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.490 2 INFO nova.compute.manager [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 1.31 seconds to deallocate network for instance.
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.547 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.548 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.655 2 DEBUG oslo_concurrency.processutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:52:27 np0005481065 nova_compute[260935]: 2025-10-11 08:52:27.784 2 DEBUG nova.compute.manager [req-c13bd8b3-cc28-45a7-9607-5304f2f086ad req-be81931b-8443-4126-ada2-523f90bf4d33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-deleted-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:52:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 11 04:52:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305509786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.184 2 DEBUG oslo_concurrency.processutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.191 2 DEBUG nova.compute.provider_tree [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.211 2 DEBUG nova.scheduler.client.report [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.229 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.269 2 INFO nova.scheduler.client.report [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Deleted allocations for instance 840d2a1b-48bb-42ec-aa12-067e1b65ea39
Oct 11 04:52:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.340 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.635 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.635 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.636 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.637 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.637 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Processing event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.638 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.638 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.639 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.639 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.640 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.640 2 WARNING nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state building and task_state spawning.
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.641 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.641 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.641 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.642 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.642 2 DEBUG nova.network.neutron [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.646 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.652 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172748.6509447, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.653 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Resumed (Lifecycle Event)
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.657 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.662 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance spawned successfully.
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.663 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.685 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.696 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.701 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.702 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.703 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.704 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.705 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.706 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.732 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.781 2 INFO nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 8.83 seconds to spawn the instance on the hypervisor.
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.782 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.870 2 INFO nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 9.87 seconds to build instance.
Oct 11 04:52:28 np0005481065 nova_compute[260935]: 2025-10-11 08:52:28.891 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:52:29 np0005481065 nova_compute[260935]: 2025-10-11 08:52:29.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 04:52:30 np0005481065 nova_compute[260935]: 2025-10-11 08:52:30.105 2 DEBUG nova.network.neutron [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:52:30 np0005481065 nova_compute[260935]: 2025-10-11 08:52:30.106 2 DEBUG nova.network.neutron [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:52:30 np0005481065 nova_compute[260935]: 2025-10-11 08:52:30.126 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:52:30 np0005481065 nova_compute[260935]: 2025-10-11 08:52:30.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:30 np0005481065 nova_compute[260935]: 2025-10-11 08:52:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.549 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.549 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.577 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.665 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.666 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.675 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.676 2 INFO nova.compute.claims [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:52:32 np0005481065 podman[307078]: 2025-10-11 08:52:32.826730965 +0000 UTC m=+0.119550232 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:52:32 np0005481065 nova_compute[260935]: 2025-10-11 08:52:32.838 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.126 2 DEBUG nova.compute.manager [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.127 2 DEBUG nova.compute.manager [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.128 2 DEBUG oslo_concurrency.lockutils [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.128 2 DEBUG oslo_concurrency.lockutils [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.129 2 DEBUG nova.network.neutron [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389995710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.315 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.324 2 DEBUG nova.compute.provider_tree [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.348 2 DEBUG nova.scheduler.client.report [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.380 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.382 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.435 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.436 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.466 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.491 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.592 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.594 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.595 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Creating image(s)#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.630 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.673 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.711 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.717 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.760 2 DEBUG nova.policy [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '713a9b01d75f4fd2b8ad41bac3ba6343', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fb895e8125a437b8cc29be31706fccb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.814 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.815 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.816 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.817 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.853 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:33 np0005481065 nova_compute[260935]: 2025-10-11 08:52:33.858 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.195 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.293 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] resizing rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.390 2 DEBUG nova.objects.instance [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lazy-loading 'migration_context' on Instance uuid dd616f8b-2be3-455f-8979-ac6e9e57af2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.410 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.411 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Ensure instance console log exists: /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.411 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.412 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.413 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:34 np0005481065 nova_compute[260935]: 2025-10-11 08:52:34.494 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Successfully created port: a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.149 2 DEBUG nova.network.neutron [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.150 2 DEBUG nova.network.neutron [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.172 2 DEBUG oslo_concurrency.lockutils [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.387 2 INFO nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Rebuilding instance#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.456 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.457 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.477 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.504 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Successfully updated port: a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.525 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.526 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquired lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.526 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.550 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.550 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.558 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.558 2 INFO nova.compute.claims [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.664 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.680 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.727 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.746 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.752 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.795 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.809 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.838 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.854 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:52:35 np0005481065 nova_compute[260935]: 2025-10-11 08:52:35.863 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 04:52:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 102 op/s
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.126 2 DEBUG nova.compute.manager [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-changed-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.127 2 DEBUG nova.compute.manager [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Refreshing instance network info cache due to event network-changed-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.128 2 DEBUG oslo_concurrency.lockutils [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579751925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.181 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.187 2 DEBUG nova.compute.provider_tree [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.208 2 DEBUG nova.scheduler.client.report [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.238 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.240 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.298 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.299 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.325 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.382 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.510 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.512 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.513 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Creating image(s)#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.538 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.573 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.601 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.607 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.657 2 DEBUG nova.policy [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aeed30817a7740109e765d227ff2b78a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.698 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.699 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.699 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.700 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.729 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.737 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5750649d-960f-42d5-b127-de8b9a2bee8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.853 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.857 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.857 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.858 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.858 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.860 2 INFO nova.compute.manager [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Terminating instance#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.862 2 DEBUG nova.compute.manager [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.909 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updating instance_info_cache with network_info: [{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:36 np0005481065 kernel: tap108be440-11 (unregistering): left promiscuous mode
Oct 11 04:52:36 np0005481065 NetworkManager[44960]: <info>  [1760172756.9379] device (tap108be440-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.940 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Releasing lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.941 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance network_info: |[{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.942 2 DEBUG oslo_concurrency.lockutils [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:36 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.943 2 DEBUG nova.network.neutron [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Refreshing network info cache for port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:36.953 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start _get_guest_xml network_info=[{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:52:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:37Z|00325|binding|INFO|Releasing lport 108be440-11bf-41f6-a628-86fbac597b7d from this chassis (sb_readonly=0)
Oct 11 04:52:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:37Z|00326|binding|INFO|Setting lport 108be440-11bf-41f6-a628-86fbac597b7d down in Southbound
Oct 11 04:52:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:37Z|00327|binding|INFO|Removing iface tap108be440-11 ovn-installed in OVS
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.023 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:33:2e 10.100.0.6'], port_security=['fa:16:3e:c7:33:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '872b1c1d-bc87-4123-a599-4d64b89018aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=108be440-11bf-41f6-a628-86fbac597b7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.024 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 108be440-11bf-41f6-a628-86fbac597b7d in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 unbound from our chassis#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.025 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 882d76f4-8cc7-44c7-ad90-277f4f92e044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b46d3303-6f78-4cea-8cc1-8f4dfe0a0ed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.027 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 namespace which is not needed anymore#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.056 2 WARNING nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:52:37 np0005481065 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Oct 11 04:52:37 np0005481065 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 14.575s CPU time.
Oct 11 04:52:37 np0005481065 systemd-machined[215705]: Machine qemu-40-instance-00000024 terminated.
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.065 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.066 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.070 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.070 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.073 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.073 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.076 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.127 2 INFO nova.virt.libvirt.driver [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance destroyed successfully.#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.128 2 DEBUG nova.objects.instance [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'resources' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.149 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5750649d-960f-42d5-b127-de8b9a2bee8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.151 2 DEBUG nova.virt.libvirt.vif [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-652156110',display_name='tempest-FloatingIPsAssociationTestJSON-server-652156110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-652156110',id=36,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-dxyrerkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:45Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=872b1c1d-bc87-4123-a599-4d64b89018aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.151 2 DEBUG nova.network.os_vif_util [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.152 2 DEBUG nova.network.os_vif_util [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.152 2 DEBUG os_vif [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:37 np0005481065 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:37 np0005481065 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:37 np0005481065 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [ALERT]    (303952) : Current worker (303954) exited with code 143 (Terminated)
Oct 11 04:52:37 np0005481065 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [WARNING]  (303952) : All workers exited. Exiting... (0)
Oct 11 04:52:37 np0005481065 systemd[1]: libpod-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7.scope: Deactivated successfully.
Oct 11 04:52:37 np0005481065 podman[307435]: 2025-10-11 08:52:37.178544775 +0000 UTC m=+0.057605130 container died c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap108be440-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.202 2 INFO os_vif [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11')#033[00m
Oct 11 04:52:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-43007129255deff7aeaffb2ae4dcdedaaf076ce8807efca09774d5aee6cf146c-merged.mount: Deactivated successfully.
Oct 11 04:52:37 np0005481065 podman[307435]: 2025-10-11 08:52:37.229947399 +0000 UTC m=+0.109007744 container cleanup c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 04:52:37 np0005481065 systemd[1]: libpod-conmon-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7.scope: Deactivated successfully.
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.291 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] resizing rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:52:37 np0005481065 podman[307510]: 2025-10-11 08:52:37.310044744 +0000 UTC m=+0.053000510 container remove c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[618d2fe1-1c82-4cf5-b86b-73fe1401601e]: (4, ('Sat Oct 11 08:52:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 (c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7)\nc606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7\nSat Oct 11 08:52:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 (c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7)\nc606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.321 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab2aad0-3704-4af7-9358-c31c5ae3d2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.323 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:37 np0005481065 kernel: tap882d76f4-80: left promiscuous mode
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.354 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fca15d0c-9294-4ddf-8527-f1803e5d854c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad705350-c9d1-40f0-9478-7c4a0dd522bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[293e8261-5839-49b5-814b-0a5cebb447bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3a66e563-246a-48a1-b15d-56aa82ae7fe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454886, 'reachable_time': 38404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307573, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 systemd[1]: run-netns-ovnmeta\x2d882d76f4\x2d8cc7\x2d44c7\x2dad90\x2d277f4f92e044.mount: Deactivated successfully.
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.426 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.426 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7be4f8-7359-4710-91d3-c027db6551fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.448 2 DEBUG nova.objects.instance [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'migration_context' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.470 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.470 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Ensure instance console log exists: /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.471 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.471 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.471 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:52:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909891233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:52:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:52:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909891233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:52:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614441609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.566 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.591 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.599 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.675 2 INFO nova.virt.libvirt.driver [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deleting instance files /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa_del#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.677 2 INFO nova.virt.libvirt.driver [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deletion of /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa_del complete#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.745 2 INFO nova.compute.manager [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.746 2 DEBUG oslo.service.loopingcall [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.747 2 DEBUG nova.compute.manager [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:52:37 np0005481065 nova_compute[260935]: 2025-10-11 08:52:37.747 2 DEBUG nova.network.neutron [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:52:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Oct 11 04:52:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408323234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.131 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.132 2 DEBUG nova.virt.libvirt.vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1610135239',display_name='tempest-ImagesNegativeTestJSON-server-1610135239',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1610135239',id=41,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fb895e8125a437b8cc29be31706fccb',ramdisk_id='',reservation_id='r-xtlxm36r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1030914515',owner_user_name='tempest-ImagesNegativeTestJSON-1030914515-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:33Z,user_data=None,user_id='713a9b01d75f4fd2b8ad41bac3ba6343',uuid=dd616f8b-2be3-455f-8979-ac6e9e57af2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.133 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converting VIF {"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.133 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.135 2 DEBUG nova.objects.instance [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lazy-loading 'pci_devices' on Instance uuid dd616f8b-2be3-455f-8979-ac6e9e57af2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.152 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Successfully created port: 81c1d19a-c479-4381-9557-92f3e52b0cf0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.159 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <uuid>dd616f8b-2be3-455f-8979-ac6e9e57af2d</uuid>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <name>instance-00000029</name>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesNegativeTestJSON-server-1610135239</nova:name>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:52:37</nova:creationTime>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:user uuid="713a9b01d75f4fd2b8ad41bac3ba6343">tempest-ImagesNegativeTestJSON-1030914515-project-member</nova:user>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:project uuid="6fb895e8125a437b8cc29be31706fccb">tempest-ImagesNegativeTestJSON-1030914515</nova:project>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <nova:port uuid="a5dcad71-6ce4-4a7c-95fc-900cb7ee620e">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <entry name="serial">dd616f8b-2be3-455f-8979-ac6e9e57af2d</entry>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <entry name="uuid">dd616f8b-2be3-455f-8979-ac6e9e57af2d</entry>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:40:7a:6c"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <target dev="tapa5dcad71-6c"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/console.log" append="off"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:52:38 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:52:38 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:52:38 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:52:38 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.160 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Preparing to wait for external event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.160 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.161 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.161 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.162 2 DEBUG nova.virt.libvirt.vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1610135239',display_name='tempest-ImagesNegativeTestJSON-server-1610135239',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1610135239',id=41,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fb895e8125a437b8cc29be31706fccb',ramdisk_id='',reservation_id='r-xtlxm36r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1030914515',owner_user_name='tempest-ImagesNegativeTestJSON-1030914515-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:33Z,user_data=None,user_id='713a9b01d75f4fd2b8ad41bac3ba6343',uuid=dd616f8b-2be3-455f-8979-ac6e9e57af2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.162 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converting VIF {"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.163 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.163 2 DEBUG os_vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5dcad71-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5dcad71-6c, col_values=(('external_ids', {'iface-id': 'a5dcad71-6ce4-4a7c-95fc-900cb7ee620e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:7a:6c', 'vm-uuid': 'dd616f8b-2be3-455f-8979-ac6e9e57af2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:38 np0005481065 NetworkManager[44960]: <info>  [1760172758.2057] manager: (tapa5dcad71-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.214 2 INFO os_vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c')#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.232 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-unplugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.232 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] No waiting events found dispatching network-vif-unplugged-108be440-11bf-41f6-a628-86fbac597b7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-unplugged-108be440-11bf-41f6-a628-86fbac597b7d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.234 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.234 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.234 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] No waiting events found dispatching network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.234 2 WARNING nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received unexpected event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.279 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.280 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.280 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] No VIF found with MAC fa:16:3e:40:7a:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.281 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Using config drive#033[00m
Oct 11 04:52:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.310 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:38 np0005481065 podman[307637]: 2025-10-11 08:52:38.358634167 +0000 UTC m=+0.090351875 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.403 2 DEBUG nova.network.neutron [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:38 np0005481065 podman[307638]: 2025-10-11 08:52:38.404792243 +0000 UTC m=+0.138325382 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.424 2 INFO nova.compute.manager [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 0.68 seconds to deallocate network for instance.#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.477 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.478 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.483 2 DEBUG nova.compute.manager [req-310b79db-49ff-4075-aa84-d6129636a051 req-a942d1fd-0413-4ef4-afe4-f9b14281a34f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-deleted-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.598 2 DEBUG oslo_concurrency.processutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.802 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Creating config drive at /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.809 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_ase0x0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:38 np0005481065 nova_compute[260935]: 2025-10-11 08:52:38.976 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_ase0x0" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.014 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.020 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192914383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.087 2 DEBUG nova.network.neutron [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updated VIF entry in instance network info cache for port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.088 2 DEBUG nova.network.neutron [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updating instance_info_cache with network_info: [{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.101 2 DEBUG oslo_concurrency.processutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.110 2 DEBUG nova.compute.provider_tree [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.115 2 DEBUG oslo_concurrency.lockutils [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.135 2 DEBUG nova.scheduler.client.report [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.164 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.196 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Successfully updated port: 81c1d19a-c479-4381-9557-92f3e52b0cf0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.211 2 INFO nova.scheduler.client.report [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Deleted allocations for instance 872b1c1d-bc87-4123-a599-4d64b89018aa#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.215 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.216 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquired lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.216 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.223 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.224 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deleting local config drive /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config because it was imported into RBD.#033[00m
Oct 11 04:52:39 np0005481065 kernel: tapa5dcad71-6c: entered promiscuous mode
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.296 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:39 np0005481065 NetworkManager[44960]: <info>  [1760172759.2980] manager: (tapa5dcad71-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Oct 11 04:52:39 np0005481065 systemd-udevd[307409]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:39Z|00328|binding|INFO|Claiming lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for this chassis.
Oct 11 04:52:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:39Z|00329|binding|INFO|a5dcad71-6ce4-4a7c-95fc-900cb7ee620e: Claiming fa:16:3e:40:7a:6c 10.100.0.9
Oct 11 04:52:39 np0005481065 NetworkManager[44960]: <info>  [1760172759.3460] device (tapa5dcad71-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:52:39 np0005481065 NetworkManager[44960]: <info>  [1760172759.3476] device (tapa5dcad71-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.349 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:7a:6c 10.100.0.9'], port_security=['fa:16:3e:40:7a:6c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd616f8b-2be3-455f-8979-ac6e9e57af2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4f23928-daba-425d-b61c-e65657ec3386', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fb895e8125a437b8cc29be31706fccb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff7ea8b2-ee94-49d1-aa81-ade5270f3ae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d8b754-3910-4271-b26f-fbc82b8c8419, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.350 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e in datapath f4f23928-daba-425d-b61c-e65657ec3386 bound to our chassis#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.352 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4f23928-daba-425d-b61c-e65657ec3386#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.369 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3860cc-399e-490d-a36c-552ee685464a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.370 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4f23928-d1 in ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.372 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4f23928-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.373 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3848c99a-49b7-4417-b98c-733e68489775]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.373 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b241ae37-491e-498f-b361-2a70a346b0f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.379 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:52:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:39Z|00330|binding|INFO|Setting lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e ovn-installed in OVS
Oct 11 04:52:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:39Z|00331|binding|INFO|Setting lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e up in Southbound
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:39 np0005481065 systemd-machined[215705]: New machine qemu-45-instance-00000029.
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.401 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3602d432-15b3-4100-8883-a8a6dfd8f375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.428 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcf5d2d-3e34-4d74-a7a6-e519e3d08daa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.477 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[453df920-73cb-42b0-a2e1-712b47e4d1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 NetworkManager[44960]: <info>  [1760172759.4835] manager: (tapf4f23928-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c67fec5-d078-4b4d-9a54-10bd604ece68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.527 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1977eafb-5f4e-457b-86b0-6fe644076de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.532 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fed1e6da-282a-430d-9433-9556cdbcdf5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:52:39 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 04:52:39 np0005481065 NetworkManager[44960]: <info>  [1760172759.5595] device (tapf4f23928-d0): carrier: link connected
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.571 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f1938fa6-426f-43fc-ad8e-e2634478dfa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.593 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6dfc8c-4195-4678-b478-ebd8b405b2eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4f23928-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:0c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460426, 'reachable_time': 36731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307809, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3328d1-e940-4d70-848c-b1b42dc1637e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:cd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460426, 'tstamp': 460426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307810, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0246e00-4255-47c8-ad8c-e48e509ea669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4f23928-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:0c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460426, 'reachable_time': 36731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307811, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a19af0d-bca3-4fb0-b9ae-d81106abe5d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.787 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d162d837-590c-46f3-b9df-79759c425e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f23928-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4f23928-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:39 np0005481065 kernel: tapf4f23928-d0: entered promiscuous mode
Oct 11 04:52:39 np0005481065 NetworkManager[44960]: <info>  [1760172759.7928] manager: (tapf4f23928-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.795 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4f23928-d0, col_values=(('external_ids', {'iface-id': 'a2760dfe-6dd6-4319-8bbc-a3a72b9651da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:39Z|00332|binding|INFO|Releasing lport a2760dfe-6dd6-4319-8bbc-a3a72b9651da from this chassis (sb_readonly=0)
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.798 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4f23928-daba-425d-b61c-e65657ec3386.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4f23928-daba-425d-b61c-e65657ec3386.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.799 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28452e46-cf5d-4706-b779-2c98d4766379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.800 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-f4f23928-daba-425d-b61c-e65657ec3386
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/f4f23928-daba-425d-b61c-e65657ec3386.pid.haproxy
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID f4f23928-daba-425d-b61c-e65657ec3386
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:52:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.801 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'env', 'PROCESS_TAG=haproxy-f4f23928-daba-425d-b61c-e65657ec3386', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4f23928-daba-425d-b61c-e65657ec3386.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.950 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.970 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Releasing lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.970 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance network_info: |[{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.972 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start _get_guest_xml network_info=[{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.977 2 WARNING nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.982 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.982 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.985 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.986 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.986 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.986 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:52:39 np0005481065 nova_compute[260935]: 2025-10-11 08:52:39.991 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:40 np0005481065 podman[307905]: 2025-10-11 08:52:40.280772286 +0000 UTC m=+0.069481206 container create f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.318 2 DEBUG nova.compute.manager [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-changed-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.320 2 DEBUG nova.compute.manager [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Refreshing instance network info cache due to event network-changed-81c1d19a-c479-4381-9557-92f3e52b0cf0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.321 2 DEBUG oslo_concurrency.lockutils [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.321 2 DEBUG oslo_concurrency.lockutils [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.322 2 DEBUG nova.network.neutron [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Refreshing network info cache for port 81c1d19a-c479-4381-9557-92f3e52b0cf0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.325 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172760.3139884, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Started (Lifecycle Event)#033[00m
Oct 11 04:52:40 np0005481065 podman[307905]: 2025-10-11 08:52:40.243475961 +0000 UTC m=+0.032184861 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:52:40 np0005481065 systemd[1]: Started libpod-conmon-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175.scope.
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.358 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.367 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172760.3156483, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.367 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:52:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93672fbd07a97df543920eefb51abcbc13f5aadfb34ff046cc470d22c59772b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.386 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.394 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:40 np0005481065 podman[307905]: 2025-10-11 08:52:40.404234008 +0000 UTC m=+0.192942978 container init f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:52:40 np0005481065 podman[307905]: 2025-10-11 08:52:40.414810227 +0000 UTC m=+0.203519147 container start f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721062941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:40 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : New worker (307928) forked
Oct 11 04:52:40 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : Loading success.
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.461 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.489 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.493 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:40Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:90:73 10.100.0.12
Oct 11 04:52:40 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:40Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:90:73 10.100.0.12
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.524 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172745.4866738, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.525 2 INFO nova.compute.manager [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.556 2 DEBUG nova.compute.manager [None req-8ec6ec35-7dfa-4c2d-adcf-996cf8d12f40 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.594 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.595 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.596 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.596 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.596 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Processing event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.597 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.597 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.598 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.598 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.599 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] No waiting events found dispatching network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.599 2 WARNING nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received unexpected event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.600 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.604 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172760.6039762, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.604 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.609 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.613 2 INFO nova.virt.libvirt.driver [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance spawned successfully.#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.613 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.636 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.643 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.646 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.647 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.647 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.648 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.648 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.648 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.681 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.710 2 INFO nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 7.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.710 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.771 2 INFO nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 8.13 seconds to build instance.#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.788 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592473107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.934 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.936 2 DEBUG nova.virt.libvirt.vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:36Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.938 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.940 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.941 2 DEBUG nova.objects.instance [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.958 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <uuid>5750649d-960f-42d5-b127-de8b9a2bee8f</uuid>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <name>instance-0000002a</name>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:name>tempest-InstanceActionsTestJSON-server-1556963080</nova:name>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:52:39</nova:creationTime>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:user uuid="aeed30817a7740109e765d227ff2b78a">tempest-InstanceActionsTestJSON-761055393-project-member</nova:user>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:project uuid="e394c641aa0e46e2a7d8129cd88c9a01">tempest-InstanceActionsTestJSON-761055393</nova:project>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <nova:port uuid="81c1d19a-c479-4381-9557-92f3e52b0cf0">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <entry name="serial">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <entry name="uuid">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:49:c0:a4"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <target dev="tap81c1d19a-c4"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/console.log" append="off"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:52:40 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:52:40 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:52:40 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:52:40 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.960 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Preparing to wait for external event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.960 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.961 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.961 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.962 2 DEBUG nova.virt.libvirt.vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:36Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.962 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.963 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.963 2 DEBUG os_vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.968 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c1d19a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81c1d19a-c4, col_values=(('external_ids', {'iface-id': '81c1d19a-c479-4381-9557-92f3e52b0cf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:c0:a4', 'vm-uuid': '5750649d-960f-42d5-b127-de8b9a2bee8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:40 np0005481065 NetworkManager[44960]: <info>  [1760172760.9717] manager: (tap81c1d19a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:40 np0005481065 nova_compute[260935]: 2025-10-11 08:52:40.979 2 INFO os_vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.032 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.033 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.034 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] No VIF found with MAC fa:16:3e:49:c0:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.034 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Using config drive#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.070 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.554 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Creating config drive at /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.565 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j82h1qi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.732 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j82h1qi" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.774 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.778 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.820 2 DEBUG nova.network.neutron [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updated VIF entry in instance network info cache for port 81c1d19a-c479-4381-9557-92f3e52b0cf0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.821 2 DEBUG nova.network.neutron [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.846 2 DEBUG oslo_concurrency.lockutils [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.993 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:41 np0005481065 nova_compute[260935]: 2025-10-11 08:52:41.994 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deleting local config drive /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config because it was imported into RBD.#033[00m
Oct 11 04:52:42 np0005481065 kernel: tap81c1d19a-c4: entered promiscuous mode
Oct 11 04:52:42 np0005481065 NetworkManager[44960]: <info>  [1760172762.0790] manager: (tap81c1d19a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct 11 04:52:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:42Z|00333|binding|INFO|Claiming lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 for this chassis.
Oct 11 04:52:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:42Z|00334|binding|INFO|81c1d19a-c479-4381-9557-92f3e52b0cf0: Claiming fa:16:3e:49:c0:a4 10.100.0.10
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.088 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.089 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b bound to our chassis#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.091 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b#033[00m
Oct 11 04:52:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:42Z|00335|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 ovn-installed in OVS
Oct 11 04:52:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:42Z|00336|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 up in Southbound
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.117 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8a4382-883c-4552-887d-74cb430a90f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.118 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa9adc37-a1 in ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.120 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa9adc37-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.120 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f21d3c8e-eff6-4a85-af4a-d515c7d13957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7349045f-3f32-4a88-b249-b829069f4a31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.138 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1b54a8f3-2bcc-4054-b830-aac15465ad4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 systemd-machined[215705]: New machine qemu-46-instance-0000002a.
Oct 11 04:52:42 np0005481065 systemd[1]: Started Virtual Machine qemu-46-instance-0000002a.
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.152 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e81a960a-762c-421c-aea0-3f38b79bb5d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 systemd-udevd[308055]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:42 np0005481065 NetworkManager[44960]: <info>  [1760172762.1857] device (tap81c1d19a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:52:42 np0005481065 NetworkManager[44960]: <info>  [1760172762.1873] device (tap81c1d19a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.198 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[022edfd7-1a3f-4cd7-9d8d-c0ccdd52a9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.206 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a444b95-08d7-4f66-afe4-14f69de5b2bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 NetworkManager[44960]: <info>  [1760172762.2086] manager: (tapaa9adc37-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.253 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[07e56530-1f9e-4bbb-a0fd-14adc14e7e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.257 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[79d91093-29ea-42ce-8faf-a9a5a7b514f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 NetworkManager[44960]: <info>  [1760172762.2900] device (tapaa9adc37-a0): carrier: link connected
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.297 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[88ed452e-5aa6-47de-8191-c78f0fd9ddc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.320 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a80b66e4-57e5-4b3f-91f5-3956b979c1d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460699, 'reachable_time': 26270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308084, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.350 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e109ac35-3980-4790-ac45-a39d5e97711e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:3618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460699, 'tstamp': 460699}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308085, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90a4cd13-f44b-4b0b-9e15-6a0cb0154457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460699, 'reachable_time': 26270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308086, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.430 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eba3ce42-d00b-4e17-be33-f5e2db698605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.527 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1636bd-1adc-4dd8-9332-c75c0c243127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.529 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.530 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.531 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9adc37-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 kernel: tapaa9adc37-a0: entered promiscuous mode
Oct 11 04:52:42 np0005481065 NetworkManager[44960]: <info>  [1760172762.5346] manager: (tapaa9adc37-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.539 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa9adc37-a0, col_values=(('external_ids', {'iface-id': '3043e870-2e1f-4df6-b72c-a581ec61613e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:42Z|00337|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.577 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[842d28c4-d081-4a69-b14c-d8a744db63e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.580 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:52:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.581 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'env', 'PROCESS_TAG=haproxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.907 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.910 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.911 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.911 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.911 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Processing event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.912 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.912 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.912 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.913 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.913 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:42 np0005481065 nova_compute[260935]: 2025-10-11 08:52:42.913 2 WARNING nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:52:43 np0005481065 podman[308159]: 2025-10-11 08:52:43.069994335 +0000 UTC m=+0.113502011 container create 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.084 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.085 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.085 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.085 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.086 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.088 2 INFO nova.compute.manager [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Terminating instance#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.090 2 DEBUG nova.compute.manager [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:52:43 np0005481065 podman[308159]: 2025-10-11 08:52:43.006132229 +0000 UTC m=+0.049639915 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:52:43 np0005481065 kernel: tapa5dcad71-6c (unregistering): left promiscuous mode
Oct 11 04:52:43 np0005481065 NetworkManager[44960]: <info>  [1760172763.1385] device (tapa5dcad71-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00338|binding|INFO|Releasing lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00339|binding|INFO|Setting lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e down in Southbound
Oct 11 04:52:43 np0005481065 systemd[1]: Started libpod-conmon-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847.scope.
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00340|binding|INFO|Removing iface tapa5dcad71-6c ovn-installed in OVS
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.172 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:7a:6c 10.100.0.9'], port_security=['fa:16:3e:40:7a:6c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd616f8b-2be3-455f-8979-ac6e9e57af2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4f23928-daba-425d-b61c-e65657ec3386', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fb895e8125a437b8cc29be31706fccb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff7ea8b2-ee94-49d1-aa81-ade5270f3ae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d8b754-3910-4271-b26f-fbc82b8c8419, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 11 04:52:43 np0005481065 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 3.324s CPU time.
Oct 11 04:52:43 np0005481065 systemd-machined[215705]: Machine qemu-45-instance-00000029 terminated.
Oct 11 04:52:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aaec042284ad1b73f2650a678143c3397946f80b5c9fd24812807415a14b7ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:43 np0005481065 podman[308159]: 2025-10-11 08:52:43.223333052 +0000 UTC m=+0.266840778 container init 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:43 np0005481065 podman[308159]: 2025-10-11 08:52:43.234988811 +0000 UTC m=+0.278496487 container start 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : New worker (308183) forked
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : Loading success.
Oct 11 04:52:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.287 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e in datapath f4f23928-daba-425d-b61c-e65657ec3386 unbound from our chassis#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.289 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4f23928-daba-425d-b61c-e65657ec3386, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.289 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d60c86a7-fc0a-491c-b094-55a5129ac6b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.290 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 namespace which is not needed anymore#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.350 2 INFO nova.virt.libvirt.driver [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance destroyed successfully.#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.351 2 DEBUG nova.objects.instance [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lazy-loading 'resources' on Instance uuid dd616f8b-2be3-455f-8979-ac6e9e57af2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00341|binding|INFO|Releasing lport a2760dfe-6dd6-4319-8bbc-a3a72b9651da from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00342|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00343|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.371 2 DEBUG nova.virt.libvirt.vif [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1610135239',display_name='tempest-ImagesNegativeTestJSON-server-1610135239',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1610135239',id=41,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fb895e8125a437b8cc29be31706fccb',ramdisk_id='',reservation_id='r-xtlxm36r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1030914515',owner_user_name='tempest-ImagesNegativeTestJSON-1030914515-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:40Z,user_data=None,user_id='713a9b01d75f4fd2b8ad41bac3ba6343',uuid=dd616f8b-2be3-455f-8979-ac6e9e57af2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.372 2 DEBUG nova.network.os_vif_util [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converting VIF {"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.374 2 DEBUG nova.network.os_vif_util [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.374 2 DEBUG os_vif [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5dcad71-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [WARNING]  (307924) : Exiting Master process...
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [WARNING]  (307924) : Exiting Master process...
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [ALERT]    (307924) : Current worker (307928) exited with code 143 (Terminated)
Oct 11 04:52:43 np0005481065 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [WARNING]  (307924) : All workers exited. Exiting... (0)
Oct 11 04:52:43 np0005481065 systemd[1]: libpod-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175.scope: Deactivated successfully.
Oct 11 04:52:43 np0005481065 podman[308219]: 2025-10-11 08:52:43.484984251 +0000 UTC m=+0.068463147 container died f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:52:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-93672fbd07a97df543920eefb51abcbc13f5aadfb34ff046cc470d22c59772b4-merged.mount: Deactivated successfully.
Oct 11 04:52:43 np0005481065 podman[308219]: 2025-10-11 08:52:43.538974648 +0000 UTC m=+0.122453584 container cleanup f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.544 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.545 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172763.5433564, 5750649d-960f-42d5-b127-de8b9a2bee8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.545 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.548 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.553 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance spawned successfully.#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.553 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00344|binding|INFO|Releasing lport a2760dfe-6dd6-4319-8bbc-a3a72b9651da from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00345|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:43Z|00346|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.564 2 INFO os_vif [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c')#033[00m
Oct 11 04:52:43 np0005481065 systemd[1]: libpod-conmon-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175.scope: Deactivated successfully.
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.596 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.609 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:43 np0005481065 podman[308246]: 2025-10-11 08:52:43.61435669 +0000 UTC m=+0.047178915 container remove f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.615 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.615 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.616 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.616 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.616 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.617 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.623 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[188c5fce-cf86-42c8-be38-611022c967fe]: (4, ('Sat Oct 11 08:52:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 (f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175)\nf8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175\nSat Oct 11 08:52:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 (f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175)\nf8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.626 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6514d528-699e-4158-b6c3-f158e2c0a034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.627 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f23928-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 kernel: tapf4f23928-d0: left promiscuous mode
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.636 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[39324262-c8f5-4a3b-a90f-0706b6ad145c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.653 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.653 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172763.54498, 5750649d-960f-42d5-b127-de8b9a2bee8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.654 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.656 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5b9a98-967b-4ed1-8595-a6963aa839ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[66f91a56-c875-4344-a80e-7e7914bb09cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f89e000f-ea44-4157-a751-1f9ed5861ad9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460417, 'reachable_time': 20140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308279, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.689 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.689 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[899bdcbb-400a-4ab8-99fb-755fc331fe4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:43 np0005481065 systemd[1]: run-netns-ovnmeta\x2df4f23928\x2ddaba\x2d425d\x2db61c\x2de65657ec3386.mount: Deactivated successfully.
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.697 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.703 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172763.5480337, 5750649d-960f-42d5-b127-de8b9a2bee8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.703 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.707 2 INFO nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 7.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.707 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.740 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.748 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.785 2 INFO nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 8.25 seconds to build instance.#033[00m
Oct 11 04:52:43 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.805 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 288 op/s
Oct 11 04:52:44 np0005481065 nova_compute[260935]: 2025-10-11 08:52:43.999 2 INFO nova.virt.libvirt.driver [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deleting instance files /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d_del#033[00m
Oct 11 04:52:44 np0005481065 nova_compute[260935]: 2025-10-11 08:52:44.000 2 INFO nova.virt.libvirt.driver [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deletion of /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d_del complete#033[00m
Oct 11 04:52:44 np0005481065 nova_compute[260935]: 2025-10-11 08:52:44.095 2 INFO nova.compute.manager [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:52:44 np0005481065 nova_compute[260935]: 2025-10-11 08:52:44.096 2 DEBUG oslo.service.loopingcall [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:52:44 np0005481065 nova_compute[260935]: 2025-10-11 08:52:44.096 2 DEBUG nova.compute.manager [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:52:44 np0005481065 nova_compute[260935]: 2025-10-11 08:52:44.096 2 DEBUG nova.network.neutron [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.295 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-unplugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.296 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.297 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.297 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.298 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] No waiting events found dispatching network-vif-unplugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.298 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-unplugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.299 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.299 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.300 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.300 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.300 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] No waiting events found dispatching network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.301 2 WARNING nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received unexpected event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.459 2 DEBUG nova.network.neutron [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.476 2 INFO nova.compute.manager [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 1.38 seconds to deallocate network for instance.#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.531 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.533 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.551 2 DEBUG nova.compute.manager [req-a7d8479e-41c3-4975-980a-7b86dd80df05 req-1cb1bca1-34ae-4f1f-b45b-e57931351c31 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-deleted-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.625 2 DEBUG oslo_concurrency.processutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 224 op/s
Oct 11 04:52:45 np0005481065 nova_compute[260935]: 2025-10-11 08:52:45.939 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 04:52:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545751957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.181 2 DEBUG oslo_concurrency.processutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.189 2 DEBUG nova.compute.provider_tree [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.209 2 DEBUG nova.scheduler.client.report [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.238 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.245 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.246 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.247 2 INFO nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Rebooting instance#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.272 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.272 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquired lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.273 2 DEBUG nova.network.neutron [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.284 2 INFO nova.scheduler.client.report [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Deleted allocations for instance dd616f8b-2be3-455f-8979-ac6e9e57af2d#033[00m
Oct 11 04:52:46 np0005481065 nova_compute[260935]: 2025-10-11 08:52:46.408 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:47 np0005481065 nova_compute[260935]: 2025-10-11 08:52:47.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 321 op/s
Oct 11 04:52:48 np0005481065 kernel: tap20c8164c-07 (unregistering): left promiscuous mode
Oct 11 04:52:48 np0005481065 NetworkManager[44960]: <info>  [1760172768.2077] device (tap20c8164c-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:48Z|00347|binding|INFO|Releasing lport 20c8164c-0779-4589-b3d3-afc10a47631f from this chassis (sb_readonly=0)
Oct 11 04:52:48 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:48Z|00348|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f down in Southbound
Oct 11 04:52:48 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:48Z|00349|binding|INFO|Removing iface tap20c8164c-07 ovn-installed in OVS
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.234 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.236 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.239 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67cfa682-d7dd-46c8-b710-a32089e927d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:48 np0005481065 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 11 04:52:48 np0005481065 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 13.280s CPU time.
Oct 11 04:52:48 np0005481065 systemd-machined[215705]: Machine qemu-44-instance-00000028 terminated.
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:48 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:48 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [WARNING]  (307045) : Exiting Master process...
Oct 11 04:52:48 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [WARNING]  (307045) : Exiting Master process...
Oct 11 04:52:48 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [ALERT]    (307045) : Current worker (307047) exited with code 143 (Terminated)
Oct 11 04:52:48 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [WARNING]  (307045) : All workers exited. Exiting... (0)
Oct 11 04:52:48 np0005481065 systemd[1]: libpod-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3.scope: Deactivated successfully.
Oct 11 04:52:48 np0005481065 podman[308325]: 2025-10-11 08:52:48.445047653 +0000 UTC m=+0.071504694 container died cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:52:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-95e0d34e1c9b5cb00bbf8b4589ff9eade911de7e540b11ead99c1576f4f3ca7c-merged.mount: Deactivated successfully.
Oct 11 04:52:48 np0005481065 podman[308325]: 2025-10-11 08:52:48.507671254 +0000 UTC m=+0.134128265 container cleanup cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:52:48 np0005481065 systemd[1]: libpod-conmon-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3.scope: Deactivated successfully.
Oct 11 04:52:48 np0005481065 podman[308366]: 2025-10-11 08:52:48.586333768 +0000 UTC m=+0.051343023 container remove cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.597 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87495b02-ff49-4b78-8ba0-391cea7d9c34]: (4, ('Sat Oct 11 08:52:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3)\ncf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3\nSat Oct 11 08:52:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3)\ncf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.599 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[746c2d9d-ac21-4595-8fd8-36e0bac42fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.600 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:48 np0005481065 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52300dc3-1664-45fc-a3a0-e0805eec68de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.683 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56ea8e36-61bb-4fae-ba9f-3c4a1ae68ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.685 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca6de7c-07b6-450f-b9b3-ec750af08fb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.685 2 DEBUG nova.network.neutron [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.711 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Releasing lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.713 2 DEBUG nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.713 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33b1c907-9305-45c8-ad84-28f64a0d5a7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459037, 'reachable_time': 32395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308384, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.720 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.720 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4c116757-6d35-40b0-b39f-1f64d3080d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.907 2 DEBUG nova.compute.manager [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.907 2 DEBUG oslo_concurrency.lockutils [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.908 2 DEBUG oslo_concurrency.lockutils [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.908 2 DEBUG oslo_concurrency.lockutils [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.908 2 DEBUG nova.compute.manager [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.909 2 WARNING nova.compute.manager [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state active and task_state rebuilding.#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.956 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.968 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance destroyed successfully.#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.974 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance destroyed successfully.#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.975 2 DEBUG nova.virt.libvirt.vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member
'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:34Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.977 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.978 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.978 2 DEBUG os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20c8164c-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:48 np0005481065 nova_compute[260935]: 2025-10-11 08:52:48.989 2 INFO os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')#033[00m
Oct 11 04:52:49 np0005481065 kernel: tap81c1d19a-c4 (unregistering): left promiscuous mode
Oct 11 04:52:49 np0005481065 NetworkManager[44960]: <info>  [1760172769.0110] device (tap81c1d19a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:49Z|00350|binding|INFO|Releasing lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 from this chassis (sb_readonly=0)
Oct 11 04:52:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:49Z|00351|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 down in Southbound
Oct 11 04:52:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:49Z|00352|binding|INFO|Removing iface tap81c1d19a-c4 ovn-installed in OVS
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.028 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.029 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b unbound from our chassis#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.030 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69a16a00-7120-4a51-9b02-4e21c3eef7f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.031 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace which is not needed anymore#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 11 04:52:49 np0005481065 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Consumed 6.736s CPU time.
Oct 11 04:52:49 np0005481065 systemd-machined[215705]: Machine qemu-46-instance-0000002a terminated.
Oct 11 04:52:49 np0005481065 systemd-udevd[308305]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:49 np0005481065 kernel: tap81c1d19a-c4: entered promiscuous mode
Oct 11 04:52:49 np0005481065 NetworkManager[44960]: <info>  [1760172769.1912] manager: (tap81c1d19a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Oct 11 04:52:49 np0005481065 kernel: tap81c1d19a-c4 (unregistering): left promiscuous mode
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: hostname: compute-0
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.225 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance destroyed successfully.#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.226 2 DEBUG nova.objects.instance [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'resources' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.231 2 DEBUG nova.compute.manager [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.232 2 DEBUG oslo_concurrency.lockutils [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.232 2 DEBUG oslo_concurrency.lockutils [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:49 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:49 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:49 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [WARNING]  (308181) : Exiting Master process...
Oct 11 04:52:49 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [ALERT]    (308181) : Current worker (308183) exited with code 143 (Terminated)
Oct 11 04:52:49 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [WARNING]  (308181) : All workers exited. Exiting... (0)
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.233 2 DEBUG oslo_concurrency.lockutils [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.236 2 DEBUG nova.compute.manager [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.236 2 WARNING nova.compute.manager [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct 11 04:52:49 np0005481065 systemd[1]: libpod-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847.scope: Deactivated successfully.
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 podman[308424]: 2025-10-11 08:52:49.245084847 +0000 UTC m=+0.080307352 container died 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.252 2 DEBUG nova.virt.libvirt.vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:48Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.252 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.253 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.253 2 DEBUG os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c1d19a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.260 2 INFO os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')#033[00m
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.270 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start _get_guest_xml network_info=[{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.275 2 WARNING nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.281 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:52:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.282 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:52:49 np0005481065 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.286 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.286 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.287 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.287 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.287 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:52:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.289 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.289 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.290 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.290 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.290 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.291 2 DEBUG nova.objects.instance [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1aaec042284ad1b73f2650a678143c3397946f80b5c9fd24812807415a14b7ec-merged.mount: Deactivated successfully.
Oct 11 04:52:49 np0005481065 podman[308424]: 2025-10-11 08:52:49.312217476 +0000 UTC m=+0.147439921 container cleanup 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.333 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:49 np0005481065 systemd[1]: libpod-conmon-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847.scope: Deactivated successfully.
Oct 11 04:52:49 np0005481065 podman[308474]: 2025-10-11 08:52:49.40078362 +0000 UTC m=+0.050919361 container remove 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.410 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c320c23-8ebb-4de7-8337-cedaf5c898aa]: (4, ('Sat Oct 11 08:52:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847)\n94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847\nSat Oct 11 08:52:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847)\n94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.412 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca27a427-f626-4075-b0e7-b23bffbe704e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.413 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 kernel: tapaa9adc37-a0: left promiscuous mode
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[461558be-f03f-4c9c-8c4f-a17277eaf4a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.472 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting instance files /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.473 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deletion of /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del complete#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.474 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e772696-1a5d-4068-a05b-316855561cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.475 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00e5cb40-e77c-4460-a656-810bacf7e1a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.497 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[215b045c-514d-4803-81b7-a25a5534c73c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460689, 'reachable_time': 37724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308504, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.500 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.501 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[335c0ea4-0efd-43cb-8411-691acf289fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:49 np0005481065 systemd[1]: run-netns-ovnmeta\x2daa9adc37\x2da18c\x2d4b31\x2db4cc\x2dd46c43f91e9b.mount: Deactivated successfully.
Oct 11 04:52:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449534070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.779 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.825 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 298 op/s
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.979 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:52:49 np0005481065 nova_compute[260935]: 2025-10-11 08:52:49.980 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating image(s)#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.041 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.078 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.116 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.122 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.205 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.206 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.206 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.207 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.234 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.240 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546140836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.319 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.329 2 DEBUG nova.virt.libvirt.vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:48Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.330 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.332 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.334 2 DEBUG nova.objects.instance [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.351 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <uuid>5750649d-960f-42d5-b127-de8b9a2bee8f</uuid>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <name>instance-0000002a</name>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:name>tempest-InstanceActionsTestJSON-server-1556963080</nova:name>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:52:49</nova:creationTime>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:user uuid="aeed30817a7740109e765d227ff2b78a">tempest-InstanceActionsTestJSON-761055393-project-member</nova:user>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:project uuid="e394c641aa0e46e2a7d8129cd88c9a01">tempest-InstanceActionsTestJSON-761055393</nova:project>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <nova:port uuid="81c1d19a-c479-4381-9557-92f3e52b0cf0">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <entry name="serial">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <entry name="uuid">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:49:c0:a4"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <target dev="tap81c1d19a-c4"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/console.log" append="off"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <input type="keyboard" bus="usb"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:52:50 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:52:50 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:52:50 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:52:50 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.352 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.353 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.354 2 DEBUG nova.virt.libvirt.vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:48Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.354 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.355 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.355 2 DEBUG os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c1d19a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81c1d19a-c4, col_values=(('external_ids', {'iface-id': '81c1d19a-c479-4381-9557-92f3e52b0cf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:c0:a4', 'vm-uuid': '5750649d-960f-42d5-b127-de8b9a2bee8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 NetworkManager[44960]: <info>  [1760172770.4051] manager: (tap81c1d19a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.411 2 INFO os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')#033[00m
Oct 11 04:52:50 np0005481065 kernel: tap81c1d19a-c4: entered promiscuous mode
Oct 11 04:52:50 np0005481065 NetworkManager[44960]: <info>  [1760172770.5204] manager: (tap81c1d19a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:50Z|00353|binding|INFO|Claiming lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 for this chassis.
Oct 11 04:52:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:50Z|00354|binding|INFO|81c1d19a-c479-4381-9557-92f3e52b0cf0: Claiming fa:16:3e:49:c0:a4 10.100.0.10
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.529 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.530 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b bound to our chassis#033[00m
Oct 11 04:52:50 np0005481065 NetworkManager[44960]: <info>  [1760172770.5328] device (tap81c1d19a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.533 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b#033[00m
Oct 11 04:52:50 np0005481065 NetworkManager[44960]: <info>  [1760172770.5334] device (tap81c1d19a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:52:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:50Z|00355|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 ovn-installed in OVS
Oct 11 04:52:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:50Z|00356|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 up in Southbound
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.553 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04427270-6e6b-43df-a952-2bb1145525d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.555 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa9adc37-a1 in ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.559 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa9adc37-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.559 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ead639b-8f6c-47ce-a625-d5dbfa55cbf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d215652-de6e-44ad-a55d-9266cbf4fb4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 systemd-machined[215705]: New machine qemu-47-instance-0000002a.
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.583 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c6741854-52b1-4880-8185-bca15cc34fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.604 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[493a476e-5e9b-41b5-9de0-72503101b161]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.613 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.640 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[575c2dc9-332c-4e71-b0bb-7f3e9eb6589f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f12892e7-89ff-4885-b6b4-a40f821b4175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 NetworkManager[44960]: <info>  [1760172770.6486] manager: (tapaa9adc37-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/169)
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.702 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6f64a8ee-13b2-4fd0-b7d5-2d20353f98fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.706 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[97253bb6-a19e-474c-a8ac-7928b2aed5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 NetworkManager[44960]: <info>  [1760172770.7369] device (tapaa9adc37-a0): carrier: link connected
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.748 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b38dda-b1b3-409a-92ad-02fe7a750c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.750 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.777 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aee7c13a-5bd7-470a-945d-878f009a9c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461543, 'reachable_time': 16788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308732, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.803 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62111d47-83fa-470c-98b3-5c919d1252bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:3618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461543, 'tstamp': 461543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308756, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.893 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79a5ab5f-b1a8-4a00-863b-737c03794393]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461543, 'reachable_time': 16788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308770, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[191e469d-c14e-490b-abb2-80005c798968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.946 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.946 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Ensure instance console log exists: /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.964 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.965 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.965 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.968 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start _get_guest_xml network_info=[{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.975 2 WARNING nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.980 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.981 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.985 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.985 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.985 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.986 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.986 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.986 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:52:50 np0005481065 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.007 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.022 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc786796-b098-4d8f-97da-1bd6f09ec5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9adc37-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:51 np0005481065 NetworkManager[44960]: <info>  [1760172771.0275] manager: (tapaa9adc37-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 11 04:52:51 np0005481065 kernel: tapaa9adc37-a0: entered promiscuous mode
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.032 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa9adc37-a0, col_values=(('external_ids', {'iface-id': '3043e870-2e1f-4df6-b72c-a581ec61613e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:51Z|00357|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.037 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.039 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96d1781b-caba-42fc-b38c-09c394b5b4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.040 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:52:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.041 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'env', 'PROCESS_TAG=haproxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.387 2 DEBUG nova.compute.manager [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.387 2 DEBUG oslo_concurrency.lockutils [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.388 2 DEBUG oslo_concurrency.lockutils [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.388 2 DEBUG oslo_concurrency.lockutils [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.388 2 DEBUG nova.compute.manager [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.389 2 WARNING nova.compute.manager [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct 11 04:52:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437292564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.473 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:51 np0005481065 podman[308863]: 2025-10-11 08:52:51.489394977 +0000 UTC m=+0.077695308 container create 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.512 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.518 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:51 np0005481065 systemd[1]: Started libpod-conmon-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c.scope.
Oct 11 04:52:51 np0005481065 podman[308863]: 2025-10-11 08:52:51.45097356 +0000 UTC m=+0.039273951 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.563 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 5750649d-960f-42d5-b127-de8b9a2bee8f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172771.5501351, 5750649d-960f-42d5-b127-de8b9a2bee8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.564 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.575 2 DEBUG nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:52:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.585 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance rebooted successfully.#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.585 2 DEBUG nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0972cf6f05853bc79fa51b63023889054a6bcc2df62a3dc5f83edc5bef62cfe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.596 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:51 np0005481065 podman[308863]: 2025-10-11 08:52:51.601815836 +0000 UTC m=+0.190116187 container init 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.603 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:51 np0005481065 podman[308863]: 2025-10-11 08:52:51.608630299 +0000 UTC m=+0.196930640 container start 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.629 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.629 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172771.5525095, 5750649d-960f-42d5-b127-de8b9a2bee8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.629 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:52:51 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : New worker (308905) forked
Oct 11 04:52:51 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : Loading success.
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.653 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.655 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.698 2 DEBUG nova.compute.manager [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG oslo_concurrency.lockutils [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG oslo_concurrency.lockutils [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG oslo_concurrency.lockutils [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG nova.compute.manager [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:51 np0005481065 nova_compute[260935]: 2025-10-11 08:52:51.699 2 WARNING nova.compute.manager [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:52:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 298 op/s
Oct 11 04:52:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:52:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502272125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.010 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.012 2 DEBUG nova.virt.libvirt.vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-Ser
verDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:49Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.012 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.013 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.017 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <uuid>98fabab3-6b4a-44f3-b232-f23f34f4e19f</uuid>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <name>instance-00000028</name>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1265647930</nova:name>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:52:50</nova:creationTime>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <nova:port uuid="20c8164c-0779-4589-b3d3-afc10a47631f">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <entry name="serial">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <entry name="uuid">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:bf:90:73"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <target dev="tap20c8164c-07"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log" append="off"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:52:52 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:52:52 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:52:52 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:52:52 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.018 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Preparing to wait for external event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.019 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.019 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.019 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.020 2 DEBUG nova.virt.libvirt.vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:49Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.020 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.021 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.021 2 DEBUG os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20c8164c-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20c8164c-07, col_values=(('external_ids', {'iface-id': '20c8164c-0779-4589-b3d3-afc10a47631f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:90:73', 'vm-uuid': '98fabab3-6b4a-44f3-b232-f23f34f4e19f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:52 np0005481065 NetworkManager[44960]: <info>  [1760172772.0303] manager: (tap20c8164c-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.039 2 INFO os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.097 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.098 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.098 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:bf:90:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.099 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Using config drive#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.130 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.137 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172757.121933, 872b1c1d-bc87-4123-a599-4d64b89018aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.138 2 INFO nova.compute.manager [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.157 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.164 2 DEBUG nova.compute.manager [None req-b0b99b4c-71f4-4506-8327-1e4c8c21c884 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.206 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'keypairs' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.784 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating config drive at /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.790 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzkc8jeu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.940 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzkc8jeu" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.989 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:52:52 np0005481065 nova_compute[260935]: 2025-10-11 08:52:52.994 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.207 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.210 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting local config drive /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config because it was imported into RBD.#033[00m
Oct 11 04:52:53 np0005481065 kernel: tap20c8164c-07: entered promiscuous mode
Oct 11 04:52:53 np0005481065 NetworkManager[44960]: <info>  [1760172773.2797] manager: (tap20c8164c-07): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Oct 11 04:52:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:53Z|00358|binding|INFO|Claiming lport 20c8164c-0779-4589-b3d3-afc10a47631f for this chassis.
Oct 11 04:52:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:53Z|00359|binding|INFO|20c8164c-0779-4589-b3d3-afc10a47631f: Claiming fa:16:3e:bf:90:73 10.100.0.12
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:53Z|00360|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f ovn-installed in OVS
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:53 np0005481065 systemd-machined[215705]: New machine qemu-48-instance-00000028.
Oct 11 04:52:53 np0005481065 systemd-udevd[309109]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:53 np0005481065 NetworkManager[44960]: <info>  [1760172773.3468] device (tap20c8164c-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:52:53 np0005481065 NetworkManager[44960]: <info>  [1760172773.3481] device (tap20c8164c-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:52:53 np0005481065 systemd[1]: Started Virtual Machine qemu-48-instance-00000028.
Oct 11 04:52:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:53Z|00361|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f up in Southbound
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.424 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.427 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.431 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.444 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[697e1822-0d94-4f16-ac76-3584939955ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.444 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.447 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e71e3b25-707d-456e-a1e9-86bc28963f60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.449 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[618e2487-1e39-44d5-bafa-696a66f86c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.469 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf71ae4-43dc-423e-8b23-ef1de4c776f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.497 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0aac54-8570-48e7-a254-e66469ae1a5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.543 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac71e67-0fec-486b-9b7c-a4328b899e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 NetworkManager[44960]: <info>  [1760172773.5538] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Oct 11 04:52:53 np0005481065 systemd-udevd[309111]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.552 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f8854a-835c-43dc-811d-2bfc663b0db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.608 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2452c3-f950-4495-ac3c-165c2f2f117f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.618 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5bf88e-88cd-4a38-8f48-6decb140cf88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:52:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:52:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:52:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:52:53 np0005481065 NetworkManager[44960]: <info>  [1760172773.6734] device (tape5d4fc7a-10): carrier: link connected
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.690 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bbae958a-cbea-4d52-b9b1-5f2ce0fcac0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75c16df7-d6f7-44b6-bb2f-0b3edde2056f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461837, 'reachable_time': 30932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309213, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fb8ee7-8627-4a1c-b48b-763044b573d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461837, 'tstamp': 461837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309226, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5128597-baf3-4b52-809c-516c5f8b7b82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461837, 'reachable_time': 30932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309248, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54e54439-f705-4f77-b47e-f6e617a2bd07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.941 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96fa8750-eb22-47cc-8a5e-93bc5c5b9610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.942 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.944 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.945 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:53 np0005481065 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 04:52:53 np0005481065 NetworkManager[44960]: <info>  [1760172773.9484] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.952 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:53Z|00362|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:53 np0005481065 nova_compute[260935]: 2025-10-11 08:52:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.976 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.977 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[51e2e008-b8c4-4430-9430-9f7997e51d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.979 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:52:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.980 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.131 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.132 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.133 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.134 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.135 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.141 2 INFO nova.compute.manager [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Terminating instance#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.143 2 DEBUG nova.compute.manager [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:52:54 np0005481065 kernel: tap81c1d19a-c4 (unregistering): left promiscuous mode
Oct 11 04:52:54 np0005481065 NetworkManager[44960]: <info>  [1760172774.2048] device (tap81c1d19a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:54Z|00363|binding|INFO|Releasing lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 from this chassis (sb_readonly=0)
Oct 11 04:52:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:54Z|00364|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 down in Southbound
Oct 11 04:52:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:54Z|00365|binding|INFO|Removing iface tap81c1d19a-c4 ovn-installed in OVS
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.259 2 DEBUG nova.compute.manager [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.260 2 DEBUG oslo_concurrency.lockutils [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.260 2 DEBUG oslo_concurrency.lockutils [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.261 2 DEBUG oslo_concurrency.lockutils [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.261 2 DEBUG nova.compute.manager [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Processing event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:52:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.265 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:54 np0005481065 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 11 04:52:54 np0005481065 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 3.554s CPU time.
Oct 11 04:52:54 np0005481065 systemd-machined[215705]: Machine qemu-47-instance-0000002a terminated.
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.334 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.335 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.336 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.336 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.336 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.336 2 WARNING nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.337 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.337 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.337 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.338 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.338 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.338 2 WARNING nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.381 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.382 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 98fabab3-6b4a-44f3-b232-f23f34f4e19f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.382 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172774.3791225, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.383 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Started (Lifecycle Event)#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.391 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.396 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance destroyed successfully.#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.396 2 DEBUG nova.objects.instance [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'resources' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.398 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance spawned successfully.#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.399 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.462 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.471 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:54 np0005481065 podman[309364]: 2025-10-11 08:52:54.496345474 +0000 UTC m=+0.070700011 container create d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 04:52:54 np0005481065 systemd[1]: Started libpod-conmon-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58.scope.
Oct 11 04:52:54 np0005481065 podman[309364]: 2025-10-11 08:52:54.455275332 +0000 UTC m=+0.029629849 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:52:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1665eb2b30cde48dd305e3696f1763d29e793e23b05e0d4ebb02f8a2d77485/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.589 2 DEBUG nova.virt.libvirt.vif [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:51Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.590 2 DEBUG nova.network.os_vif_util [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.591 2 DEBUG nova.network.os_vif_util [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.591 2 DEBUG os_vif [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c1d19a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.596 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.596 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172774.3792953, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.596 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.606 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.607 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.607 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.608 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.608 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.610 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.615 2 INFO os_vif [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')#033[00m
Oct 11 04:52:54 np0005481065 podman[309364]: 2025-10-11 08:52:54.619521087 +0000 UTC m=+0.193875614 container init d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 04:52:54 np0005481065 podman[309364]: 2025-10-11 08:52:54.6305727 +0000 UTC m=+0.204927237 container start d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : New worker (309415) forked
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : Loading success.
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.681 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.692 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172774.3893898, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.692 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cc9bca7e-c460-4cd1-949e-0387bdd64ebf does not exist
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dcfcaca8-bda3-4291-9c36-4c00045a41a2 does not exist
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cb1fdd65-3a4c-4c01-a895-fac2b3993ccc does not exist
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:52:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:52:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.740 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b unbound from our chassis#033[00m
Oct 11 04:52:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.743 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3c212a78-a75e-47ff-add2-8301b10bc9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.745 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace which is not needed anymore#033[00m
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:52:54
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms']
Oct 11 04:52:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.831 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.836 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.847 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.875 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.908 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.909 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.909 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [WARNING]  (308903) : Exiting Master process...
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [WARNING]  (308903) : Exiting Master process...
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [ALERT]    (308903) : Current worker (308905) exited with code 143 (Terminated)
Oct 11 04:52:54 np0005481065 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [WARNING]  (308903) : All workers exited. Exiting... (0)
Oct 11 04:52:54 np0005481065 systemd[1]: libpod-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c.scope: Deactivated successfully.
Oct 11 04:52:54 np0005481065 podman[309485]: 2025-10-11 08:52:54.954173251 +0000 UTC m=+0.081801514 container died 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 04:52:54 np0005481065 nova_compute[260935]: 2025-10-11 08:52:54.972 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c0972cf6f05853bc79fa51b63023889054a6bcc2df62a3dc5f83edc5bef62cfe-merged.mount: Deactivated successfully.
Oct 11 04:52:55 np0005481065 podman[309485]: 2025-10-11 08:52:55.018246883 +0000 UTC m=+0.145875146 container cleanup 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 04:52:55 np0005481065 systemd[1]: libpod-conmon-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c.scope: Deactivated successfully.
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.130 2 INFO nova.virt.libvirt.driver [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deleting instance files /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f_del#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.132 2 INFO nova.virt.libvirt.driver [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deletion of /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f_del complete#033[00m
Oct 11 04:52:55 np0005481065 podman[309574]: 2025-10-11 08:52:55.134044418 +0000 UTC m=+0.084808049 container remove 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.146 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac9dfd9-777c-4095-af3a-68bff8651397]: (4, ('Sat Oct 11 08:52:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c)\n458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c\nSat Oct 11 08:52:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c)\n458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b816a8d-9c25-4627-a58b-031801dee4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.149 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:55 np0005481065 kernel: tapaa9adc37-a0: left promiscuous mode
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.178 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[311f0cff-0ab0-4c06-ac56-81d05d51f30f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.204 2 INFO nova.compute.manager [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.204 2 DEBUG oslo.service.loopingcall [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.205 2 DEBUG nova.compute.manager [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.205 2 DEBUG nova.network.neutron [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.207 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9684ce-8e97-4fee-b3a1-11749f897f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.208 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8af03e73-f226-484c-8a0b-013f75ff6d85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.229 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22676936-8b72-4206-86a6-1acfde8956d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461533, 'reachable_time': 40680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309596, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 systemd[1]: run-netns-ovnmeta\x2daa9adc37\x2da18c\x2d4b31\x2db4cc\x2dd46c43f91e9b.mount: Deactivated successfully.
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.235 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.235 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1048a372-432d-4686-a9e2-acd3fac83321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.469811004 +0000 UTC m=+0.058003902 container create 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.445775574 +0000 UTC m=+0.033968482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:52:55 np0005481065 systemd[1]: Started libpod-conmon-78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02.scope.
Oct 11 04:52:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.60156355 +0000 UTC m=+0.189756518 container init 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:52:55 np0005481065 podman[309642]: 2025-10-11 08:52:55.613404814 +0000 UTC m=+0.095424659 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.615875874 +0000 UTC m=+0.204068782 container start 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.620133995 +0000 UTC m=+0.208327183 container attach 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:52:55 np0005481065 crazy_kepler[309654]: 167 167
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.626503875 +0000 UTC m=+0.214696803 container died 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 04:52:55 np0005481065 systemd[1]: libpod-78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02.scope: Deactivated successfully.
Oct 11 04:52:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-be798599c83a490b6453859e9bfea1aa0cabd2b8c2cd155c295d6454d1fe7fe7-merged.mount: Deactivated successfully.
Oct 11 04:52:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:52:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:52:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:52:55 np0005481065 podman[309628]: 2025-10-11 08:52:55.673763471 +0000 UTC m=+0.261956399 container remove 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:52:55 np0005481065 systemd[1]: libpod-conmon-78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02.scope: Deactivated successfully.
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.764 2 DEBUG nova.network.neutron [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.778 2 INFO nova.compute.manager [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 0.57 seconds to deallocate network for instance.#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.814 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.814 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:55 np0005481065 nova_compute[260935]: 2025-10-11 08:52:55.885 2 DEBUG oslo_concurrency.processutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:55 np0005481065 podman[309688]: 2025-10-11 08:52:55.904045014 +0000 UTC m=+0.057924369 container create 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:52:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 11 04:52:55 np0005481065 systemd[1]: Started libpod-conmon-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope.
Oct 11 04:52:55 np0005481065 podman[309688]: 2025-10-11 08:52:55.878015888 +0000 UTC m=+0.031895283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:52:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:56 np0005481065 podman[309688]: 2025-10-11 08:52:56.00929998 +0000 UTC m=+0.163179375 container init 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:52:56 np0005481065 podman[309688]: 2025-10-11 08:52:56.017089721 +0000 UTC m=+0.170969076 container start 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:52:56 np0005481065 podman[309688]: 2025-10-11 08:52:56.02024013 +0000 UTC m=+0.174119495 container attach 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:52:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833367698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.390 2 DEBUG oslo_concurrency.processutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.397 2 DEBUG nova.compute.provider_tree [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.413 2 DEBUG nova.scheduler.client.report [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.430 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.452 2 INFO nova.scheduler.client.report [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Deleted allocations for instance 5750649d-960f-42d5-b127-de8b9a2bee8f#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.524 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.735 2 DEBUG nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.735 2 DEBUG oslo_concurrency.lockutils [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG oslo_concurrency.lockutils [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG oslo_concurrency.lockutils [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.736 2 WARNING nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state active and task_state None.#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-deleted-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.803 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 WARNING nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.805 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.805 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.805 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:56 np0005481065 nova_compute[260935]: 2025-10-11 08:52:56.805 2 WARNING nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:57 np0005481065 amazing_dewdney[309705]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:52:57 np0005481065 amazing_dewdney[309705]: --> relative data size: 1.0
Oct 11 04:52:57 np0005481065 amazing_dewdney[309705]: --> All data devices are unavailable
Oct 11 04:52:57 np0005481065 systemd[1]: libpod-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope: Deactivated successfully.
Oct 11 04:52:57 np0005481065 systemd[1]: libpod-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope: Consumed 1.059s CPU time.
Oct 11 04:52:57 np0005481065 conmon[309705]: conmon 044207d2cc99679b3686 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope/container/memory.events
Oct 11 04:52:57 np0005481065 podman[309688]: 2025-10-11 08:52:57.146014496 +0000 UTC m=+1.299893871 container died 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:52:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935-merged.mount: Deactivated successfully.
Oct 11 04:52:57 np0005481065 podman[309688]: 2025-10-11 08:52:57.217447076 +0000 UTC m=+1.371326441 container remove 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:52:57 np0005481065 systemd[1]: libpod-conmon-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope: Deactivated successfully.
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.638 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.639 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.642 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.642 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.643 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.645 2 INFO nova.compute.manager [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Terminating instance#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.647 2 DEBUG nova.compute.manager [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:52:57 np0005481065 kernel: tap20c8164c-07 (unregistering): left promiscuous mode
Oct 11 04:52:57 np0005481065 NetworkManager[44960]: <info>  [1760172777.7036] device (tap20c8164c-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:57Z|00366|binding|INFO|Releasing lport 20c8164c-0779-4589-b3d3-afc10a47631f from this chassis (sb_readonly=0)
Oct 11 04:52:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:57Z|00367|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f down in Southbound
Oct 11 04:52:57 np0005481065 ovn_controller[152945]: 2025-10-11T08:52:57Z|00368|binding|INFO|Removing iface tap20c8164c-07 ovn-installed in OVS
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.734 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:52:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.736 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis#033[00m
Oct 11 04:52:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.739 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:52:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.740 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d54683b-0c9e-4319-9636-6a3b7d814c39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.741 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 11 04:52:57 np0005481065 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000028.scope: Consumed 4.133s CPU time.
Oct 11 04:52:57 np0005481065 systemd-machined[215705]: Machine qemu-48-instance-00000028 terminated.
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.895 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance destroyed successfully.#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.895 2 DEBUG nova.objects.instance [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.918 2 DEBUG nova.virt.libvirt.vif [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:54Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.918 2 DEBUG nova.network.os_vif_util [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.919 2 DEBUG nova.network.os_vif_util [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.920 2 DEBUG os_vif [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20c8164c-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:57 np0005481065 nova_compute[260935]: 2025-10-11 08:52:57.928 2 INFO os_vif [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')#033[00m
Oct 11 04:52:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 286 op/s
Oct 11 04:52:57 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : haproxy version is 2.8.14-c23fe91
Oct 11 04:52:57 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : path to executable is /usr/sbin/haproxy
Oct 11 04:52:57 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [WARNING]  (309405) : Exiting Master process...
Oct 11 04:52:57 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [ALERT]    (309405) : Current worker (309415) exited with code 143 (Terminated)
Oct 11 04:52:57 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [WARNING]  (309405) : All workers exited. Exiting... (0)
Oct 11 04:52:57 np0005481065 systemd[1]: libpod-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58.scope: Deactivated successfully.
Oct 11 04:52:57 np0005481065 podman[309897]: 2025-10-11 08:52:57.95447963 +0000 UTC m=+0.065833443 container died d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:52:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58-userdata-shm.mount: Deactivated successfully.
Oct 11 04:52:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5d1665eb2b30cde48dd305e3696f1763d29e793e23b05e0d4ebb02f8a2d77485-merged.mount: Deactivated successfully.
Oct 11 04:52:58 np0005481065 podman[309897]: 2025-10-11 08:52:58.010754952 +0000 UTC m=+0.122108735 container cleanup d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:52:58 np0005481065 systemd[1]: libpod-conmon-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58.scope: Deactivated successfully.
Oct 11 04:52:58 np0005481065 podman[309969]: 2025-10-11 08:52:58.119524198 +0000 UTC m=+0.074502818 container remove d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.127 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[882171b8-bc39-4ba8-a09c-bc5df6e883d6]: (4, ('Sat Oct 11 08:52:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58)\nd4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58\nSat Oct 11 08:52:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58)\nd4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.130 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63f39671-a53a-4d41-b9eb-275d2db1446a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.131 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:58 np0005481065 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.157 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e4111b-7966-4960-958c-efecb1731641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.180930104 +0000 UTC m=+0.053382501 container create 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.182 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ada1423d-acd7-4553-b518-498e6d20d4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.184 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f094b42-a03c-4c36-b1c3-fbb58bd484e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad70094-af83-42b4-a512-cd9f8e48aa49]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461823, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310011, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.207 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:52:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.207 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[57f69635-09fb-4798-a131-b9c963c44678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:52:58 np0005481065 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 04:52:58 np0005481065 systemd[1]: Started libpod-conmon-488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde.scope.
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.162710049 +0000 UTC m=+0.035162446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:52:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.288154347 +0000 UTC m=+0.160606794 container init 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:52:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.295425162 +0000 UTC m=+0.167877589 container start 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.298977313 +0000 UTC m=+0.171429700 container attach 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:52:58 np0005481065 naughty_leavitt[310014]: 167 167
Oct 11 04:52:58 np0005481065 systemd[1]: libpod-488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde.scope: Deactivated successfully.
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.303358566 +0000 UTC m=+0.175810993 container died 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 04:52:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b7eb3d4325e4b87600d0c5dd9eaebd6c92406dd6d45425ca6d8f53c57d422218-merged.mount: Deactivated successfully.
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.343 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172763.340904, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.343 2 INFO nova.compute.manager [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:52:58 np0005481065 podman[309994]: 2025-10-11 08:52:58.346024823 +0000 UTC m=+0.218477210 container remove 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 04:52:58 np0005481065 systemd[1]: libpod-conmon-488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde.scope: Deactivated successfully.
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.364 2 DEBUG nova.compute.manager [None req-fc728c8f-b0bf-49a5-b224-4973b1970265 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.373 2 INFO nova.virt.libvirt.driver [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting instance files /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.373 2 INFO nova.virt.libvirt.driver [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deletion of /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del complete#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.439 2 INFO nova.compute.manager [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.439 2 DEBUG oslo.service.loopingcall [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.440 2 DEBUG nova.compute.manager [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.440 2 DEBUG nova.network.neutron [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:52:58 np0005481065 podman[310039]: 2025-10-11 08:52:58.573897317 +0000 UTC m=+0.069106425 container create 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:52:58 np0005481065 systemd[1]: Started libpod-conmon-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope.
Oct 11 04:52:58 np0005481065 podman[310039]: 2025-10-11 08:52:58.545649819 +0000 UTC m=+0.040858977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:52:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:52:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:52:58 np0005481065 podman[310039]: 2025-10-11 08:52:58.696879985 +0000 UTC m=+0.192089113 container init 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 04:52:58 np0005481065 podman[310039]: 2025-10-11 08:52:58.709459881 +0000 UTC m=+0.204668989 container start 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:52:58 np0005481065 podman[310039]: 2025-10-11 08:52:58.71506848 +0000 UTC m=+0.210277588 container attach 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.932 2 DEBUG nova.network.neutron [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:52:58 np0005481065 nova_compute[260935]: 2025-10-11 08:52:58.957 2 INFO nova.compute.manager [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 0.52 seconds to deallocate network for instance.#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.006 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.006 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.044 2 DEBUG oslo_concurrency.processutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:52:59 np0005481065 amazing_curran[310055]: {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:    "0": [
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:        {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "devices": [
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "/dev/loop3"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            ],
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_name": "ceph_lv0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_size": "21470642176",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "name": "ceph_lv0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "tags": {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cluster_name": "ceph",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.crush_device_class": "",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.encrypted": "0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osd_id": "0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.type": "block",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.vdo": "0"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            },
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "type": "block",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "vg_name": "ceph_vg0"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:        }
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:    ],
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:    "1": [
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:        {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "devices": [
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "/dev/loop4"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            ],
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_name": "ceph_lv1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_size": "21470642176",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "name": "ceph_lv1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "tags": {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cluster_name": "ceph",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.crush_device_class": "",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.encrypted": "0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osd_id": "1",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.type": "block",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.vdo": "0"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            },
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "type": "block",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "vg_name": "ceph_vg1"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:        }
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:    ],
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:    "2": [
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:        {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "devices": [
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "/dev/loop5"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            ],
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_name": "ceph_lv2",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_size": "21470642176",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "name": "ceph_lv2",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "tags": {
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.cluster_name": "ceph",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.crush_device_class": "",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.encrypted": "0",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osd_id": "2",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.type": "block",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:                "ceph.vdo": "0"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            },
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "type": "block",
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:            "vg_name": "ceph_vg2"
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:        }
Oct 11 04:52:59 np0005481065 amazing_curran[310055]:    ]
Oct 11 04:52:59 np0005481065 amazing_curran[310055]: }
Oct 11 04:52:59 np0005481065 systemd[1]: libpod-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope: Deactivated successfully.
Oct 11 04:52:59 np0005481065 conmon[310055]: conmon 67eafe431d091d218de9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope/container/memory.events
Oct 11 04:52:59 np0005481065 podman[310039]: 2025-10-11 08:52:59.512212103 +0000 UTC m=+1.007421181 container died 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:52:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:52:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415986618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:52:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0-merged.mount: Deactivated successfully.
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.549 2 DEBUG oslo_concurrency.processutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.556 2 DEBUG nova.compute.provider_tree [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:52:59 np0005481065 podman[310039]: 2025-10-11 08:52:59.576917953 +0000 UTC m=+1.072127021 container remove 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.582 2 DEBUG nova.scheduler.client.report [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:52:59 np0005481065 systemd[1]: libpod-conmon-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope: Deactivated successfully.
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.614 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.643 2 INFO nova.scheduler.client.report [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Deleted allocations for instance 98fabab3-6b4a-44f3-b232-f23f34f4e19f#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.701 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.798 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.798 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.799 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.799 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.799 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.800 2 WARNING nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.800 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.800 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.801 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.801 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.801 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.801 2 WARNING nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:52:59 np0005481065 nova_compute[260935]: 2025-10-11 08:52:59.855 2 DEBUG nova.compute.manager [req-7ce82171-9c3e-481e-8edf-4e7d96ed85c1 req-3cb2151b-ff51-4416-b902-dfdd98368a39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-deleted-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:52:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 268 op/s
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.479692643 +0000 UTC m=+0.061743817 container create 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:53:00 np0005481065 systemd[1]: Started libpod-conmon-9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b.scope.
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.461789147 +0000 UTC m=+0.043840341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.579064283 +0000 UTC m=+0.161115497 container init 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.590225799 +0000 UTC m=+0.172276993 container start 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.593710387 +0000 UTC m=+0.175761641 container attach 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:53:00 np0005481065 sweet_raman[310256]: 167 167
Oct 11 04:53:00 np0005481065 systemd[1]: libpod-9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b.scope: Deactivated successfully.
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.599724147 +0000 UTC m=+0.181775331 container died 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.615 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.618 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-afce9a37c0a06a95408e5848237e24f20a5711fcbb9d10314e4dc349130b4e74-merged.mount: Deactivated successfully.
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.643 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:53:00 np0005481065 podman[310240]: 2025-10-11 08:53:00.652559552 +0000 UTC m=+0.234610766 container remove 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:53:00 np0005481065 systemd[1]: libpod-conmon-9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b.scope: Deactivated successfully.
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.748 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.748 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.759 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:53:00 np0005481065 nova_compute[260935]: 2025-10-11 08:53:00.760 2 INFO nova.compute.claims [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:53:00 np0005481065 podman[310278]: 2025-10-11 08:53:00.872856732 +0000 UTC m=+0.068484838 container create f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:53:00 np0005481065 podman[310278]: 2025-10-11 08:53:00.846013753 +0000 UTC m=+0.041641909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:53:00 np0005481065 systemd[1]: Started libpod-conmon-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope.
Oct 11 04:53:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:01 np0005481065 podman[310278]: 2025-10-11 08:53:01.003323641 +0000 UTC m=+0.198951787 container init f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:53:01 np0005481065 podman[310278]: 2025-10-11 08:53:01.023898763 +0000 UTC m=+0.219526869 container start f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:53:01 np0005481065 podman[310278]: 2025-10-11 08:53:01.028971007 +0000 UTC m=+0.224599153 container attach f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.047 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978567880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.520 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.526 2 DEBUG nova.compute.provider_tree [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.544 2 DEBUG nova.scheduler.client.report [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.576 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.577 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:01.590 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:53:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:01.591 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.623 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.624 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.649 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.666 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.782 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.785 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.786 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating image(s)
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.819 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.860 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.899 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.904 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 238 op/s
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.953 2 DEBUG nova.policy [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a171de1f79843e0b048393cabfee77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9af27fad6b5a4783b66213343f27f0a1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.993 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.994 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.996 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:01 np0005481065 nova_compute[260935]: 2025-10-11 08:53:01.996 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.038 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.043 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]: {
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "osd_id": 2,
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "type": "bluestore"
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:    },
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "osd_id": 0,
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "type": "bluestore"
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:    },
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "osd_id": 1,
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:        "type": "bluestore"
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]:    }
Oct 11 04:53:02 np0005481065 quirky_hypatia[310295]: }
Oct 11 04:53:02 np0005481065 systemd[1]: libpod-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope: Deactivated successfully.
Oct 11 04:53:02 np0005481065 podman[310278]: 2025-10-11 08:53:02.134605804 +0000 UTC m=+1.330233910 container died f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:53:02 np0005481065 systemd[1]: libpod-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope: Consumed 1.108s CPU time.
Oct 11 04:53:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529-merged.mount: Deactivated successfully.
Oct 11 04:53:02 np0005481065 podman[310278]: 2025-10-11 08:53:02.196089413 +0000 UTC m=+1.391717479 container remove f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 04:53:02 np0005481065 systemd[1]: libpod-conmon-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope: Deactivated successfully.
Oct 11 04:53:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:53:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:53:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:53:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:53:02 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9975a922-4687-40f2-a837-b4207ad0f114 does not exist
Oct 11 04:53:02 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dd29e3f0-6553-446c-b39f-c25113221b55 does not exist
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.343 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.403 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.492 2 DEBUG nova.objects.instance [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.512 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.513 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Ensure instance console log exists: /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.513 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.513 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.514 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:02 np0005481065 nova_compute[260935]: 2025-10-11 08:53:02.983 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Successfully created port: 13bb6d15-e65c-4e29-b0f3-b7a5a830236d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:53:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:53:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:53:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:03 np0005481065 podman[310579]: 2025-10-11 08:53:03.310308394 +0000 UTC m=+0.104133845 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:53:03 np0005481065 nova_compute[260935]: 2025-10-11 08:53:03.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 291 op/s
Oct 11 04:53:04 np0005481065 nova_compute[260935]: 2025-10-11 08:53:04.316 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Successfully updated port: 13bb6d15-e65c-4e29-b0f3-b7a5a830236d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:53:04 np0005481065 nova_compute[260935]: 2025-10-11 08:53:04.336 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:53:04 np0005481065 nova_compute[260935]: 2025-10-11 08:53:04.336 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquired lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:53:04 np0005481065 nova_compute[260935]: 2025-10-11 08:53:04.337 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:53:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:53:04 np0005481065 nova_compute[260935]: 2025-10-11 08:53:04.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:04 np0005481065 nova_compute[260935]: 2025-10-11 08:53:04.820 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:53:05 np0005481065 nova_compute[260935]: 2025-10-11 08:53:05.340 2 DEBUG nova.compute.manager [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-changed-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:05 np0005481065 nova_compute[260935]: 2025-10-11 08:53:05.340 2 DEBUG nova.compute.manager [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Refreshing instance network info cache due to event network-changed-13bb6d15-e65c-4e29-b0f3-b7a5a830236d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:05 np0005481065 nova_compute[260935]: 2025-10-11 08:53:05.341 2 DEBUG oslo_concurrency.lockutils [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:05 np0005481065 nova_compute[260935]: 2025-10-11 08:53:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.241 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updating instance_info_cache with network_info: [{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.261 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Releasing lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.261 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance network_info: |[{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.262 2 DEBUG oslo_concurrency.lockutils [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.262 2 DEBUG nova.network.neutron [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Refreshing network info cache for port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.268 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start _get_guest_xml network_info=[{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.276 2 WARNING nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.287 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.287 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.291 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.292 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.292 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.292 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.293 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.293 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.293 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.295 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.295 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.295 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.299 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 04:53:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193996983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.745 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.775 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:06 np0005481065 nova_compute[260935]: 2025-10-11 08:53:06.779 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532255320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.219 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.222 2 DEBUG nova.virt.libvirt.vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfig
TestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:01Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.222 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.224 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.225 2 DEBUG nova.objects.instance [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.253 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <uuid>92be5b35-6b7a-4f95-924d-008348f27b42</uuid>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <name>instance-0000002b</name>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-118771075</nova:name>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:06</nova:creationTime>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <nova:port uuid="13bb6d15-e65c-4e29-b0f3-b7a5a830236d">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <entry name="serial">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <entry name="uuid">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk.config">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:5b:b5:4a"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <target dev="tap13bb6d15-e6"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log" append="off"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:07 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:07 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:07 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:07 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.254 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Preparing to wait for external event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.254 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.255 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.255 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.256 2 DEBUG nova.virt.libvirt.vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:01Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.256 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.257 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.257 2 DEBUG os_vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bb6d15-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13bb6d15-e6, col_values=(('external_ids', {'iface-id': '13bb6d15-e65c-4e29-b0f3-b7a5a830236d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:b5:4a', 'vm-uuid': '92be5b35-6b7a-4f95-924d-008348f27b42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:07 np0005481065 NetworkManager[44960]: <info>  [1760172787.2683] manager: (tap13bb6d15-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.279 2 INFO os_vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.350 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.350 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.350 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:5b:b5:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.351 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Using config drive#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.368 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:07.593 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.714 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.747 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.747 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.765 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 04:53:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.971 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating config drive at /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config#033[00m
Oct 11 04:53:07 np0005481065 nova_compute[260935]: 2025-10-11 08:53:07.980 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53mlddjp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.031 2 DEBUG nova.network.neutron [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updated VIF entry in instance network info cache for port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.033 2 DEBUG nova.network.neutron [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updating instance_info_cache with network_info: [{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.050 2 DEBUG oslo_concurrency.lockutils [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.143 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53mlddjp" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.179 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.183 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.265 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.266 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.283 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.300658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788300741, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2479, "num_deletes": 517, "total_data_size": 3280159, "memory_usage": 3341912, "flush_reason": "Manual Compaction"}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788333678, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3207618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27894, "largest_seqno": 30372, "table_properties": {"data_size": 3196929, "index_size": 6351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25793, "raw_average_key_size": 19, "raw_value_size": 3173114, "raw_average_value_size": 2452, "num_data_blocks": 278, "num_entries": 1294, "num_filter_entries": 1294, "num_deletions": 517, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172606, "oldest_key_time": 1760172606, "file_creation_time": 1760172788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 33073 microseconds, and 13209 cpu microseconds.
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.333739) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3207618 bytes OK
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.333769) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.337079) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.337105) EVENT_LOG_v1 {"time_micros": 1760172788337097, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.337129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3268589, prev total WAL file size 3268630, number of live WAL files 2.
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.339923) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3132KB)], [62(8448KB)]
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788339991, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11858756, "oldest_snapshot_seqno": -1}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5457 keys, 10129590 bytes, temperature: kUnknown
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788429407, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10129590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10089267, "index_size": 25557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 136910, "raw_average_key_size": 25, "raw_value_size": 9987205, "raw_average_value_size": 1830, "num_data_blocks": 1047, "num_entries": 5457, "num_filter_entries": 5457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.430324) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10129590 bytes
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.432850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.1 rd, 112.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6502, records dropped: 1045 output_compression: NoCompression
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.432889) EVENT_LOG_v1 {"time_micros": 1760172788432871, "job": 34, "event": "compaction_finished", "compaction_time_micros": 89778, "compaction_time_cpu_micros": 48306, "output_level": 6, "num_output_files": 1, "total_output_size": 10129590, "num_input_records": 6502, "num_output_records": 5457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.433 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.434 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788435425, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788439564, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.339703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:08 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.444 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.445 2 INFO nova.compute.claims [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.489 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.490 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting local config drive /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config because it was imported into RBD.
Oct 11 04:53:08 np0005481065 kernel: tap13bb6d15-e6: entered promiscuous mode
Oct 11 04:53:08 np0005481065 NetworkManager[44960]: <info>  [1760172788.5855] manager: (tap13bb6d15-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Oct 11 04:53:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:08Z|00369|binding|INFO|Claiming lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d for this chassis.
Oct 11 04:53:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:08Z|00370|binding|INFO|13bb6d15-e65c-4e29-b0f3-b7a5a830236d: Claiming fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.608 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.610 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.614 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.631 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.634 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52903d81-9db3-4b0b-8fa8-5756ec9f3c51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.635 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.639 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8f81fb-2b2f-4a65-b8e6-2662cf1a5de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.640 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf53cc1-8bd0-41a4-b204-d46b1ae4232d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.662 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef8a986-a6d7-47b8-88d0-58e077d3d69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 systemd-machined[215705]: New machine qemu-49-instance-0000002b.
Oct 11 04:53:08 np0005481065 systemd-udevd[310756]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:08 np0005481065 systemd[1]: Started Virtual Machine qemu-49-instance-0000002b.
Oct 11 04:53:08 np0005481065 NetworkManager[44960]: <info>  [1760172788.7014] device (tap13bb6d15-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:08 np0005481065 NetworkManager[44960]: <info>  [1760172788.7023] device (tap13bb6d15-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6fc483-8ee3-42ba-b2c5-7541c242f4ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:08Z|00371|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d ovn-installed in OVS
Oct 11 04:53:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:08Z|00372|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d up in Southbound
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:08 np0005481065 nova_compute[260935]: 2025-10-11 08:53:08.720 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:53:08 np0005481065 podman[310733]: 2025-10-11 08:53:08.738921065 +0000 UTC m=+0.110941729 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.747 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3734bb4-0998-407f-a11d-e59cc0275325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.755 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[916896fb-7ade-4b3d-bd42-444ebaf5098d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 NetworkManager[44960]: <info>  [1760172788.7562] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Oct 11 04:53:08 np0005481065 systemd-udevd[310763]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.797 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5e5172-fcfe-4cb9-88c5-2f390d29d2e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.799 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a42293ce-0299-4be3-9f07-2cf32ac51e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 podman[310734]: 2025-10-11 08:53:08.809024427 +0000 UTC m=+0.164595415 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:53:08 np0005481065 NetworkManager[44960]: <info>  [1760172788.8415] device (tape5d4fc7a-10): carrier: link connected
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.847 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[82a40946-c684-4e57-9456-ee517aee032b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.872 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0074c37-12da-4699-b3d1-079bcce35bd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463354, 'reachable_time': 25190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310833, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[655ce9ac-0966-4e33-acf3-4369f472a357]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463354, 'tstamp': 463354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310834, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.916 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dedd3a42-4fd8-4d70-a24c-d8d071b6de39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463354, 'reachable_time': 25190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310835, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75534780-5fb8-4b40-831b-5c607bfe571a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.025 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18ac4946-bf25-4532-8e2b-c57bd6b28fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.026 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.026 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.027 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:09 np0005481065 NetworkManager[44960]: <info>  [1760172789.0302] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct 11 04:53:09 np0005481065 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.033 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:09 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:09Z|00373|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.056 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0e16e4-10a8-40d2-a7c5-c87221106bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.059 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.059 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.065 2 DEBUG nova.compute.manager [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.065 2 DEBUG oslo_concurrency.lockutils [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.066 2 DEBUG oslo_concurrency.lockutils [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.066 2 DEBUG oslo_concurrency.lockutils [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.066 2 DEBUG nova.compute.manager [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Processing event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864788087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.152 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.159 2 DEBUG nova.compute.provider_tree [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.173 2 DEBUG nova.scheduler.client.report [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.190 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.191 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.229 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.229 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.256 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.270 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.349 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.350 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.351 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Creating image(s)#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.383 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.441 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.478 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.485 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:09 np0005481065 podman[310939]: 2025-10-11 08:53:09.536938573 +0000 UTC m=+0.083154573 container create 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.537 2 DEBUG nova.policy [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea1a78d0e9f549b580366a5b344f23f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '141fa83aa39e4cf6883c3f86fe0de7d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:53:09 np0005481065 systemd[1]: Started libpod-conmon-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b.scope.
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.591 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.592 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.593 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.594 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:09 np0005481065 podman[310939]: 2025-10-11 08:53:09.507297205 +0000 UTC m=+0.053513245 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b308db994383b8949a545fed9fa022caa751e57c9421e100b75cd184b4505b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:09 np0005481065 podman[310939]: 2025-10-11 08:53:09.628993566 +0000 UTC m=+0.175209576 container init 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 04:53:09 np0005481065 podman[310939]: 2025-10-11 08:53:09.638131785 +0000 UTC m=+0.184347785 container start 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.638 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.648 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 2c551e6f-adba-4963-a583-c5118e2be62a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:09 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : New worker (311004) forked
Oct 11 04:53:09 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : Loading success.
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.684 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172789.6052735, 92be5b35-6b7a-4f95-924d-008348f27b42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.685 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.690 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.692 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172774.3881469, 5750649d-960f-42d5-b127-de8b9a2bee8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.692 2 INFO nova.compute.manager [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.696 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.700 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance spawned successfully.#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.700 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.741 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.744 2 DEBUG nova.compute.manager [None req-b67761ac-2fd0-4cf9-85ca-60791b6f67d8 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.749 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.750 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.753 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.760 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.761 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.761 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.762 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.763 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.763 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.773 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.774 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172789.6054146, 92be5b35-6b7a-4f95-924d-008348f27b42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.774 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.798 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.804 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.805 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.805 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.805 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.806 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.857 2 INFO nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 8.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.859 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.861 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172789.7062812, 92be5b35-6b7a-4f95-924d-008348f27b42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.862 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.906 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.941 2 INFO nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 9.22 seconds to build instance.#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.969 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:09 np0005481065 nova_compute[260935]: 2025-10-11 08:53:09.976 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 2c551e6f-adba-4963-a583-c5118e2be62a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.064 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] resizing rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.188 2 DEBUG nova.objects.instance [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c551e6f-adba-4963-a583-c5118e2be62a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.206 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.206 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Ensure instance console log exists: /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.207 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.208 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.208 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3780399583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.287 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.424 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Successfully created port: 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.644 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.647 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4182MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.648 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.648 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.746 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 92be5b35-6b7a-4f95-924d-008348f27b42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 2c551e6f-adba-4963-a583-c5118e2be62a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:53:10 np0005481065 nova_compute[260935]: 2025-10-11 08:53:10.806 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.193 2 DEBUG nova.compute.manager [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.194 2 DEBUG oslo_concurrency.lockutils [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.194 2 DEBUG oslo_concurrency.lockutils [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.195 2 DEBUG oslo_concurrency.lockutils [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.195 2 DEBUG nova.compute.manager [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.195 2 WARNING nova.compute.manager [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state None.#033[00m
Oct 11 04:53:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181708531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.267 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Successfully updated port: 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.270 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.277 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.293 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.294 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquired lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.294 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.300 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.339 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.340 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.340 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.466 2 DEBUG nova.compute.manager [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-changed-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.467 2 DEBUG nova.compute.manager [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Refreshing instance network info cache due to event network-changed-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.467 2 DEBUG oslo_concurrency.lockutils [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:11 np0005481065 nova_compute[260935]: 2025-10-11 08:53:11.565 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:53:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.341 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updating instance_info_cache with network_info: [{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.413 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Releasing lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.413 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance network_info: |[{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.414 2 DEBUG oslo_concurrency.lockutils [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.415 2 DEBUG nova.network.neutron [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Refreshing network info cache for port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.420 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start _get_guest_xml network_info=[{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.426 2 WARNING nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.432 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.434 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.438 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.439 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.440 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.440 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.441 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.441 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.442 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.442 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.443 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.443 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.444 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.444 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.445 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.445 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.451 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.894 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172777.892748, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.895 2 INFO nova.compute.manager [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:53:12 np0005481065 nova_compute[260935]: 2025-10-11 08:53:12.932 2 DEBUG nova.compute.manager [None req-3608f9a2-9b35-47fe-a037-863e4ddd5669 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883858593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.031 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.070 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.075 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.299538) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793299662, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 294, "num_deletes": 250, "total_data_size": 74445, "memory_usage": 81056, "flush_reason": "Manual Compaction"}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793303373, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 73431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30373, "largest_seqno": 30666, "table_properties": {"data_size": 71503, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5578, "raw_average_key_size": 20, "raw_value_size": 67627, "raw_average_value_size": 245, "num_data_blocks": 7, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172788, "oldest_key_time": 1760172788, "file_creation_time": 1760172793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 3858 microseconds, and 1463 cpu microseconds.
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.303425) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 73431 bytes OK
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.303447) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305122) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305143) EVENT_LOG_v1 {"time_micros": 1760172793305136, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305166) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 72293, prev total WAL file size 72293, number of live WAL files 2.
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305914) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(71KB)], [65(9892KB)]
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793305989, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10203021, "oldest_snapshot_seqno": -1}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5224 keys, 6912359 bytes, temperature: kUnknown
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793362292, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 6912359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6878460, "index_size": 19709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132277, "raw_average_key_size": 25, "raw_value_size": 6785298, "raw_average_value_size": 1298, "num_data_blocks": 802, "num_entries": 5224, "num_filter_entries": 5224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.362639) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 6912359 bytes
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.363745) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.9 rd, 122.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.7 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(233.1) write-amplify(94.1) OK, records in: 5732, records dropped: 508 output_compression: NoCompression
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.363774) EVENT_LOG_v1 {"time_micros": 1760172793363762, "job": 36, "event": "compaction_finished", "compaction_time_micros": 56405, "compaction_time_cpu_micros": 37504, "output_level": 6, "num_output_files": 1, "total_output_size": 6912359, "num_input_records": 5732, "num_output_records": 5224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793364018, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793367101, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026540899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.518 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.523 2 DEBUG nova.virt.libvirt.vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-702852874',display_name='tempest-ImagesOneServerTestJSON-server-702852874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-702852874',id=44,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='141fa83aa39e4cf6883c3f86fe0de7d4',ramdisk_id='',reservation_id='r-0ygioays',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-896973684',owner_user_name='tempest-ImagesOneServerTestJ
SON-896973684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:09Z,user_data=None,user_id='ea1a78d0e9f549b580366a5b344f23f5',uuid=2c551e6f-adba-4963-a583-c5118e2be62a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.525 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converting VIF {"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.527 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.530 2 DEBUG nova.objects.instance [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c551e6f-adba-4963-a583-c5118e2be62a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.563 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <uuid>2c551e6f-adba-4963-a583-c5118e2be62a</uuid>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <name>instance-0000002c</name>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesOneServerTestJSON-server-702852874</nova:name>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:12</nova:creationTime>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:user uuid="ea1a78d0e9f549b580366a5b344f23f5">tempest-ImagesOneServerTestJSON-896973684-project-member</nova:user>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:project uuid="141fa83aa39e4cf6883c3f86fe0de7d4">tempest-ImagesOneServerTestJSON-896973684</nova:project>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <nova:port uuid="29b8f5a1-6e7c-42c3-9876-cc4cc9942b76">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <entry name="serial">2c551e6f-adba-4963-a583-c5118e2be62a</entry>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <entry name="uuid">2c551e6f-adba-4963-a583-c5118e2be62a</entry>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk.config">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:d0:47:df"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <target dev="tap29b8f5a1-6e"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/console.log" append="off"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:13 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:13 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:13 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:13 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.576 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Preparing to wait for external event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.577 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.577 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.578 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.579 2 DEBUG nova.virt.libvirt.vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-702852874',display_name='tempest-ImagesOneServerTestJSON-server-702852874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-702852874',id=44,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='141fa83aa39e4cf6883c3f86fe0de7d4',ramdisk_id='',reservation_id='r-0ygioays',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-896973684',owner_user_name='tempest-ImagesOneServerTestJSON-896973684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:09Z,user_data=None,user_id='ea1a78d0e9f549b580366a5b344f23f5',uuid=2c551e6f-adba-4963-a583-c5118e2be62a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.580 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converting VIF {"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.581 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.581 2 DEBUG os_vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29b8f5a1-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29b8f5a1-6e, col_values=(('external_ids', {'iface-id': '29b8f5a1-6e7c-42c3-9876-cc4cc9942b76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:47:df', 'vm-uuid': '2c551e6f-adba-4963-a583-c5118e2be62a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:13 np0005481065 NetworkManager[44960]: <info>  [1760172793.5987] manager: (tap29b8f5a1-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.609 2 INFO os_vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e')#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.690 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.691 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.692 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No VIF found with MAC fa:16:3e:d0:47:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.693 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Using config drive#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.730 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.955 2 DEBUG nova.network.neutron [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updated VIF entry in instance network info cache for port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.955 2 DEBUG nova.network.neutron [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updating instance_info_cache with network_info: [{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:13 np0005481065 nova_compute[260935]: 2025-10-11 08:53:13.978 2 DEBUG oslo_concurrency.lockutils [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:14 np0005481065 nova_compute[260935]: 2025-10-11 08:53:14.701 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Creating config drive at /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config#033[00m
Oct 11 04:53:14 np0005481065 nova_compute[260935]: 2025-10-11 08:53:14.711 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpncvrozx1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:14 np0005481065 nova_compute[260935]: 2025-10-11 08:53:14.869 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpncvrozx1" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:14 np0005481065 nova_compute[260935]: 2025-10-11 08:53:14.912 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:14 np0005481065 nova_compute[260935]: 2025-10-11 08:53:14.918 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.113 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.114 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deleting local config drive /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:15 np0005481065 kernel: tap29b8f5a1-6e: entered promiscuous mode
Oct 11 04:53:15 np0005481065 NetworkManager[44960]: <info>  [1760172795.1853] manager: (tap29b8f5a1-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct 11 04:53:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:15Z|00374|binding|INFO|Claiming lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for this chassis.
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.188 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:15Z|00375|binding|INFO|29b8f5a1-6e7c-42c3-9876-cc4cc9942b76: Claiming fa:16:3e:d0:47:df 10.100.0.14
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.202 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:47:df 10.100.0.14'], port_security=['fa:16:3e:d0:47:df 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c551e6f-adba-4963-a583-c5118e2be62a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb048a28-2e76-4170-b83c-10a20efb7841', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '141fa83aa39e4cf6883c3f86fe0de7d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82265068-f2dc-451b-ac76-cae1a0de925c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74c9fb34-71dd-4115-aee6-37c651590f11, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.203 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 in datapath eb048a28-2e76-4170-b83c-10a20efb7841 bound to our chassis#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.205 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb048a28-2e76-4170-b83c-10a20efb7841#033[00m
Oct 11 04:53:15 np0005481065 systemd-udevd[311280]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[845d325c-fab6-4c9c-bd40-8244b1a7c7c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.221 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb048a28-21 in ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.223 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb048a28-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a26d3f-91ee-45ae-b608-1c223547811a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.224 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[116b0e9c-3db9-4602-b52b-f4541302c31a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 NetworkManager[44960]: <info>  [1760172795.2330] device (tap29b8f5a1-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:15 np0005481065 NetworkManager[44960]: <info>  [1760172795.2355] device (tap29b8f5a1-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:15 np0005481065 systemd-machined[215705]: New machine qemu-50-instance-0000002c.
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.244 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[99cde9bd-3d6b-4063-9b3a-a59b4128b4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 systemd[1]: Started Virtual Machine qemu-50-instance-0000002c.
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[988ff2c0-b9ca-400c-b879-d44dfc5b10b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:15Z|00376|binding|INFO|Setting lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 ovn-installed in OVS
Oct 11 04:53:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:15Z|00377|binding|INFO|Setting lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 up in Southbound
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.315 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c18fa117-fef0-4512-a3ce-705bb22297c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.321 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4425fb80-0c83-4b3c-a4d7-df963b0152c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 systemd-udevd[311286]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:15 np0005481065 NetworkManager[44960]: <info>  [1760172795.3237] manager: (tapeb048a28-20): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.366 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[67233180-62bf-4295-a7c0-a45204d59483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.368 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cfbab1-8228-49b7-b559-416311d3e3a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 NetworkManager[44960]: <info>  [1760172795.4067] device (tapeb048a28-20): carrier: link connected
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.416 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a33cf4d3-7969-4294-bc6d-7a2f3a30657a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.441 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d61b32-ba92-43ae-a0f6-c0306543850b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb048a28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:90:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464010, 'reachable_time': 23977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311323, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.463 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6beccc49-20c5-4327-b15f-447a88e7fc00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:90b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464010, 'tstamp': 464010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311333, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.491 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36de97b8-3c1f-4b42-b5b0-b3b467ca6bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb048a28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:90:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464010, 'reachable_time': 23977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311343, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.537 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb84f1b-f251-4d50-8cd2-db052c07c338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.624 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17310434-bdce-4104-a26c-e3081919a4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.626 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb048a28-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.626 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.627 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb048a28-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:15 np0005481065 NetworkManager[44960]: <info>  [1760172795.6299] manager: (tapeb048a28-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:15 np0005481065 kernel: tapeb048a28-20: entered promiscuous mode
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.633 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb048a28-20, col_values=(('external_ids', {'iface-id': '7efe2784-feff-4b2a-b250-f9f249333e6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:15 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:15Z|00378|binding|INFO|Releasing lport 7efe2784-feff-4b2a-b250-f9f249333e6d from this chassis (sb_readonly=0)
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.636 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb048a28-2e76-4170-b83c-10a20efb7841.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb048a28-2e76-4170-b83c-10a20efb7841.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6369eb23-08de-488b-b800-3ad2fa07368b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.638 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-eb048a28-2e76-4170-b83c-10a20efb7841
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/eb048a28-2e76-4170-b83c-10a20efb7841.pid.haproxy
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID eb048a28-2e76-4170-b83c-10a20efb7841
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.639 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'env', 'PROCESS_TAG=haproxy-eb048a28-2e76-4170-b83c-10a20efb7841', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb048a28-2e76-4170-b83c-10a20efb7841.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:15 np0005481065 nova_compute[260935]: 2025-10-11 08:53:15.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.059 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172796.0580928, 2c551e6f-adba-4963-a583-c5118e2be62a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.061 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.080 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.086 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172796.0584378, 2c551e6f-adba-4963-a583-c5118e2be62a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.087 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.113 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:16 np0005481065 podman[311392]: 2025-10-11 08:53:16.116252316 +0000 UTC m=+0.092868777 container create b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.121 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.142 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:16 np0005481065 systemd[1]: Started libpod-conmon-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad.scope.
Oct 11 04:53:16 np0005481065 podman[311392]: 2025-10-11 08:53:16.075446782 +0000 UTC m=+0.052063273 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a9fa1c571dca2050c3743ed1af84045936d49108271115daa852b0e59f97447/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:16 np0005481065 podman[311392]: 2025-10-11 08:53:16.226787112 +0000 UTC m=+0.203403603 container init b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 04:53:16 np0005481065 podman[311392]: 2025-10-11 08:53:16.236580419 +0000 UTC m=+0.213196880 container start b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:53:16 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : New worker (311413) forked
Oct 11 04:53:16 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : Loading success.
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.692 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.693 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.720 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.729 2 INFO nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Rebuilding instance
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.747 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.748 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.779 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.835 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.836 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.843 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.844 2 INFO nova.compute.claims [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:16 np0005481065 nova_compute[260935]: 2025-10-11 08:53:16.865 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.076 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.095 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.097 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.203 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.215 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.227 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.238 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.250 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.257 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.596 2 DEBUG nova.compute.manager [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.597 2 DEBUG oslo_concurrency.lockutils [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.598 2 DEBUG oslo_concurrency.lockutils [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.598 2 DEBUG oslo_concurrency.lockutils [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.599 2 DEBUG nova.compute.manager [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Processing event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.600 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.618 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172797.617573, 2c551e6f-adba-4963-a583-c5118e2be62a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.619 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Resumed (Lifecycle Event)
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.622 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.643 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.644 2 INFO nova.virt.libvirt.driver [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance spawned successfully.
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.645 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.665 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:53:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042650142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.682 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.682 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.683 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.683 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.684 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.685 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.693 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.699 2 DEBUG nova.compute.provider_tree [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.710 2 DEBUG nova.scheduler.client.report [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.734 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.735 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.738 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.740 2 INFO nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 8.39 seconds to spawn the instance on the hypervisor.
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.740 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.749 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.749 2 INFO nova.compute.claims [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.822 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.823 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.842 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.850 2 INFO nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 9.52 seconds to build instance.
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.869 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.877 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.959 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.960 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.960 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Creating image(s)
Oct 11 04:53:17 np0005481065 nova_compute[260935]: 2025-10-11 08:53:17.983 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.007 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.030 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.034 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.073 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.134 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.140 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.141 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.142 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.174 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.179 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 11 04:53:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 11 04:53:18 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.509 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061802871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.617 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.653 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.663 2 DEBUG nova.compute.provider_tree [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.684 2 DEBUG nova.scheduler.client.report [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.733 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.734 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.751 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid e3289c21-dd0f-43aa-9d39-3aff16eff5cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.766 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.769 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Ensure instance console log exists: /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.770 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.770 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.771 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.791 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.792 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.819 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.850 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.967 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.968 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.969 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Creating image(s)
Oct 11 04:53:18 np0005481065 nova_compute[260935]: 2025-10-11 08:53:18.989 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.016 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.048 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.052 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.099 2 DEBUG nova.policy [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.102 2 DEBUG nova.policy [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.142 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.143 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.144 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.144 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.167 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.174 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.489 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.561 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.689 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid dd2f9164-cc85-46e5-9ac5-2847421fe9fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.710 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.710 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Ensure instance console log exists: /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.711 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.711 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:19 np0005481065 nova_compute[260935]: 2025-10-11 08:53:19.712 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.146 2 DEBUG nova.compute.manager [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.150 2 DEBUG oslo_concurrency.lockutils [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.150 2 DEBUG oslo_concurrency.lockutils [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.151 2 DEBUG oslo_concurrency.lockutils [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.151 2 DEBUG nova.compute.manager [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] No waiting events found dispatching network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.152 2 WARNING nova.compute.manager [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received unexpected event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for instance with vm_state active and task_state None.
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.769 2 DEBUG nova.compute.manager [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.852 2 INFO nova.compute.manager [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] instance snapshotting
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.914 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Successfully created port: 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:53:20 np0005481065 nova_compute[260935]: 2025-10-11 08:53:20.973 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Successfully created port: 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:53:21 np0005481065 nova_compute[260935]: 2025-10-11 08:53:21.213 2 INFO nova.virt.libvirt.driver [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Beginning live snapshot process
Oct 11 04:53:21 np0005481065 nova_compute[260935]: 2025-10-11 08:53:21.379 2 DEBUG nova.virt.libvirt.imagebackend [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 04:53:21 np0005481065 nova_compute[260935]: 2025-10-11 08:53:21.608 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(f14c129f75a249f19f2e682befcf2145) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:53:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 04:53:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 11 04:53:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 11 04:53:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 11 04:53:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:22Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 04:53:22 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:22Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 04:53:22 np0005481065 nova_compute[260935]: 2025-10-11 08:53:22.486 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] cloning vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk@f14c129f75a249f19f2e682befcf2145 to images/61bf0bdb-0d02-4888-bd1b-e94e583bf619 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 04:53:22 np0005481065 nova_compute[260935]: 2025-10-11 08:53:22.611 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] flattening images/61bf0bdb-0d02-4888-bd1b-e94e583bf619 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 04:53:22 np0005481065 nova_compute[260935]: 2025-10-11 08:53:22.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:22 np0005481065 nova_compute[260935]: 2025-10-11 08:53:22.879 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] removing snapshot(f14c129f75a249f19f2e682befcf2145) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 04:53:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 11 04:53:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 11 04:53:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.485 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(snap) on rbd image(61bf0bdb-0d02-4888-bd1b-e94e583bf619) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.749 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Successfully updated port: 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.780 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.780 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.781 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.829 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Successfully updated port: 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.850 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.851 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.851 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.934 2 DEBUG nova.compute.manager [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-changed-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.936 2 DEBUG nova.compute.manager [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Refreshing instance network info cache due to event network-changed-69a28fb2-3a8f-49be-9b93-8dc3db323fd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:53:23 np0005481065 nova_compute[260935]: 2025-10-11 08:53:23.936 2 DEBUG oslo_concurrency.lockutils [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:53:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 281 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 13 MiB/s wr, 409 op/s
Oct 11 04:53:24 np0005481065 nova_compute[260935]: 2025-10-11 08:53:24.019 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:53:24 np0005481065 nova_compute[260935]: 2025-10-11 08:53:24.058 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:53:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 11 04:53:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 11 04:53:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 11 04:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.577 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updating instance_info_cache with network_info: [{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.598 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.599 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance network_info: |[{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.603 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start _get_guest_xml network_info=[{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.608 2 WARNING nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.614 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.615 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.619 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.620 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.620 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.621 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.622 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.622 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.623 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.623 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.623 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.624 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.624 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.625 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.626 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.626 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.631 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.679 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updating instance_info_cache with network_info: [{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.716 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.717 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance network_info: |[{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.718 2 DEBUG oslo_concurrency.lockutils [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.719 2 DEBUG nova.network.neutron [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Refreshing network info cache for port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.725 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start _get_guest_xml network_info=[{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.734 2 WARNING nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.740 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.741 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.747 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.748 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.748 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.749 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.749 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:25 np0005481065 nova_compute[260935]: 2025-10-11 08:53:25.756 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:25 np0005481065 podman[311940]: 2025-10-11 08:53:25.78117731 +0000 UTC m=+0.084471360 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:53:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 281 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 13 MiB/s wr, 409 op/s
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.030 2 DEBUG nova.compute.manager [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-changed-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.031 2 DEBUG nova.compute.manager [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Refreshing instance network info cache due to event network-changed-35cdb65d-4c85-4645-9bdf-3ef8f40f9596. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.032 2 DEBUG oslo_concurrency.lockutils [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.032 2 DEBUG oslo_concurrency.lockutils [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.032 2 DEBUG nova.network.neutron [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Refreshing network info cache for port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017531465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.140 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.176 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.181 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973492518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.252 2 INFO nova.virt.libvirt.driver [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Snapshot image upload complete#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.253 2 INFO nova.compute.manager [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 5.40 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.257 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.287 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.293 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825526755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.644 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.646 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-2',id=46,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCr
eateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:18Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=dd2f9164-cc85-46e5-9ac5-2847421fe9fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.647 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.648 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.650 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd2f9164-cc85-46e5-9ac5-2847421fe9fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.670 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <uuid>dd2f9164-cc85-46e5-9ac5-2847421fe9fc</uuid>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <name>instance-0000002e</name>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:name>tempest-tempest.common.compute-instance-1151635572-2</nova:name>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:25</nova:creationTime>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:port uuid="35cdb65d-4c85-4645-9bdf-3ef8f40f9596">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="serial">dd2f9164-cc85-46e5-9ac5-2847421fe9fc</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="uuid">dd2f9164-cc85-46e5-9ac5-2847421fe9fc</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:37:a9:70"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <target dev="tap35cdb65d-4c"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/console.log" append="off"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:26 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:26 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.671 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Preparing to wait for external event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.671 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.672 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.672 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.673 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-2',id=46,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-
MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:18Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=dd2f9164-cc85-46e5-9ac5-2847421fe9fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.674 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.675 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.676 2 DEBUG os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35cdb65d-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35cdb65d-4c, col_values=(('external_ids', {'iface-id': '35cdb65d-4c85-4645-9bdf-3ef8f40f9596', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:a9:70', 'vm-uuid': 'dd2f9164-cc85-46e5-9ac5-2847421fe9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 NetworkManager[44960]: <info>  [1760172806.6880] manager: (tap35cdb65d-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.698 2 INFO os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c')#033[00m
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93188165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.767 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.770 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-1',id=45,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:17Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=e3289c21-dd0f-43aa-9d39-3aff16eff5cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.770 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.771 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.772 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3289c21-dd0f-43aa-9d39-3aff16eff5cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.779 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.779 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.780 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:37:a9:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.780 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Using config drive#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.813 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.823 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <uuid>e3289c21-dd0f-43aa-9d39-3aff16eff5cd</uuid>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <name>instance-0000002d</name>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:name>tempest-tempest.common.compute-instance-1151635572-1</nova:name>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:25</nova:creationTime>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <nova:port uuid="69a28fb2-3a8f-49be-9b93-8dc3db323fd5">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="serial">e3289c21-dd0f-43aa-9d39-3aff16eff5cd</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="uuid">e3289c21-dd0f-43aa-9d39-3aff16eff5cd</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:93:4d:8d"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <target dev="tap69a28fb2-3a"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/console.log" append="off"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:26 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:26 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:26 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:26 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.823 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Preparing to wait for external event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.824 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.824 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.824 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.825 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-1',id=45,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:17Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=e3289c21-dd0f-43aa-9d39-3aff16eff5cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.826 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.827 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.828 2 DEBUG os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69a28fb2-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69a28fb2-3a, col_values=(('external_ids', {'iface-id': '69a28fb2-3a8f-49be-9b93-8dc3db323fd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:4d:8d', 'vm-uuid': 'e3289c21-dd0f-43aa-9d39-3aff16eff5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:26 np0005481065 NetworkManager[44960]: <info>  [1760172806.8389] manager: (tap69a28fb2-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.851 2 INFO os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a')#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.908 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.909 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.909 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:93:4d:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.910 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Using config drive#033[00m
Oct 11 04:53:26 np0005481065 nova_compute[260935]: 2025-10-11 08:53:26.945 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.356 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Creating config drive at /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.365 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr38qd_6m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.417 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.499 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Creating config drive at /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.504 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp460r16g7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.543 2 DEBUG nova.network.neutron [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updated VIF entry in instance network info cache for port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.544 2 DEBUG nova.network.neutron [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updating instance_info_cache with network_info: [{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.549 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr38qd_6m" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.579 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.583 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.638 2 DEBUG oslo_concurrency.lockutils [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.659 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp460r16g7" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.701 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.706 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.771 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.773 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deleting local config drive /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:27 np0005481065 NetworkManager[44960]: <info>  [1760172807.8315] manager: (tap35cdb65d-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Oct 11 04:53:27 np0005481065 kernel: tap35cdb65d-4c: entered promiscuous mode
Oct 11 04:53:27 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:27Z|00379|binding|INFO|Claiming lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for this chassis.
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:27 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:27Z|00380|binding|INFO|35cdb65d-4c85-4645-9bdf-3ef8f40f9596: Claiming fa:16:3e:37:a9:70 10.100.0.5
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.862 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:a9:70 10.100.0.5'], port_security=['fa:16:3e:37:a9:70 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dd2f9164-cc85-46e5-9ac5-2847421fe9fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=35cdb65d-4c85-4645-9bdf-3ef8f40f9596) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.864 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f bound to our chassis#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.868 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f#033[00m
Oct 11 04:53:27 np0005481065 systemd-udevd[312218]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.886 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecb5ce4-e5cb-47f6-ba21-c5050519a0ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.887 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaac3fe7a-b1 in ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.888 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaac3fe7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.888 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3879ed49-6a17-46bf-b9a3-22463208af77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.889 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f733addf-94ee-44e7-9b00-b1d11c0ece73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:27 np0005481065 systemd-machined[215705]: New machine qemu-51-instance-0000002e.
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.904 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c4880881-2a76-46b5-830c-b2e209e20e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:27 np0005481065 NetworkManager[44960]: <info>  [1760172807.9060] device (tap35cdb65d-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:27 np0005481065 NetworkManager[44960]: <info>  [1760172807.9068] device (tap35cdb65d-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:27 np0005481065 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6544bb64-0ccd-42ed-8823-ec3d0439b21e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 499 op/s
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.953 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.954 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deleting local config drive /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:27 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:27Z|00381|binding|INFO|Setting lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 ovn-installed in OVS
Oct 11 04:53:27 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:27Z|00382|binding|INFO|Setting lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 up in Southbound
Oct 11 04:53:27 np0005481065 nova_compute[260935]: 2025-10-11 08:53:27.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7141b691-a12f-4778-a135-6c7414437f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:27 np0005481065 systemd-udevd[312223]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:27 np0005481065 NetworkManager[44960]: <info>  [1760172807.9716] manager: (tapaac3fe7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/186)
Oct 11 04:53:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.970 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e541e18-25e5-477b-8f38-443e47ff1447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.007 2 DEBUG nova.network.neutron [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updated VIF entry in instance network info cache for port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.008 2 DEBUG nova.network.neutron [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updating instance_info_cache with network_info: [{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.009 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8562c81c-5582-47be-be52-09196c9267f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.013 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c869e5-b479-4745-805c-88a777e6d837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 NetworkManager[44960]: <info>  [1760172808.0322] manager: (tap69a28fb2-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Oct 11 04:53:28 np0005481065 kernel: tap69a28fb2-3a: entered promiscuous mode
Oct 11 04:53:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:28Z|00383|binding|INFO|Claiming lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 for this chassis.
Oct 11 04:53:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:28Z|00384|binding|INFO|69a28fb2-3a8f-49be-9b93-8dc3db323fd5: Claiming fa:16:3e:93:4d:8d 10.100.0.11
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.035 2 DEBUG oslo_concurrency.lockutils [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 NetworkManager[44960]: <info>  [1760172808.0363] device (tapaac3fe7a-b0): carrier: link connected
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.044 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd48173-22fa-406a-8cf4-57a96b236ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.048 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:4d:8d 10.100.0.11'], port_security=['fa:16:3e:93:4d:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3289c21-dd0f-43aa-9d39-3aff16eff5cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=69a28fb2-3a8f-49be-9b93-8dc3db323fd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:28 np0005481065 NetworkManager[44960]: <info>  [1760172808.0520] device (tap69a28fb2-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:28 np0005481065 NetworkManager[44960]: <info>  [1760172808.0534] device (tap69a28fb2-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:28Z|00385|binding|INFO|Setting lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 ovn-installed in OVS
Oct 11 04:53:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:28Z|00386|binding|INFO|Setting lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 up in Southbound
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee3c22a-7993-49a8-8f24-06e4ca3c9dc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312266, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 systemd-machined[215705]: New machine qemu-52-instance-0000002d.
Oct 11 04:53:28 np0005481065 systemd[1]: Started Virtual Machine qemu-52-instance-0000002d.
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.096 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90437ad9-4494-4c4a-8e21-425471dd666a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:a6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465273, 'tstamp': 465273}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312269, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.117 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1223a9d8-837e-4873-8164-07f497917dfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312270, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.162 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd660ab-09e7-42c8-98bd-18ca8b72153b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.252 2 DEBUG nova.compute.manager [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.254 2 DEBUG oslo_concurrency.lockutils [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.255 2 DEBUG oslo_concurrency.lockutils [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.256 2 DEBUG oslo_concurrency.lockutils [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.256 2 DEBUG nova.compute.manager [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Processing event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e8819015-80c4-420e-ac08-2b9676b37f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.266 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.267 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 NetworkManager[44960]: <info>  [1760172808.2701] manager: (tapaac3fe7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Oct 11 04:53:28 np0005481065 kernel: tapaac3fe7a-b0: entered promiscuous mode
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.281 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:28Z|00387|binding|INFO|Releasing lport debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66 from this chassis (sb_readonly=0)
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.288 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.289 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2184b6e3-a3fd-4394-954f-be81c546e1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.291 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.295 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'env', 'PROCESS_TAG=haproxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:28 np0005481065 nova_compute[260935]: 2025-10-11 08:53:28.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:28 np0005481065 podman[312348]: 2025-10-11 08:53:28.783395153 +0000 UTC m=+0.126357914 container create 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:53:28 np0005481065 podman[312348]: 2025-10-11 08:53:28.710177193 +0000 UTC m=+0.053139954 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:28 np0005481065 systemd[1]: Started libpod-conmon-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7.scope.
Oct 11 04:53:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865d12609cd49e2af502496fdc178165cf27c22f0add21517383ac764b2e1b6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:28 np0005481065 podman[312348]: 2025-10-11 08:53:28.886141379 +0000 UTC m=+0.229104160 container init 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:53:28 np0005481065 podman[312348]: 2025-10-11 08:53:28.897367197 +0000 UTC m=+0.240329948 container start 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:53:28 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : New worker (312410) forked
Oct 11 04:53:28 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : Loading success.
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.992 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis#033[00m
Oct 11 04:53:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.994 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.022 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d07f9b9-9db1-4fa6-8cd3-6955d9b53326]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.074 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb9ab5e-683c-499b-aba0-69d06f494001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.079 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ea61ba-498e-42a7-91a7-23ff317f425d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.118 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3df83cba-648c-4a90-951b-3fd605d7ca4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.136 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.136218, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.137 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43adf8d4-074c-4e2b-8cae-a92755bb8320]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312426, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.160 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56eac33b-f78b-45f3-ba31-fc63716a2e82]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465290, 'tstamp': 465290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312427, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465294, 'tstamp': 465294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312427, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.167 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.176 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.180 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.13974, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.181 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.249 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.253 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.289 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:29 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:29Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d0:47:df 10.100.0.14
Oct 11 04:53:29 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:29Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:47:df 10.100.0.14
Oct 11 04:53:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 11 04:53:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 11 04:53:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.524 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.5241818, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.525 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.527 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.531 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.534 2 INFO nova.virt.libvirt.driver [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance spawned successfully.#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.535 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.559 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.566 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.576 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.576 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.577 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.578 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.578 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.579 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.588 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.589 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.524286, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.589 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.623 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.628 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.5312035, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.628 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.635 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 10.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.635 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.645 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.662 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.691 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 12.84 seconds to build instance.#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.711 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:29 np0005481065 kernel: tap13bb6d15-e6 (unregistering): left promiscuous mode
Oct 11 04:53:29 np0005481065 NetworkManager[44960]: <info>  [1760172809.7253] device (tap13bb6d15-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:53:29 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:29Z|00388|binding|INFO|Releasing lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d from this chassis (sb_readonly=0)
Oct 11 04:53:29 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:29Z|00389|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d down in Southbound
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:29 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:29Z|00390|binding|INFO|Removing iface tap13bb6d15-e6 ovn-installed in OVS
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.746 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.747 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.748 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.749 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1f9e24-1ca1-4c72-b96c-5ffe1c293ab1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.750 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore#033[00m
Oct 11 04:53:29 np0005481065 nova_compute[260935]: 2025-10-11 08:53:29.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:29 np0005481065 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 11 04:53:29 np0005481065 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Consumed 13.410s CPU time.
Oct 11 04:53:29 np0005481065 systemd-machined[215705]: Machine qemu-49-instance-0000002b terminated.
Oct 11 04:53:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:30 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : haproxy version is 2.8.14-c23fe91
Oct 11 04:53:30 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : path to executable is /usr/sbin/haproxy
Oct 11 04:53:30 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [WARNING]  (311001) : Exiting Master process...
Oct 11 04:53:30 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [WARNING]  (311001) : Exiting Master process...
Oct 11 04:53:30 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [ALERT]    (311001) : Current worker (311004) exited with code 143 (Terminated)
Oct 11 04:53:30 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [WARNING]  (311001) : All workers exited. Exiting... (0)
Oct 11 04:53:30 np0005481065 systemd[1]: libpod-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b.scope: Deactivated successfully.
Oct 11 04:53:30 np0005481065 podman[312447]: 2025-10-11 08:53:30.023083162 +0000 UTC m=+0.103004104 container died 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:53:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b-userdata-shm.mount: Deactivated successfully.
Oct 11 04:53:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-32b308db994383b8949a545fed9fa022caa751e57c9421e100b75cd184b4505b-merged.mount: Deactivated successfully.
Oct 11 04:53:30 np0005481065 podman[312447]: 2025-10-11 08:53:30.090099147 +0000 UTC m=+0.170020089 container cleanup 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:53:30 np0005481065 systemd[1]: libpod-conmon-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b.scope: Deactivated successfully.
Oct 11 04:53:30 np0005481065 podman[312486]: 2025-10-11 08:53:30.21572288 +0000 UTC m=+0.069074054 container remove 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33a13dc0-a021-4aab-b8ef-0f1acb61d72c]: (4, ('Sat Oct 11 08:53:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b)\n8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b\nSat Oct 11 08:53:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b)\n8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae57bb0-beab-4f55-8a22-38f4577fdffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.231 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:30 np0005481065 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aaccb866-aa95-4e89-b073-9dd44aa0af7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.292 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[820a581f-9ca2-45e6-9a34-03fca2870dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.297 2 DEBUG nova.compute.manager [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.297 2 DEBUG oslo_concurrency.lockutils [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG oslo_concurrency.lockutils [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG oslo_concurrency.lockutils [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG nova.compute.manager [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Processing event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.299 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa8361d-30f9-4bb3-85ad-35b721bceb99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.303 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172810.3035688, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.304 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.306 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.310 2 INFO nova.virt.libvirt.driver [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance spawned successfully.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.310 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53476561-220c-4b22-b754-563a6bad0dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463344, 'reachable_time': 16016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312505, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.323 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:53:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.323 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ba5545-dcec-499b-a404-9ebe9721c3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.331 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.335 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.348 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.349 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.349 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.350 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.350 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.351 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.363 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.413 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 12.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.414 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.434 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.440 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance destroyed successfully.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.446 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance destroyed successfully.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.448 2 DEBUG nova.virt.libvirt.vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'}
,tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:16Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.448 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.450 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.450 2 DEBUG os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bb6d15-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.464 2 INFO os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.492 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 13.72 seconds to build instance.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.512 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.866 2 DEBUG nova.compute.manager [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.891 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.892 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.893 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.893 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.894 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] No waiting events found dispatching network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.894 2 WARNING nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received unexpected event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.895 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.895 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.896 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.896 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.896 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.897 2 WARNING nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state rebuilding.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.897 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.898 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.898 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.899 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.899 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.900 2 WARNING nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state rebuilding.#033[00m
Oct 11 04:53:30 np0005481065 nova_compute[260935]: 2025-10-11 08:53:30.936 2 INFO nova.compute.manager [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] instance snapshotting#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.018 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting instance files /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.019 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deletion of /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del complete#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.203 2 INFO nova.virt.libvirt.driver [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Beginning live snapshot process#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.382 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.383 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating image(s)#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.415 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.446 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.478 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.484 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.551 2 DEBUG nova.virt.libvirt.imagebackend [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.585 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.586 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.586 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.587 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.587 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.588 2 INFO nova.compute.manager [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Terminating instance#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.589 2 DEBUG nova.compute.manager [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.593 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.594 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.594 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.594 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.621 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.625 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:31 np0005481065 kernel: tap69a28fb2-3a (unregistering): left promiscuous mode
Oct 11 04:53:31 np0005481065 NetworkManager[44960]: <info>  [1760172811.7304] device (tap69a28fb2-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:53:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:31Z|00391|binding|INFO|Releasing lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 from this chassis (sb_readonly=0)
Oct 11 04:53:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:31Z|00392|binding|INFO|Setting lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 down in Southbound
Oct 11 04:53:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:31Z|00393|binding|INFO|Removing iface tap69a28fb2-3a ovn-installed in OVS
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.741 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.742 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.743 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.743 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.740 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:4d:8d 10.100.0.11'], port_security=['fa:16:3e:93:4d:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3289c21-dd0f-43aa-9d39-3aff16eff5cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=69a28fb2-3a8f-49be-9b93-8dc3db323fd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.742 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.744 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.744 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.746 2 INFO nova.compute.manager [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Terminating instance#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.747 2 DEBUG nova.compute.manager [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.771 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[99be63b1-6f29-4998-8223-fbdd0ffaa7bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:31 np0005481065 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 11 04:53:31 np0005481065 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002d.scope: Consumed 2.232s CPU time.
Oct 11 04:53:31 np0005481065 systemd-machined[215705]: Machine qemu-52-instance-0000002d terminated.
Oct 11 04:53:31 np0005481065 kernel: tap35cdb65d-4c (unregistering): left promiscuous mode
Oct 11 04:53:31 np0005481065 NetworkManager[44960]: <info>  [1760172811.8104] device (tap35cdb65d-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:53:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:31Z|00394|binding|INFO|Releasing lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 from this chassis (sb_readonly=0)
Oct 11 04:53:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:31Z|00395|binding|INFO|Setting lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 down in Southbound
Oct 11 04:53:31 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:31Z|00396|binding|INFO|Removing iface tap35cdb65d-4c ovn-installed in OVS
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.828 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:a9:70 10.100.0.5'], port_security=['fa:16:3e:37:a9:70 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dd2f9164-cc85-46e5-9ac5-2847421fe9fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=35cdb65d-4c85-4645-9bdf-3ef8f40f9596) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.829 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b66c6daf-a385-4296-9f44-d72214923a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.833 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[200ab8ef-dce5-45e0-a26f-13115fea9872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.839 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(d13f8dacb182414abf2529591a8646b0) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:53:31 np0005481065 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct 11 04:53:31 np0005481065 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 3.544s CPU time.
Oct 11 04:53:31 np0005481065 systemd-machined[215705]: Machine qemu-51-instance-0000002e terminated.
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.879 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[88b856a7-9a3c-47ff-a37d-7041d21a08be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.915 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[674ab7a2-f4da-4474-9149-f0de49f76db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312687, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.941 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e69aba8b-6fff-4c47-8bc5-1c34069145ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465290, 'tstamp': 465290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312698, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465294, 'tstamp': 465294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312698, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.943 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 67 op/s
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.954 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.954 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.956 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.958 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d77ac3d6-7dc7-40fa-9b06-c798cc906c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.960 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace which is not needed anymore#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.990 2 INFO nova.virt.libvirt.driver [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance destroyed successfully.#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.991 2 DEBUG nova.objects.instance [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid e3289c21-dd0f-43aa-9d39-3aff16eff5cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.994 2 INFO nova.virt.libvirt.driver [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance destroyed successfully.#033[00m
Oct 11 04:53:31 np0005481065 nova_compute[260935]: 2025-10-11 08:53:31.995 2 DEBUG nova.objects.instance [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid dd2f9164-cc85-46e5-9ac5-2847421fe9fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.010 2 DEBUG nova.virt.libvirt.vif [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-1',id=45,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:30Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=e3289c21-dd0f-43aa-9d39-3aff16eff5cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.011 2 DEBUG nova.network.os_vif_util [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.012 2 DEBUG nova.network.os_vif_util [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.012 2 DEBUG os_vif [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.015 2 DEBUG nova.virt.libvirt.vif [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-2',id=46,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-11T08:53:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:29Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=dd2f9164-cc85-46e5-9ac5-2847421fe9fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.016 2 DEBUG nova.network.os_vif_util [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.016 2 DEBUG nova.network.os_vif_util [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.017 2 DEBUG os_vif [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69a28fb2-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.023 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 11 04:53:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 11 04:53:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35cdb65d-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.070 2 INFO os_vif [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a')#033[00m
Oct 11 04:53:32 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : haproxy version is 2.8.14-c23fe91
Oct 11 04:53:32 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : path to executable is /usr/sbin/haproxy
Oct 11 04:53:32 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [WARNING]  (312404) : Exiting Master process...
Oct 11 04:53:32 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [WARNING]  (312404) : Exiting Master process...
Oct 11 04:53:32 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [ALERT]    (312404) : Current worker (312410) exited with code 143 (Terminated)
Oct 11 04:53:32 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [WARNING]  (312404) : All workers exited. Exiting... (0)
Oct 11 04:53:32 np0005481065 systemd[1]: libpod-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7.scope: Deactivated successfully.
Oct 11 04:53:32 np0005481065 podman[312748]: 2025-10-11 08:53:32.130117719 +0000 UTC m=+0.050131579 container died 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.132 2 INFO os_vif [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c')#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.162 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:53:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7-userdata-shm.mount: Deactivated successfully.
Oct 11 04:53:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-865d12609cd49e2af502496fdc178165cf27c22f0add21517383ac764b2e1b6a-merged.mount: Deactivated successfully.
Oct 11 04:53:32 np0005481065 podman[312748]: 2025-10-11 08:53:32.180995008 +0000 UTC m=+0.101008868 container cleanup 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 04:53:32 np0005481065 systemd[1]: libpod-conmon-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7.scope: Deactivated successfully.
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.224 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] cloning vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk@d13f8dacb182414abf2529591a8646b0 to images/d9333d4f-c539-4d70-aedc-258482a78d91 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:53:32 np0005481065 podman[312845]: 2025-10-11 08:53:32.259765345 +0000 UTC m=+0.052380802 container remove 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.273 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61e723c4-4a3f-45d8-a4cd-de9164846834]: (4, ('Sat Oct 11 08:53:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7)\n015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7\nSat Oct 11 08:53:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7)\n015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.275 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd9c8df-c3b0-42c8-b0c5-98016640a3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.276 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 kernel: tapaac3fe7a-b0: left promiscuous mode
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.312 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4305119-a813-4059-ae92-fae5c5f562b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.340 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7383fd71-0744-4a9d-9d4b-5733d9bf2e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7dca388b-300c-402a-86a5-313025cce5cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.357 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.358 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Ensure instance console log exists: /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.358 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.359 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.359 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.362 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start _get_guest_xml network_info=[{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.368 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fddf5d8c-a452-40d7-bcca-64f792d4f09a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465266, 'reachable_time': 30081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312921, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 systemd[1]: run-netns-ovnmeta\x2daac3fe7a\x2db6f5\x2d4809\x2d9f09\x2d04a3bbe54c8f.mount: Deactivated successfully.
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.373 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.374 2 WARNING nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 04:53:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.373 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d4be74a0-dba5-42df-ad47-474be2d8f835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.392 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] flattening images/d9333d4f-c539-4d70-aedc-258482a78d91 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.443 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.444 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.450 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.451 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.451 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.452 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.453 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.453 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.454 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.454 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.457 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.457 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.458 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.480 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.684 2 DEBUG nova.compute.manager [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.686 2 DEBUG oslo_concurrency.lockutils [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.686 2 DEBUG oslo_concurrency.lockutils [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.687 2 DEBUG oslo_concurrency.lockutils [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.687 2 DEBUG nova.compute.manager [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] No waiting events found dispatching network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.687 2 WARNING nova.compute.manager [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received unexpected event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.836 2 INFO nova.virt.libvirt.driver [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deleting instance files /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_del#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.837 2 INFO nova.virt.libvirt.driver [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deletion of /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_del complete#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.856 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] removing snapshot(d13f8dacb182414abf2529591a8646b0) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.889 2 INFO nova.virt.libvirt.driver [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deleting instance files /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_del#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.890 2 INFO nova.virt.libvirt.driver [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deletion of /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_del complete#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.913 2 INFO nova.compute.manager [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 1.32 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.913 2 DEBUG oslo.service.loopingcall [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.914 2 DEBUG nova.compute.manager [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.914 2 DEBUG nova.network.neutron [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.948 2 INFO nova.compute.manager [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.949 2 DEBUG oslo.service.loopingcall [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.949 2 DEBUG nova.compute.manager [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:53:32 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.950 2 DEBUG nova.network.neutron [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:53:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468391462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:32.999 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.028 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.033 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.114 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(snap) on rbd image(d9333d4f-c539-4d70-aedc-258482a78d91) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171843780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.499 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-unplugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.500 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.500 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.500 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] No waiting events found dispatching network-vif-unplugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-unplugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.502 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.502 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.502 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] No waiting events found dispatching network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.502 2 WARNING nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received unexpected event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.510 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.511 2 DEBUG nova.virt.libvirt.vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:31Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.511 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.512 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.514 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <uuid>92be5b35-6b7a-4f95-924d-008348f27b42</uuid>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <name>instance-0000002b</name>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-118771075</nova:name>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:32</nova:creationTime>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <nova:port uuid="13bb6d15-e65c-4e29-b0f3-b7a5a830236d">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <entry name="serial">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <entry name="uuid">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk.config">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:5b:b5:4a"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <target dev="tap13bb6d15-e6"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log" append="off"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:33 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:33 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:33 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:33 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Preparing to wait for external event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.516 2 DEBUG nova.virt.libvirt.vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:31Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.516 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.516 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.517 2 DEBUG os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bb6d15-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13bb6d15-e6, col_values=(('external_ids', {'iface-id': '13bb6d15-e65c-4e29-b0f3-b7a5a830236d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:b5:4a', 'vm-uuid': '92be5b35-6b7a-4f95-924d-008348f27b42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:33 np0005481065 NetworkManager[44960]: <info>  [1760172813.5229] manager: (tap13bb6d15-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.528 2 INFO os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.587 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.588 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.588 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:5b:b5:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.588 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Using config drive#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.606 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.627 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:33 np0005481065 podman[313040]: 2025-10-11 08:53:33.645577071 +0000 UTC m=+0.075349109 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.680 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'keypairs' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 208 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 16 MiB/s wr, 1.15k op/s
Oct 11 04:53:33 np0005481065 nova_compute[260935]: 2025-10-11 08:53:33.984 2 DEBUG nova.network.neutron [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.001 2 DEBUG nova.network.neutron [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.004 2 INFO nova.compute.manager [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 1.05 seconds to deallocate network for instance.#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.026 2 INFO nova.compute.manager [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 1.11 seconds to deallocate network for instance.#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.070 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.071 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.077 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.171 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating config drive at /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.178 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpijavio5b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.234 2 DEBUG oslo_concurrency.processutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.347 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpijavio5b" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.380 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.385 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.608 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.609 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting local config drive /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:34 np0005481065 kernel: tap13bb6d15-e6: entered promiscuous mode
Oct 11 04:53:34 np0005481065 systemd-udevd[312651]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.689 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.689 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:34 np0005481065 NetworkManager[44960]: <info>  [1760172814.6960] manager: (tap13bb6d15-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Oct 11 04:53:34 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:34Z|00397|binding|INFO|Claiming lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d for this chassis.
Oct 11 04:53:34 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:34Z|00398|binding|INFO|13bb6d15-e65c-4e29-b0f3-b7a5a830236d: Claiming fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.701 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.702 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.704 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd#033[00m
Oct 11 04:53:34 np0005481065 NetworkManager[44960]: <info>  [1760172814.7092] device (tap13bb6d15-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:34 np0005481065 NetworkManager[44960]: <info>  [1760172814.7102] device (tap13bb6d15-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.713 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.724 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92c9e708-3d8c-4321-a2ee-f8e425dc2b06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.725 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.727 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.728 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2aad4c2b-e227-4cf6-ac46-69345d22dfba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.728 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7103abf-7fb2-43e5-b264-cd7c4d2faf24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:34Z|00399|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d up in Southbound
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.744 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9b59e377-8fc9-4320-9754-2cb60a240eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:34Z|00400|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d ovn-installed in OVS
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.751 2 DEBUG nova.compute.manager [req-40b0ea35-a8de-40d5-9a38-fcf450d29e7d req-59ab1f4c-da03-4840-8ce6-33df76e937ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-vif-deleted-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:34 np0005481065 systemd-machined[215705]: New machine qemu-53-instance-0000002b.
Oct 11 04:53:34 np0005481065 systemd[1]: Started Virtual Machine qemu-53-instance-0000002b.
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.762 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5218679e-e420-4f4e-aba7-b68c72940dd8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295096361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.801 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.809 2 DEBUG oslo_concurrency.processutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.824 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3c89fa47-31b6-4455-931d-62b98b1f2b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a1e848-c59e-42c5-b5ae-e089e7e5cdf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.828 2 DEBUG nova.compute.provider_tree [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:34 np0005481065 NetworkManager[44960]: <info>  [1760172814.8349] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.846 2 DEBUG nova.scheduler.client.report [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:53:34 np0005481065 systemd-udevd[313169]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.869 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.872 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.884 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b199c1a4-58c1-4669-87f5-7b200fc010e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.890 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9d06f12e-bc7f-4851-9806-490a55fe6fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.901 2 INFO nova.scheduler.client.report [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance dd2f9164-cc85-46e5-9ac5-2847421fe9fc#033[00m
Oct 11 04:53:34 np0005481065 NetworkManager[44960]: <info>  [1760172814.9312] device (tape5d4fc7a-10): carrier: link connected
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.942 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[013b19be-7518-4db1-ac2c-0776a4619edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.967 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c37baa8-fb3f-4c47-be95-6fd7645f399c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465963, 'reachable_time': 16746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313188, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.968 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.989 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4f2365-f2bc-40ef-840c-4dd4f985a746]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465963, 'tstamp': 465963}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313196, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:34 np0005481065 nova_compute[260935]: 2025-10-11 08:53:34.993 2 DEBUG oslo_concurrency.processutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.015 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2120003d-ccf8-42ad-b0a2-fcd00aafa6d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465963, 'reachable_time': 16746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313208, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.038 2 INFO nova.virt.libvirt.driver [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Snapshot image upload complete#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.039 2 INFO nova.compute.manager [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 4.10 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.054 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b11d0c-86f4-449b-ad52-137a9d6b4ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.151 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[94283121-b6d8-4f53-a9f5-eda2c86a55f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.152 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:35 np0005481065 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 04:53:35 np0005481065 NetworkManager[44960]: <info>  [1760172815.1974] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.200 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:35Z|00401|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.204 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb1ff04-5c98-4f22-a868-c30b1161bc3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.205 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.206 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964509641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.433 2 DEBUG oslo_concurrency.processutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.444 2 DEBUG nova.compute.provider_tree [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.465 2 DEBUG nova.scheduler.client.report [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.506 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.511 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.520 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.520 2 INFO nova.compute.claims [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.562 2 INFO nova.scheduler.client.report [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance e3289c21-dd0f-43aa-9d39-3aff16eff5cd
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.630 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 92be5b35-6b7a-4f95-924d-008348f27b42 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.630 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172815.627102, 92be5b35-6b7a-4f95-924d-008348f27b42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.631 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Started (Lifecycle Event)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.646 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-deleted-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.646 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.647 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.647 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.648 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.648 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Processing event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.648 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.649 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.649 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.649 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.650 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.650 2 WARNING nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state rebuild_spawning.
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.653 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.656 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.658 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.668 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:53:35 np0005481065 podman[313286]: 2025-10-11 08:53:35.673076274 +0000 UTC m=+0.081621339 container create 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.675 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.681 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance spawned successfully.
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.682 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.699 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.699 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172815.6274183, 92be5b35-6b7a-4f95-924d-008348f27b42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.700 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Paused (Lifecycle Event)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.716 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.717 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.717 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 np0005481065 podman[313286]: 2025-10-11 08:53:35.628850343 +0000 UTC m=+0.037395458 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.717 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.718 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.719 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.728 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.732 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172815.6643114, 92be5b35-6b7a-4f95-924d-008348f27b42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.732 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Resumed (Lifecycle Event)
Oct 11 04:53:35 np0005481065 systemd[1]: Started libpod-conmon-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a.scope.
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.745 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac96d95cf133868deb35b7c5e5709e96b6a8d34762efff7b7d3e6627c387060/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.797 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.800 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.805 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:35 np0005481065 podman[313286]: 2025-10-11 08:53:35.81042778 +0000 UTC m=+0.218972895 container init 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:53:35 np0005481065 podman[313286]: 2025-10-11 08:53:35.822904236 +0000 UTC m=+0.231449301 container start 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.849 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 04:53:35 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : New worker (313309) forked
Oct 11 04:53:35 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : Loading success.
Oct 11 04:53:35 np0005481065 nova_compute[260935]: 2025-10-11 08:53:35.881 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 208 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 12 MiB/s wr, 851 op/s
Oct 11 04:53:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183429936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.312 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.320 2 DEBUG nova.compute.provider_tree [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.343 2 DEBUG nova.scheduler.client.report [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.374 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.375 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.380 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.381 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.460 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.460 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.495 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.502 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.528 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.777 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.780 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.781 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Creating image(s)
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.820 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.864 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.904 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:36 np0005481065 nova_compute[260935]: 2025-10-11 08:53:36.910 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.021 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.023 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.024 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.024 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.057 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.063 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.272 2 DEBUG nova.policy [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4dce65b39be45739408ca70d672df84', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.376 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.442 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] resizing rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:53:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362876690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:53:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:53:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362876690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.564 2 DEBUG nova.objects.instance [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'migration_context' on Instance uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.581 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.581 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Ensure instance console log exists: /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.582 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.583 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.584 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:37 np0005481065 nova_compute[260935]: 2025-10-11 08:53:37.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 19 MiB/s wr, 1.18k op/s
Oct 11 04:53:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 11 04:53:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 11 04:53:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 11 04:53:38 np0005481065 nova_compute[260935]: 2025-10-11 08:53:38.444 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Successfully created port: 5e003588-ef81-4e87-900f-734b2c4bad32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:53:38 np0005481065 nova_compute[260935]: 2025-10-11 08:53:38.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.051 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Successfully updated port: 5e003588-ef81-4e87-900f-734b2c4bad32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.065 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.065 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquired lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.065 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.078 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.078 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.079 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.079 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.080 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.082 2 INFO nova.compute.manager [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Terminating instance#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.084 2 DEBUG nova.compute.manager [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:53:39 np0005481065 kernel: tap13bb6d15-e6 (unregistering): left promiscuous mode
Oct 11 04:53:39 np0005481065 NetworkManager[44960]: <info>  [1760172819.1447] device (tap13bb6d15-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:53:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:39Z|00402|binding|INFO|Releasing lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d from this chassis (sb_readonly=0)
Oct 11 04:53:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:39Z|00403|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d down in Southbound
Oct 11 04:53:39 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:39Z|00404|binding|INFO|Removing iface tap13bb6d15-e6 ovn-installed in OVS
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.164 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.168 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.169 2 DEBUG nova.compute.manager [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-changed-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.170 2 DEBUG nova.compute.manager [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Refreshing instance network info cache due to event network-changed-5e003588-ef81-4e87-900f-734b2c4bad32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.170 2 DEBUG oslo_concurrency.lockutils [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.172 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33d49502-e71b-45f2-a30d-27576874575b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.175 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 11 04:53:39 np0005481065 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002b.scope: Consumed 4.270s CPU time.
Oct 11 04:53:39 np0005481065 systemd-machined[215705]: Machine qemu-53-instance-0000002b terminated.
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.231 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:53:39 np0005481065 podman[313505]: 2025-10-11 08:53:39.276582396 +0000 UTC m=+0.110082550 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 04:53:39 np0005481065 podman[313508]: 2025-10-11 08:53:39.313138308 +0000 UTC m=+0.131648325 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:53:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 11 04:53:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.330 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance destroyed successfully.#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.331 2 DEBUG nova.objects.instance [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.351 2 DEBUG nova.virt.libvirt.vif [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:36Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.351 2 DEBUG nova.network.os_vif_util [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.352 2 DEBUG nova.network.os_vif_util [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.353 2 DEBUG os_vif [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bb6d15-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.361 2 INFO os_vif [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')#033[00m
Oct 11 04:53:39 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : haproxy version is 2.8.14-c23fe91
Oct 11 04:53:39 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : path to executable is /usr/sbin/haproxy
Oct 11 04:53:39 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [WARNING]  (313307) : Exiting Master process...
Oct 11 04:53:39 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [WARNING]  (313307) : Exiting Master process...
Oct 11 04:53:39 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [ALERT]    (313307) : Current worker (313309) exited with code 143 (Terminated)
Oct 11 04:53:39 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [WARNING]  (313307) : All workers exited. Exiting... (0)
Oct 11 04:53:39 np0005481065 systemd[1]: libpod-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a.scope: Deactivated successfully.
Oct 11 04:53:39 np0005481065 podman[313569]: 2025-10-11 08:53:39.383964498 +0000 UTC m=+0.080741604 container died 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 04:53:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a-userdata-shm.mount: Deactivated successfully.
Oct 11 04:53:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1ac96d95cf133868deb35b7c5e5709e96b6a8d34762efff7b7d3e6627c387060-merged.mount: Deactivated successfully.
Oct 11 04:53:39 np0005481065 podman[313569]: 2025-10-11 08:53:39.427488709 +0000 UTC m=+0.124265805 container cleanup 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:53:39 np0005481065 systemd[1]: libpod-conmon-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a.scope: Deactivated successfully.
Oct 11 04:53:39 np0005481065 podman[313627]: 2025-10-11 08:53:39.52046693 +0000 UTC m=+0.061888976 container remove 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.529 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb176494-5e16-4c16-9e9e-e4f1f1871a89]: (4, ('Sat Oct 11 08:53:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a)\n93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a\nSat Oct 11 08:53:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a)\n93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.532 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dce67f11-2815-4ac9-b288-b5d51bba2100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd36affc-2f17-4db0-897b-39809b07317d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.601 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2ff319-106e-45b3-a4e3-d73de3099c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.603 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a04e21-0c93-42ca-aa43-b2a9c005fec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.622 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06e44ba7-adae-4c9f-898c-b805b762cdef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465951, 'reachable_time': 30669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313642, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.625 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:53:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.625 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d96c45cb-2bd3-4e54-bcbc-6dd136af525e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:39 np0005481065 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.853 2 INFO nova.virt.libvirt.driver [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting instance files /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.854 2 INFO nova.virt.libvirt.driver [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deletion of /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del complete#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.916 2 INFO nova.compute.manager [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.918 2 DEBUG oslo.service.loopingcall [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.919 2 DEBUG nova.compute.manager [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:53:39 np0005481065 nova_compute[260935]: 2025-10-11 08:53:39.919 2 DEBUG nova.network.neutron [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:53:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.129 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updating instance_info_cache with network_info: [{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.187 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Releasing lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.188 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance network_info: |[{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.188 2 DEBUG oslo_concurrency.lockutils [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.189 2 DEBUG nova.network.neutron [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Refreshing network info cache for port 5e003588-ef81-4e87-900f-734b2c4bad32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.194 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start _get_guest_xml network_info=[{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.201 2 WARNING nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.207 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.208 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.218 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.218 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.219 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.219 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.220 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.221 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.221 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.221 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.222 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.222 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.223 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.223 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.224 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.224 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.229 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.517 2 DEBUG nova.network.neutron [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.551 2 INFO nova.compute.manager [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 1.63 seconds to deallocate network for instance.#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.616 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.618 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2411837045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.749 2 DEBUG oslo_concurrency.processutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.806 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.811 2 DEBUG nova.compute.manager [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.811 2 DEBUG oslo_concurrency.lockutils [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.813 2 DEBUG oslo_concurrency.lockutils [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.814 2 DEBUG oslo_concurrency.lockutils [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.815 2 DEBUG nova.compute.manager [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.815 2 WARNING nova.compute.manager [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.848 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.855 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 244 op/s
Oct 11 04:53:41 np0005481065 nova_compute[260935]: 2025-10-11 08:53:41.959 2 DEBUG nova.compute.manager [req-7b4d5a9b-ebe0-4a64-862e-37dad2b2c3e0 req-dfa4c539-f64a-486e-97a3-d24e9e7d4e5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-deleted-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533724975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.192 2 DEBUG oslo_concurrency.processutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.199 2 DEBUG nova.compute.provider_tree [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.217 2 DEBUG nova.scheduler.client.report [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.239 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.274 2 INFO nova.scheduler.client.report [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Deleted allocations for instance 92be5b35-6b7a-4f95-924d-008348f27b42#033[00m
Oct 11 04:53:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705102083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.344 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.351 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.353 2 DEBUG nova.virt.libvirt.vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-454723019',display_name='tempest-ImagesOneServerNegativeTestJSON-server-454723019',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-454723019',id=47,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-tv2wxykz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:36Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.353 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.354 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.356 2 DEBUG nova.objects.instance [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.372 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <uuid>8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08</uuid>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <name>instance-0000002f</name>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-454723019</nova:name>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:41</nova:creationTime>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:user uuid="d4dce65b39be45739408ca70d672df84">tempest-ImagesOneServerNegativeTestJSON-253514738-project-member</nova:user>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:project uuid="b34d40b1586348c3be3d9142dfe1770d">tempest-ImagesOneServerNegativeTestJSON-253514738</nova:project>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <nova:port uuid="5e003588-ef81-4e87-900f-734b2c4bad32">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <entry name="serial">8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08</entry>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <entry name="uuid">8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08</entry>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:97:d1:ed"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <target dev="tap5e003588-ef"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/console.log" append="off"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:42 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:42 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:42 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:42 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.374 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Preparing to wait for external event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.375 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.375 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.376 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.378 2 DEBUG nova.virt.libvirt.vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-454723019',display_name='tempest-ImagesOneServerNegativeTestJSON-server-454723019',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-454723019',id=47,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-tv2wxykz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:36Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.378 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.380 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.380 2 DEBUG os_vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.400 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.401 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.408 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.409 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e003588-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.412 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e003588-ef, col_values=(('external_ids', {'iface-id': '5e003588-ef81-4e87-900f-734b2c4bad32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:d1:ed', 'vm-uuid': '8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:42 np0005481065 NetworkManager[44960]: <info>  [1760172822.4156] manager: (tap5e003588-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.425 2 INFO os_vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef')#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.476 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.481 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.482 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.483 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.483 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.484 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.486 2 INFO nova.compute.manager [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Terminating instance#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.488 2 DEBUG nova.compute.manager [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.499 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.507 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.508 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.525 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.526 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.526 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No VIF found with MAC fa:16:3e:97:d1:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.527 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Using config drive#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.561 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.568 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:53:42 np0005481065 kernel: tap29b8f5a1-6e (unregistering): left promiscuous mode
Oct 11 04:53:42 np0005481065 NetworkManager[44960]: <info>  [1760172822.5855] device (tap29b8f5a1-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:53:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:42Z|00405|binding|INFO|Releasing lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 from this chassis (sb_readonly=0)
Oct 11 04:53:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:42Z|00406|binding|INFO|Setting lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 down in Southbound
Oct 11 04:53:42 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:42Z|00407|binding|INFO|Removing iface tap29b8f5a1-6e ovn-installed in OVS
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.606 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.606 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.611 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.613 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:47:df 10.100.0.14'], port_security=['fa:16:3e:d0:47:df 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c551e6f-adba-4963-a583-c5118e2be62a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb048a28-2e76-4170-b83c-10a20efb7841', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '141fa83aa39e4cf6883c3f86fe0de7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82265068-f2dc-451b-ac76-cae1a0de925c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74c9fb34-71dd-4115-aee6-37c651590f11, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.615 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 in datapath eb048a28-2e76-4170-b83c-10a20efb7841 unbound from our chassis#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.616 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.616 2 INFO nova.compute.claims [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.617 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb048a28-2e76-4170-b83c-10a20efb7841, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.618 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5126f32-929b-4f5d-8c50-b0dc86981a4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.619 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 namespace which is not needed anymore#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.648 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:42 np0005481065 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 11 04:53:42 np0005481065 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Consumed 13.083s CPU time.
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 systemd-machined[215705]: Machine qemu-50-instance-0000002c terminated.
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.685 2 DEBUG nova.network.neutron [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updated VIF entry in instance network info cache for port 5e003588-ef81-4e87-900f-734b2c4bad32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.685 2 DEBUG nova.network.neutron [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updating instance_info_cache with network_info: [{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.709 2 DEBUG oslo_concurrency.lockutils [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.756 2 INFO nova.virt.libvirt.driver [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance destroyed successfully.#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.758 2 DEBUG nova.objects.instance [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lazy-loading 'resources' on Instance uuid 2c551e6f-adba-4963-a583-c5118e2be62a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.771 2 DEBUG nova.virt.libvirt.vif [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-702852874',display_name='tempest-ImagesOneServerTestJSON-server-702852874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-702852874',id=44,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='141fa83aa39e4cf6883c3f86fe0de7d4',ramdisk_id='',reservation_id='r-0ygioays',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-896973684',owner_user_name='tempest-ImagesOneServerTestJSON-896973684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:35Z,user_data=None,user_id='ea1a78d0e9f549b580366a5b344f23f5',uuid=2c551e6f-adba-4963-a583-c5118e2be62a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.771 2 DEBUG nova.network.os_vif_util [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converting VIF {"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.772 2 DEBUG nova.network.os_vif_util [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.772 2 DEBUG os_vif [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b8f5a1-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.783 2 INFO os_vif [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e')#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.810 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:42 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : haproxy version is 2.8.14-c23fe91
Oct 11 04:53:42 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : path to executable is /usr/sbin/haproxy
Oct 11 04:53:42 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [WARNING]  (311411) : Exiting Master process...
Oct 11 04:53:42 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [WARNING]  (311411) : Exiting Master process...
Oct 11 04:53:42 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [ALERT]    (311411) : Current worker (311413) exited with code 143 (Terminated)
Oct 11 04:53:42 np0005481065 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [WARNING]  (311411) : All workers exited. Exiting... (0)
Oct 11 04:53:42 np0005481065 systemd[1]: libpod-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad.scope: Deactivated successfully.
Oct 11 04:53:42 np0005481065 podman[313776]: 2025-10-11 08:53:42.833298623 +0000 UTC m=+0.082675308 container died b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:53:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad-userdata-shm.mount: Deactivated successfully.
Oct 11 04:53:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8a9fa1c571dca2050c3743ed1af84045936d49108271115daa852b0e59f97447-merged.mount: Deactivated successfully.
Oct 11 04:53:42 np0005481065 podman[313776]: 2025-10-11 08:53:42.880333575 +0000 UTC m=+0.129710250 container cleanup b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 04:53:42 np0005481065 systemd[1]: libpod-conmon-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad.scope: Deactivated successfully.
Oct 11 04:53:42 np0005481065 podman[313839]: 2025-10-11 08:53:42.971916976 +0000 UTC m=+0.050172582 container remove b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a22c16-3d52-4daa-add9-734fe1a69879]: (4, ('Sat Oct 11 08:53:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 (b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad)\nb936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad\nSat Oct 11 08:53:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 (b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad)\nb936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.982 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c965dfb9-db60-48af-856c-f2f2c4384b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.984 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb048a28-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:42 np0005481065 nova_compute[260935]: 2025-10-11 08:53:42.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:42 np0005481065 kernel: tapeb048a28-20: left promiscuous mode
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70d0920d-51a7-474f-aa11-263ec1021798]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.045 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Creating config drive at /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.053 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5b056b-b934-41cb-a831-219ff205b5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.055 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzk3vg3h5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.055 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[434d5fa8-07ee-45b3-ae0c-952ff2ec2751]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[931c29e1-d431-46b4-85e9-8d0121bdc04e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464001, 'reachable_time': 23734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313877, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.083 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.083 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[20aa0b79-4ab1-4721-80d2-9cdd6ce252a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 systemd[1]: run-netns-ovnmeta\x2deb048a28\x2d2e76\x2d4170\x2db83c\x2d10a20efb7841.mount: Deactivated successfully.
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.223 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzk3vg3h5" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.254 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.258 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.313 2 INFO nova.virt.libvirt.driver [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deleting instance files /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a_del#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.314 2 INFO nova.virt.libvirt.driver [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deletion of /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a_del complete#033[00m
Oct 11 04:53:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 11 04:53:43 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 11 04:53:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127732979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.362 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.364 2 INFO nova.compute.manager [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.364 2 DEBUG oslo.service.loopingcall [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.364 2 DEBUG nova.compute.manager [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.364 2 DEBUG nova.network.neutron [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.374 2 DEBUG nova.compute.provider_tree [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.391 2 DEBUG nova.scheduler.client.report [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.418 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.419 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.421 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.429 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.430 2 INFO nova.compute.claims [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.463 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.463 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deleting local config drive /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.488 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.489 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.517 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.539 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:53:43 np0005481065 kernel: tap5e003588-ef: entered promiscuous mode
Oct 11 04:53:43 np0005481065 systemd-udevd[313754]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:43 np0005481065 NetworkManager[44960]: <info>  [1760172823.5611] manager: (tap5e003588-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:43Z|00408|binding|INFO|Claiming lport 5e003588-ef81-4e87-900f-734b2c4bad32 for this chassis.
Oct 11 04:53:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:43Z|00409|binding|INFO|5e003588-ef81-4e87-900f-734b2c4bad32: Claiming fa:16:3e:97:d1:ed 10.100.0.8
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.580 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:d1:ed 10.100.0.8'], port_security=['fa:16:3e:97:d1:ed 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5e003588-ef81-4e87-900f-734b2c4bad32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.583 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5e003588-ef81-4e87-900f-734b2c4bad32 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf bound to our chassis#033[00m
Oct 11 04:53:43 np0005481065 NetworkManager[44960]: <info>  [1760172823.5841] device (tap5e003588-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:43 np0005481065 NetworkManager[44960]: <info>  [1760172823.5865] device (tap5e003588-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.586 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.607 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c94611-8c50-4b98-a9fa-56d2aa8c6e39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.609 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a65c6f-c1 in ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.611 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a65c6f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.612 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5416e1ca-1e9b-437f-8bfb-cbe643ce5d86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 systemd-machined[215705]: New machine qemu-54-instance-0000002f.
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.613 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[335e1b26-84cd-48b6-8651-18630e441207]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.633 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4805ad24-b2d0-4600-8a19-301dbd83b3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 systemd[1]: Started Virtual Machine qemu-54-instance-0000002f.
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.643 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.645 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.646 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Creating image(s)#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.662 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e894a4f-ee96-4f51-8394-fd396ceb2bdd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:43Z|00410|binding|INFO|Setting lport 5e003588-ef81-4e87-900f-734b2c4bad32 ovn-installed in OVS
Oct 11 04:53:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:43Z|00411|binding|INFO|Setting lport 5e003588-ef81-4e87-900f-734b2c4bad32 up in Southbound
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.703 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.719 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0dee435a-b72a-4a58-a599-37f4a7ea9f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e112b3a1-c5b9-41d1-a28f-66600b3fedc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 NetworkManager[44960]: <info>  [1760172823.7315] manager: (tapa1a65c6f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.783 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.785 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3985db-4176-4b92-a9d0-f4b5e73342f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.789 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5e2f7d-a779-4237-8a8f-4e00e7dd7ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.827 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:43 np0005481065 NetworkManager[44960]: <info>  [1760172823.8292] device (tapa1a65c6f-c0): carrier: link connected
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.839 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2c321b18-91bc-4bca-ae18-15f48e0a9905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.851 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.870 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1edcbd77-0680-4d33-8b49-94df7eed6c7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466853, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314019, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23efa336-8af4-4721-b2ab-9056b6e341e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:555'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466853, 'tstamp': 466853}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314021, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1de38658-04ca-41c7-a4ff-599823c3eb78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466853, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314022, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.934 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 167 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 4.7 KiB/s wr, 102 op/s
Oct 11 04:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.974 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a46fce07-d061-4a64-bdfb-bd3b3dff9778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.980 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.982 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.983 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:43 np0005481065 nova_compute[260935]: 2025-10-11 08:53:43.983 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.018 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.028 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 090243f9-46ac-42a1-b921-fa3b1974f127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[861d96b0-997c-4dca-9fa2-f0b6198bbcbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.059 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.059 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.060 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a65c6f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:44 np0005481065 NetworkManager[44960]: <info>  [1760172824.0632] manager: (tapa1a65c6f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct 11 04:53:44 np0005481065 kernel: tapa1a65c6f-c0: entered promiscuous mode
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.066 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a65c6f-c0, col_values=(('external_ids', {'iface-id': '4bae176b-fbae-4a70-a041-16a7a5205899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:44 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:44Z|00412|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.088 2 DEBUG nova.policy [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a171de1f79843e0b048393cabfee77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9af27fad6b5a4783b66213343f27f0a1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.104 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a44de22-9005-4a56-8425-fd15fcd18326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.106 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.107 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'env', 'PROCESS_TAG=haproxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.357 2 DEBUG nova.compute.manager [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.358 2 DEBUG oslo_concurrency.lockutils [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.359 2 DEBUG oslo_concurrency.lockutils [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.362 2 DEBUG oslo_concurrency.lockutils [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.363 2 DEBUG nova.compute.manager [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.364 2 WARNING nova.compute.manager [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.402 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 090243f9-46ac-42a1-b921-fa3b1974f127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125527845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.451 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-unplugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.453 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.453 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.454 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.455 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] No waiting events found dispatching network-vif-unplugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.455 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-unplugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.456 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.457 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.457 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.458 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.459 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] No waiting events found dispatching network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.460 2 WARNING nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received unexpected event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for instance with vm_state active and task_state deleting.
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.461 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.511 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:44 np0005481065 podman[314130]: 2025-10-11 08:53:44.538343682 +0000 UTC m=+0.062996697 container create 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.572 2 DEBUG nova.compute.provider_tree [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:44 np0005481065 systemd[1]: Started libpod-conmon-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5.scope.
Oct 11 04:53:44 np0005481065 podman[314130]: 2025-10-11 08:53:44.506102773 +0000 UTC m=+0.030755828 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.602 2 DEBUG nova.scheduler.client.report [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898e42c3d8153f562219a47f22912dd20bb1c3a0e1902954c3573851fcf1fb5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:44 np0005481065 podman[314130]: 2025-10-11 08:53:44.650394167 +0000 UTC m=+0.175047172 container init 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:53:44 np0005481065 podman[314130]: 2025-10-11 08:53:44.66065629 +0000 UTC m=+0.185309295 container start 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.676 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.677 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.680 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:44 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : New worker (314248) forked
Oct 11 04:53:44 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : Loading success.
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.702 2 DEBUG nova.objects.instance [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 090243f9-46ac-42a1-b921-fa3b1974f127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.705 2 DEBUG nova.network.neutron [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.709 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.709 2 INFO nova.compute.claims [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.724 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.724 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Ensure instance console log exists: /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.725 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.725 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.725 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.744 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.744 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.748 2 INFO nova.compute.manager [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 1.38 seconds to deallocate network for instance.
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.780 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.814 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.815 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.938 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.941 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.941 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Creating image(s)
Oct 11 04:53:44 np0005481065 nova_compute[260935]: 2025-10-11 08:53:44.976 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.015 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.062 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.069 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.126 2 DEBUG nova.policy [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.132 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.178 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Successfully created port: 94e66298-1c10-486d-9cd5-fd680cfa037c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.190 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.193 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.194 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.194 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.219 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.223 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.269 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172825.201277, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.270 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Started (Lifecycle Event)
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.318 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.323 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172825.201428, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.323 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Paused (Lifecycle Event)
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.354 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.359 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.391 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.556 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446357688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.638 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.646 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.689 2 DEBUG nova.compute.provider_tree [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.707 2 DEBUG nova.scheduler.client.report [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.775 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.776 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.781 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.788 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.802 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.802 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Ensure instance console log exists: /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.803 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.803 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.804 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.827 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.827 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.844 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.868 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:53:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 167 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.7 KiB/s wr, 80 op/s
Oct 11 04:53:45 np0005481065 nova_compute[260935]: 2025-10-11 08:53:45.972 2 DEBUG oslo_concurrency.processutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.027 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.032 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.033 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Creating image(s)
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.071 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.110 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.153 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.160 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.261 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.263 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.264 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.264 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.300 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.306 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.416 2 DEBUG nova.policy [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:53:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:53:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/92852459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.468 2 DEBUG oslo_concurrency.processutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.477 2 DEBUG nova.compute.provider_tree [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.496 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Processing event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.498 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.498 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.498 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] No waiting events found dispatching network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.498 2 WARNING nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received unexpected event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.499 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.500 2 DEBUG nova.scheduler.client.report [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.509 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172826.508762, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.509 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.511 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.525 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.528 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.539 2 INFO nova.virt.libvirt.driver [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance spawned successfully.#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.540 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.543 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.560 2 INFO nova.scheduler.client.report [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Deleted allocations for instance 2c551e6f-adba-4963-a583-c5118e2be62a#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.578 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.580 2 DEBUG nova.compute.manager [req-5cd1e8d7-27b0-4e66-8667-0fd01263b2ee req-de94ea56-ee52-4dbf-8002-b0ec841770f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-deleted-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.586 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.586 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.586 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.587 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.587 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.587 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.642 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.712 2 INFO nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 9.93 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.712 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.714 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.721 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.819 2 INFO nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 12.05 seconds to build instance.#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.823 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.851 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.851 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Ensure instance console log exists: /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.853 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.986 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172811.9043243, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.987 2 INFO nova.compute.manager [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.989 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172811.9815793, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:46 np0005481065 nova_compute[260935]: 2025-10-11 08:53:46.989 2 INFO nova.compute.manager [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.011 2 DEBUG nova.compute.manager [None req-c752f7c7-78fa-4959-9b83-d4995827bd89 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.012 2 DEBUG nova.compute.manager [None req-e7a63e32-9373-4e71-8cc4-461e1707c165 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.051 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Successfully created port: 11b55091-2876-4b36-98b0-aa1a4db00d3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.273 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.314 2 WARNING nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 1 instances on the hypervisor.#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.314 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.315 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.315 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 090243f9-46ac-42a1-b921-fa3b1974f127 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.316 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.317 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.318 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.318 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.319 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.319 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.334 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Successfully updated port: 94e66298-1c10-486d-9cd5-fd680cfa037c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.368 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.370 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.370 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquired lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.371 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.402 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Successfully created port: eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.703 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:47 np0005481065 nova_compute[260935]: 2025-10-11 08:53:47.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.4 MiB/s wr, 279 op/s
Oct 11 04:53:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 11 04:53:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 11 04:53:48 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.037 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Successfully updated port: 11b55091-2876-4b36-98b0-aa1a4db00d3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.051 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.051 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.052 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.166 2 DEBUG nova.compute.manager [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-changed-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.167 2 DEBUG nova.compute.manager [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Refreshing instance network info cache due to event network-changed-94e66298-1c10-486d-9cd5-fd680cfa037c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.167 2 DEBUG oslo_concurrency.lockutils [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.232 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.390 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updating instance_info_cache with network_info: [{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.423 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Releasing lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.424 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance network_info: |[{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.424 2 DEBUG oslo_concurrency.lockutils [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.425 2 DEBUG nova.network.neutron [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Refreshing network info cache for port 94e66298-1c10-486d-9cd5-fd680cfa037c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.430 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start _get_guest_xml network_info=[{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.437 2 WARNING nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.445 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.446 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.462 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.463 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.464 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.464 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.465 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.465 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.466 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.466 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.467 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.467 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.468 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.468 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.469 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.469 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.474 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759633833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.0 MiB/s wr, 301 op/s
Oct 11 04:53:49 np0005481065 nova_compute[260935]: 2025-10-11 08:53:49.967 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.029 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.037 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.100 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Successfully updated port: eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.133 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.134 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.134 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.342 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:53:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169621818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.581 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.584 2 DEBUG nova.virt.libvirt.vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-416013570',display_name='tempest-ServerDiskConfigTestJSON-server-416013570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-416013570',id=49,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-2vpa8irs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfig
TestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:43Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=090243f9-46ac-42a1-b921-fa3b1974f127,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.584 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.586 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.588 2 DEBUG nova.objects.instance [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 090243f9-46ac-42a1-b921-fa3b1974f127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.603 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <uuid>090243f9-46ac-42a1-b921-fa3b1974f127</uuid>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <name>instance-00000031</name>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-416013570</nova:name>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:49</nova:creationTime>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <nova:port uuid="94e66298-1c10-486d-9cd5-fd680cfa037c">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <entry name="serial">090243f9-46ac-42a1-b921-fa3b1974f127</entry>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <entry name="uuid">090243f9-46ac-42a1-b921-fa3b1974f127</entry>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/090243f9-46ac-42a1-b921-fa3b1974f127_disk">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/090243f9-46ac-42a1-b921-fa3b1974f127_disk.config">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:a6:01:8a"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <target dev="tap94e66298-1c"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/console.log" append="off"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:50 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:50 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:50 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:50 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.606 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Preparing to wait for external event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.607 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.607 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.608 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.610 2 DEBUG nova.virt.libvirt.vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-416013570',display_name='tempest-ServerDiskConfigTestJSON-server-416013570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-416013570',id=49,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-2vpa8irs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:43Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=090243f9-46ac-42a1-b921-fa3b1974f127,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.611 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.612 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.613 2 DEBUG os_vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94e66298-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94e66298-1c, col_values=(('external_ids', {'iface-id': '94e66298-1c10-486d-9cd5-fd680cfa037c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:01:8a', 'vm-uuid': '090243f9-46ac-42a1-b921-fa3b1974f127'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:50 np0005481065 NetworkManager[44960]: <info>  [1760172830.6275] manager: (tap94e66298-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.636 2 INFO os_vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c')#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.686 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.686 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.686 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:a6:01:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.687 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Using config drive#033[00m
Oct 11 04:53:50 np0005481065 nova_compute[260935]: 2025-10-11 08:53:50.708 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.060 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updating instance_info_cache with network_info: [{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.122 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.123 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance network_info: |[{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.127 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start _get_guest_xml network_info=[{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.135 2 WARNING nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.147 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.148 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.152 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.153 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.154 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.154 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.155 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.156 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.157 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.157 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.157 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.158 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.158 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.159 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.159 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.160 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.164 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.222 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Creating config drive at /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.230 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0_ms0y_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.296 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-changed-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.297 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Refreshing instance network info cache due to event network-changed-11b55091-2876-4b36-98b0-aa1a4db00d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.297 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.298 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.298 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Refreshing network info cache for port 11b55091-2876-4b36-98b0-aa1a4db00d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.395 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0_ms0y_" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.431 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.436 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/194243873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.660 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.661 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deleting local config drive /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.664 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.708 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.714 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:51 np0005481065 kernel: tap94e66298-1c: entered promiscuous mode
Oct 11 04:53:51 np0005481065 NetworkManager[44960]: <info>  [1760172831.7559] manager: (tap94e66298-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Oct 11 04:53:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:51Z|00413|binding|INFO|Claiming lport 94e66298-1c10-486d-9cd5-fd680cfa037c for this chassis.
Oct 11 04:53:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:51Z|00414|binding|INFO|94e66298-1c10-486d-9cd5-fd680cfa037c: Claiming fa:16:3e:a6:01:8a 10.100.0.7
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.787 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:01:8a 10.100.0.7'], port_security=['fa:16:3e:a6:01:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '090243f9-46ac-42a1-b921-fa3b1974f127', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=94e66298-1c10-486d-9cd5-fd680cfa037c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:51Z|00415|binding|INFO|Setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c ovn-installed in OVS
Oct 11 04:53:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:51Z|00416|binding|INFO|Setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c up in Southbound
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.789 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 94e66298-1c10-486d-9cd5-fd680cfa037c in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.792 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.814 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[895e86dc-0a1c-43b8-ac52-0f4f8d05a8c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.816 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.818 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:53:51 np0005481065 systemd-udevd[314810]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.819 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9f5f20-e720-4f71-814a-017066547044]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.820 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1628853-71c3-4cd3-bfe1-e49392d34f43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 NetworkManager[44960]: <info>  [1760172831.8351] device (tap94e66298-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:51 np0005481065 NetworkManager[44960]: <info>  [1760172831.8376] device (tap94e66298-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:51 np0005481065 systemd-machined[215705]: New machine qemu-55-instance-00000031.
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.853 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e943ba1b-e8a5-4a75-8584-4b5975144919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Oct 11 04:53:51 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:51Z|00417|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.887 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6648c49-e8c5-46ee-a1db-d6ca74732c97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.930 2 DEBUG nova.network.neutron [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updated VIF entry in instance network info cache for port 94e66298-1c10-486d-9cd5-fd680cfa037c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.931 2 DEBUG nova.network.neutron [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updating instance_info_cache with network_info: [{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e8753ce9-9f2b-4aef-abb0-c41f47d9ee9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85fcd662-3cc7-4af3-82a0-709f849da72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:51 np0005481065 NetworkManager[44960]: <info>  [1760172831.9516] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Oct 11 04:53:51 np0005481065 systemd-udevd[314814]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.4 MiB/s wr, 207 op/s
Oct 11 04:53:51 np0005481065 nova_compute[260935]: 2025-10-11 08:53:51.976 2 DEBUG oslo_concurrency.lockutils [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6a8383-8955-4c27-bcee-fcf80f85678f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.009 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6d020d3b-145a-483d-9c08-fb61b5aa1cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 NetworkManager[44960]: <info>  [1760172832.0422] device (tape5d4fc7a-10): carrier: link connected
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.050 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b14827a-7ca0-4040-9672-1de1b78918bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.071 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b39e0ba-494a-47dd-87f1-0b520dccd406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467674, 'reachable_time': 39901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314862, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.080 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updating instance_info_cache with network_info: [{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a9359f-7391-41fa-b16d-a0b40a791c04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467674, 'tstamp': 467674}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314863, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.108 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.108 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance network_info: |[{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.110 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a582bd-00b2-4ebc-b193-ca3fe27de974]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467674, 'reachable_time': 39901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314864, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.112 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start _get_guest_xml network_info=[{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.117 2 WARNING nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.122 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.122 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.129 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.129 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.130 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.130 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.131 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.131 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.132 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.132 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.132 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.133 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.133 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.133 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.134 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.134 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.138 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.153 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[013318c4-e63b-4e9e-90df-3b35ce3f1aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.225 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55267de1-5cd6-4495-8bec-0d91f7f85097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.226 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:52 np0005481065 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 04:53:52 np0005481065 NetworkManager[44960]: <info>  [1760172832.2300] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495783055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.232 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:52 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:52Z|00418|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.253 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54402c03-c84b-470d-a33d-3e0dd190fb42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.255 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.257 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.281 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.284 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-1',id=48,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJ
SON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:44Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=d0ac94c4-9bbc-443b-bbce-0d447b37153a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.285 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.287 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.289 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.311 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <uuid>d0ac94c4-9bbc-443b-bbce-0d447b37153a</uuid>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <name>instance-00000030</name>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:name>tempest-MultipleCreateTestJSON-server-904038611-1</nova:name>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:51</nova:creationTime>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <nova:port uuid="11b55091-2876-4b36-98b0-aa1a4db00d3e">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <entry name="serial">d0ac94c4-9bbc-443b-bbce-0d447b37153a</entry>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <entry name="uuid">d0ac94c4-9bbc-443b-bbce-0d447b37153a</entry>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:86:b0:c9"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <target dev="tap11b55091-28"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/console.log" append="off"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:52 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:52 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:52 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:52 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.315 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Preparing to wait for external event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.316 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.317 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.317 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.318 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-1',id=48,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:44Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=d0ac94c4-9bbc-443b-bbce-0d447b37153a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.319 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.320 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.321 2 DEBUG os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11b55091-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11b55091-28, col_values=(('external_ids', {'iface-id': '11b55091-2876-4b36-98b0-aa1a4db00d3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:b0:c9', 'vm-uuid': 'd0ac94c4-9bbc-443b-bbce-0d447b37153a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 NetworkManager[44960]: <info>  [1760172832.3329] manager: (tap11b55091-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.342 2 INFO os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28')#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.401 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.402 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.402 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:86:b0:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.403 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Using config drive#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.425 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49145194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.646 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.682 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.694 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:52 np0005481065 podman[314933]: 2025-10-11 08:53:52.700019238 +0000 UTC m=+0.074234418 container create 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:53:52 np0005481065 systemd[1]: Started libpod-conmon-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684.scope.
Oct 11 04:53:52 np0005481065 podman[314933]: 2025-10-11 08:53:52.662148188 +0000 UTC m=+0.036363408 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff5f9aab65ed5ed74268fc74e80741bd6d56127a04d5f99970f5b4eb6779e28/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:52 np0005481065 podman[314933]: 2025-10-11 08:53:52.850025295 +0000 UTC m=+0.224240535 container init 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 04:53:52 np0005481065 podman[314933]: 2025-10-11 08:53:52.855475911 +0000 UTC m=+0.229691081 container start 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.856 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updated VIF entry in instance network info cache for port 11b55091-2876-4b36-98b0-aa1a4db00d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.857 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updating instance_info_cache with network_info: [{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.871 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-changed-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Refreshing instance network info cache due to event network-changed-eb770b47-0e30-41bb-8d08-e52bc86b8fb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:53:52 np0005481065 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Refreshing network info cache for port eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:53:52 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : New worker (314978) forked
Oct 11 04:53:52 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : Loading success.
Oct 11 04:53:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:53:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245159004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.183 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.185 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-2',id=50,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJ
SON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:45Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=c5c5e7c6-36ba-4cdd-9ad5-03996c419556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.186 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.188 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.190 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.356 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Creating config drive at /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.360 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1f3cibw9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.422 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <uuid>c5c5e7c6-36ba-4cdd-9ad5-03996c419556</uuid>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <name>instance-00000032</name>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:name>tempest-MultipleCreateTestJSON-server-904038611-2</nova:name>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:53:52</nova:creationTime>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <nova:port uuid="eb770b47-0e30-41bb-8d08-e52bc86b8fb7">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <entry name="serial">c5c5e7c6-36ba-4cdd-9ad5-03996c419556</entry>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <entry name="uuid">c5c5e7c6-36ba-4cdd-9ad5-03996c419556</entry>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c2:96:8d"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <target dev="tapeb770b47-0e"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/console.log" append="off"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:53:53 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:53:53 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:53:53 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:53:53 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.424 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Preparing to wait for external event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.424 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.425 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.425 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.426 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-2',id=50,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleC
reateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:45Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=c5c5e7c6-36ba-4cdd-9ad5-03996c419556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.427 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.428 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.428 2 DEBUG os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb770b47-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb770b47-0e, col_values=(('external_ids', {'iface-id': 'eb770b47-0e30-41bb-8d08-e52bc86b8fb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:96:8d', 'vm-uuid': 'c5c5e7c6-36ba-4cdd-9ad5-03996c419556'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 NetworkManager[44960]: <info>  [1760172833.4373] manager: (tapeb770b47-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.448 2 INFO os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e')#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.507 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.508 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.508 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:c2:96:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.509 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Using config drive#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.535 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.542 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1f3cibw9" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.576 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.581 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.776 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.777 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deleting local config drive /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:53 np0005481065 systemd-udevd[314847]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:53:53 np0005481065 NetworkManager[44960]: <info>  [1760172833.8369] manager: (tap11b55091-28): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Oct 11 04:53:53 np0005481065 kernel: tap11b55091-28: entered promiscuous mode
Oct 11 04:53:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:53Z|00419|binding|INFO|Claiming lport 11b55091-2876-4b36-98b0-aa1a4db00d3e for this chassis.
Oct 11 04:53:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:53Z|00420|binding|INFO|11b55091-2876-4b36-98b0-aa1a4db00d3e: Claiming fa:16:3e:86:b0:c9 10.100.0.11
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 NetworkManager[44960]: <info>  [1760172833.8601] device (tap11b55091-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:53 np0005481065 NetworkManager[44960]: <info>  [1760172833.8619] device (tap11b55091-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.875 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:b0:c9 10.100.0.11'], port_security=['fa:16:3e:86:b0:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0ac94c4-9bbc-443b-bbce-0d447b37153a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=11b55091-2876-4b36-98b0-aa1a4db00d3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.877 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 11b55091-2876-4b36-98b0-aa1a4db00d3e in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f bound to our chassis#033[00m
Oct 11 04:53:53 np0005481065 systemd-machined[215705]: New machine qemu-56-instance-00000030.
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.880 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f#033[00m
Oct 11 04:53:53 np0005481065 systemd[1]: Started Virtual Machine qemu-56-instance-00000030.
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5afcc04c-df00-461d-9143-32c52048a497]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.901 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaac3fe7a-b1 in ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.903 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaac3fe7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.903 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9d020a89-5e69-4df1-a6cd-bfebd8c4e774]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.904 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[918fcd57-a957-48fc-aa99-590a212852ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.928 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[53fd9b1c-c74a-4c51-8263-25a6357d93e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 225 op/s
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:53Z|00421|binding|INFO|Setting lport 11b55091-2876-4b36-98b0-aa1a4db00d3e ovn-installed in OVS
Oct 11 04:53:53 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:53Z|00422|binding|INFO|Setting lport 11b55091-2876-4b36-98b0-aa1a4db00d3e up in Southbound
Oct 11 04:53:53 np0005481065 nova_compute[260935]: 2025-10-11 08:53:53.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.965 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c24a03-cdf0-4fe6-b0c5-9369ae568886]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.011 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a49950e1-7ba9-4e60-93b4-a8d67da6d1de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 NetworkManager[44960]: <info>  [1760172834.0206] manager: (tapaac3fe7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.019 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73d49e45-9281-442d-9e7d-97310af20357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.022 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172834.0202982, 090243f9-46ac-42a1-b921-fa3b1974f127 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.022 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.044 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.049 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172834.0204086, 090243f9-46ac-42a1-b921-fa3b1974f127 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.049 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.068 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2eabc20a-8288-46d8-8284-c6cf366b732c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.072 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6fdeb1-f2c5-461d-9196-103b790e926a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.074 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:54 np0005481065 NetworkManager[44960]: <info>  [1760172834.1095] device (tapaac3fe7a-b0): carrier: link connected
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.116 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.116 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.117 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.117 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.117 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Processing event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.118 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.118 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.118 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.119 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.119 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] No waiting events found dispatching network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.119 2 WARNING nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received unexpected event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.121 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.121 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.125 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172834.1249022, 090243f9-46ac-42a1-b921-fa3b1974f127 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.125 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.127 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[961ccd76-e2d4-4dc9-bcbf-fc198f4be98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.128 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.136 2 INFO nova.virt.libvirt.driver [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance spawned successfully.#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.137 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.146 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.150 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.159 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b266f02a-c66c-46d2-b9d4-4486669c7572]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315145, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.169 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.170 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.170 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.171 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.172 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.172 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.181 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Creating config drive at /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.187 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpew9l23hy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.196 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5786d7-95e4-427c-8d95-7e1be5210886]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:a6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467881, 'tstamp': 467881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315146, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5cb7c2-2f07-4aea-a87b-ca1766043ef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315148, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.232 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3c02c45f-0c61-4b78-8f28-37fc08d7f790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.282 2 INFO nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 10.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.282 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.326 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172819.3255072, 92be5b35-6b7a-4f95-924d-008348f27b42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.326 2 INFO nova.compute.manager [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.341 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpew9l23hy" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.349 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f49fe4c1-871e-441a-9a1f-c94f078192ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.351 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.351 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.352 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:54 np0005481065 NetworkManager[44960]: <info>  [1760172834.3550] manager: (tapaac3fe7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct 11 04:53:54 np0005481065 kernel: tapaac3fe7a-b0: entered promiscuous mode
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.362 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:54Z|00423|binding|INFO|Releasing lport debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66 from this chassis (sb_readonly=0)
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.394 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[192bf1be-d0e8-4078-92ca-80ddd922973a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.396 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.397 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'env', 'PROCESS_TAG=haproxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.404 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.424 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.483 2 DEBUG nova.compute.manager [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.484 2 DEBUG oslo_concurrency.lockutils [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.484 2 DEBUG oslo_concurrency.lockutils [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.485 2 DEBUG oslo_concurrency.lockutils [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.485 2 DEBUG nova.compute.manager [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Processing event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.486 2 DEBUG nova.compute.manager [None req-147356cb-8766-4ad3-8d9b-86134d03b166 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.502 2 INFO nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 11.93 seconds to build instance.#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.534 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.535 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.536 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.602 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.602 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deleting local config drive /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config because it was imported into RBD.#033[00m
Oct 11 04:53:54 np0005481065 kernel: tapeb770b47-0e: entered promiscuous mode
Oct 11 04:53:54 np0005481065 NetworkManager[44960]: <info>  [1760172834.6571] manager: (tapeb770b47-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Oct 11 04:53:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:54Z|00424|binding|INFO|Claiming lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for this chassis.
Oct 11 04:53:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:54Z|00425|binding|INFO|eb770b47-0e30-41bb-8d08-e52bc86b8fb7: Claiming fa:16:3e:c2:96:8d 10.100.0.9
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.669 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:96:8d 10.100.0.9'], port_security=['fa:16:3e:c2:96:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c5c5e7c6-36ba-4cdd-9ad5-03996c419556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=eb770b47-0e30-41bb-8d08-e52bc86b8fb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:53:54 np0005481065 NetworkManager[44960]: <info>  [1760172834.6706] device (tapeb770b47-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:53:54 np0005481065 NetworkManager[44960]: <info>  [1760172834.6713] device (tapeb770b47-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:53:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:54Z|00426|binding|INFO|Setting lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 ovn-installed in OVS
Oct 11 04:53:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:54Z|00427|binding|INFO|Setting lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 up in Southbound
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:54 np0005481065 nova_compute[260935]: 2025-10-11 08:53:54.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:54 np0005481065 systemd-machined[215705]: New machine qemu-57-instance-00000032.
Oct 11 04:53:54 np0005481065 systemd[1]: Started Virtual Machine qemu-57-instance-00000032.
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:53:54 np0005481065 podman[315236]: 2025-10-11 08:53:54.826952216 +0000 UTC m=+0.059433586 container create 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:53:54
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'default.rgw.control', 'volumes']
Oct 11 04:53:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:53:54 np0005481065 systemd[1]: Started libpod-conmon-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd.scope.
Oct 11 04:53:54 np0005481065 podman[315236]: 2025-10-11 08:53:54.791868015 +0000 UTC m=+0.024349405 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:53:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:53:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696ca5a58f39daa91dbd8051307c63172a5233db2990bf8722059ac5f83a523e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:53:54 np0005481065 podman[315236]: 2025-10-11 08:53:54.938518767 +0000 UTC m=+0.171000167 container init 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 04:53:54 np0005481065 podman[315236]: 2025-10-11 08:53:54.946317029 +0000 UTC m=+0.178798409 container start 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:53:54 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : New worker (315260) forked
Oct 11 04:53:54 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : Loading success.
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.015 162815 INFO neutron.agent.ovn.metadata.agent [-] Port eb770b47-0e30-41bb-8d08-e52bc86b8fb7 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.018 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f98f0c-351c-45e6-974b-020c1d086453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.099 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b17342-a30a-4ed0-abea-b1a48dac7f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.100 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updated VIF entry in instance network info cache for port eb770b47-0e30-41bb-8d08-e52bc86b8fb7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.101 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updating instance_info_cache with network_info: [{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.103 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7e078c-2982-421c-95f1-0f61470e5a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.117 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.163 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ca0248-10cc-446d-89a9-47e937ce4be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8733bf2c-8fdd-4331-8256-1b751128cde4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315352, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.210 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8129e78f-affc-4cf1-bece-57a0c29390d3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467899, 'tstamp': 467899}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315357, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467904, 'tstamp': 467904}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315357, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.213 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.253 2 DEBUG nova.compute.manager [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.258 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.259 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:53:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.340 2 INFO nova.compute.manager [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] instance snapshotting#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.607 2 INFO nova.virt.libvirt.driver [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Beginning live snapshot process#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.728 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7273057, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.728 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.822 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.824 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.829 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.835 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7276287, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.836 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.839 2 INFO nova.virt.libvirt.driver [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance spawned successfully.#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.840 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.855 2 DEBUG nova.virt.libvirt.imagebackend [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.866 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.873 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.873 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.874 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.875 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.876 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.877 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.883 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.926 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.927 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7832527, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.927 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Started (Lifecycle Event)#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.941 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 11.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.942 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.951 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.954 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 225 op/s
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.996 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.997 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7833843, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:55 np0005481065 nova_compute[260935]: 2025-10-11 08:53:55.998 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.018 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.021 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 13.44 seconds to build instance.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.027 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.8278902, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.038 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.039 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.040 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.040 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.044 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.049 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.132 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] creating snapshot(3d25b77f5fcc48438dbbb76e7eed7002) on rbd image(8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:53:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 11 04:53:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 11 04:53:56 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.461 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] cloning vms/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk@3d25b77f5fcc48438dbbb76e7eed7002 to images/309541da-fb02-457f-9678-947ffe38cc30 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.501 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.502 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.504 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.505 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.505 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Processing event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.506 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.507 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.507 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.508 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.511 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] No waiting events found dispatching network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.511 2 WARNING nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received unexpected event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.512 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.531 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172836.5191183, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.546 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.549 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.554 2 INFO nova.virt.libvirt.driver [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance spawned successfully.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.555 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.570 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.576 2 DEBUG nova.compute.manager [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.576 2 DEBUG oslo_concurrency.lockutils [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.577 2 DEBUG oslo_concurrency.lockutils [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.577 2 DEBUG oslo_concurrency.lockutils [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.577 2 DEBUG nova.compute.manager [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] No waiting events found dispatching network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.578 2 WARNING nova.compute.manager [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received unexpected event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e for instance with vm_state active and task_state None.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.589 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] flattening images/309541da-fb02-457f-9678-947ffe38cc30 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.651 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.661 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.661 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.662 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.662 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.663 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.663 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.694 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.742 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 10.71 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.745 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:56 np0005481065 podman[315466]: 2025-10-11 08:53:56.76997306 +0000 UTC m=+0.071457528 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.824 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] removing snapshot(3d25b77f5fcc48438dbbb76e7eed7002) on rbd image(8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.835 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 14.21 seconds to build instance.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.853 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.853 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:53:56 np0005481065 nova_compute[260935]: 2025-10-11 08:53:56.853 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:53:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 11 04:53:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 11 04:53:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 11 04:53:57 np0005481065 nova_compute[260935]: 2025-10-11 08:53:57.448 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] creating snapshot(snap) on rbd image(309541da-fb02-457f-9678-947ffe38cc30) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:53:57 np0005481065 nova_compute[260935]: 2025-10-11 08:53:57.754 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172822.7521193, 2c551e6f-adba-4963-a583-c5118e2be62a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:53:57 np0005481065 nova_compute[260935]: 2025-10-11 08:53:57.754 2 INFO nova.compute.manager [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:53:57 np0005481065 nova_compute[260935]: 2025-10-11 08:53:57.798 2 DEBUG nova.compute.manager [None req-e7bbb00d-90c1-4393-8e96-2bb2761cbf37 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:53:57 np0005481065 nova_compute[260935]: 2025-10-11 08:53:57.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.3 MiB/s wr, 437 op/s
Oct 11 04:53:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:53:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 11 04:53:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 11 04:53:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 309541da-fb02-457f-9678-947ffe38cc30 could not be found.
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 309541da-fb02-457f-9678-947ffe38cc30
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver 
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver 
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 309541da-fb02-457f-9678-947ffe38cc30 could not be found.
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver #033[00m
Oct 11 04:53:58 np0005481065 nova_compute[260935]: 2025-10-11 08:53:58.734 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] removing snapshot(snap) on rbd image(309541da-fb02-457f-9678-947ffe38cc30) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 04:53:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 11 04:53:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 11 04:53:59 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 11 04:53:59 np0005481065 nova_compute[260935]: 2025-10-11 08:53:59.749 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:53:59 np0005481065 nova_compute[260935]: 2025-10-11 08:53:59.900 2 WARNING nova.compute.manager [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Image not found during snapshot: nova.exception.ImageNotFound: Image 309541da-fb02-457f-9678-947ffe38cc30 could not be found.#033[00m
Oct 11 04:53:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 6.6 MiB/s wr, 760 op/s
Oct 11 04:53:59 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:59Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:d1:ed 10.100.0.8
Oct 11 04:53:59 np0005481065 ovn_controller[152945]: 2025-10-11T08:53:59Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:d1:ed 10.100.0.8
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.598 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.599 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.599 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.600 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.600 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.602 2 INFO nova.compute.manager [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Terminating instance#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.604 2 DEBUG nova.compute.manager [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:01 np0005481065 kernel: tap11b55091-28 (unregistering): left promiscuous mode
Oct 11 04:54:01 np0005481065 NetworkManager[44960]: <info>  [1760172841.6562] device (tap11b55091-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:01Z|00428|binding|INFO|Releasing lport 11b55091-2876-4b36-98b0-aa1a4db00d3e from this chassis (sb_readonly=0)
Oct 11 04:54:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:01Z|00429|binding|INFO|Setting lport 11b55091-2876-4b36-98b0-aa1a4db00d3e down in Southbound
Oct 11 04:54:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:01Z|00430|binding|INFO|Removing iface tap11b55091-28 ovn-installed in OVS
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.708 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:b0:c9 10.100.0.11'], port_security=['fa:16:3e:86:b0:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0ac94c4-9bbc-443b-bbce-0d447b37153a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=11b55091-2876-4b36-98b0-aa1a4db00d3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.710 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 11b55091-2876-4b36-98b0-aa1a4db00d3e in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.713 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f#033[00m
Oct 11 04:54:01 np0005481065 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 11 04:54:01 np0005481065 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000030.scope: Consumed 7.346s CPU time.
Oct 11 04:54:01 np0005481065 systemd-machined[215705]: Machine qemu-56-instance-00000030 terminated.
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.755 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d26c729-50f9-4a84-8f9c-d0517cbbb557]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.800 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3050a010-dc4b-484a-acef-f43aebf0ff10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.804 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb3fe79-f50f-4086-95bb-a3e6ac677c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.848 2 INFO nova.virt.libvirt.driver [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance destroyed successfully.#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.849 2 DEBUG nova.objects.instance [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.853 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa7e5ea-19b4-4d30-ac4e-6006bd32ad83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.881 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28d6aa8a-9de3-4c13-9c83-c37d5ffa3dc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315577, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.908 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aae5895-e693-470f-8e68-dc2d80dea730]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467899, 'tstamp': 467899}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315578, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467904, 'tstamp': 467904}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315578, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.910 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.920 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.920 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.921 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.922 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 4.8 MiB/s wr, 547 op/s
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.959 2 DEBUG nova.virt.libvirt.vif [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-1',id=48,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:55Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=d0ac94c4-9bbc-443b-bbce-0d447b37153a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.959 2 DEBUG nova.network.os_vif_util [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.961 2 DEBUG nova.network.os_vif_util [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.962 2 DEBUG os_vif [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11b55091-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:01 np0005481065 nova_compute[260935]: 2025-10-11 08:54:01.970 2 INFO os_vif [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28')#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.307 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.307 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.308 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.308 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.308 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.310 2 INFO nova.compute.manager [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Terminating instance#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.310 2 DEBUG nova.compute.manager [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.331 2 INFO nova.virt.libvirt.driver [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deleting instance files /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a_del#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.332 2 INFO nova.virt.libvirt.driver [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deletion of /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a_del complete#033[00m
Oct 11 04:54:02 np0005481065 kernel: tapeb770b47-0e (unregistering): left promiscuous mode
Oct 11 04:54:02 np0005481065 NetworkManager[44960]: <info>  [1760172842.3470] device (tapeb770b47-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00431|binding|INFO|Releasing lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 from this chassis (sb_readonly=0)
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00432|binding|INFO|Setting lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 down in Southbound
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00433|binding|INFO|Removing iface tapeb770b47-0e ovn-installed in OVS
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 11 04:54:02 np0005481065 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Consumed 6.631s CPU time.
Oct 11 04:54:02 np0005481065 systemd-machined[215705]: Machine qemu-57-instance-00000032 terminated.
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.545 2 INFO nova.virt.libvirt.driver [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance destroyed successfully.#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.546 2 DEBUG nova.objects.instance [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.621 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:96:8d 10.100.0.9'], port_security=['fa:16:3e:c2:96:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c5c5e7c6-36ba-4cdd-9ad5-03996c419556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=eb770b47-0e30-41bb-8d08-e52bc86b8fb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.622 162815 INFO neutron.agent.ovn.metadata.agent [-] Port eb770b47-0e30-41bb-8d08-e52bc86b8fb7 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis#033[00m
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.624 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.625 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10e4bbfd-eb39-4e7e-82db-d4cb7796fd90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.626 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace which is not needed anymore#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.679 2 DEBUG nova.virt.libvirt.vif [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-2',id=50,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-11T08:53:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:56Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=c5c5e7c6-36ba-4cdd-9ad5-03996c419556,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.681 2 DEBUG nova.network.os_vif_util [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.681 2 DEBUG nova.network.os_vif_util [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.682 2 DEBUG os_vif [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb770b47-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.685 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.686 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.686 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.687 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.687 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.688 2 INFO nova.compute.manager [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Terminating instance#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.689 2 DEBUG nova.compute.manager [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.694 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.694 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.695 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.695 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.695 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.696 2 INFO nova.compute.manager [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Terminating instance#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.697 2 DEBUG nova.compute.manager [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.698 2 INFO os_vif [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e')#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.731 2 INFO nova.compute.manager [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.732 2 DEBUG oslo.service.loopingcall [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.732 2 DEBUG nova.compute.manager [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.732 2 DEBUG nova.network.neutron [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:02 np0005481065 kernel: tap94e66298-1c (unregistering): left promiscuous mode
Oct 11 04:54:02 np0005481065 NetworkManager[44960]: <info>  [1760172842.7477] device (tap94e66298-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.755 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00434|binding|INFO|Releasing lport 94e66298-1c10-486d-9cd5-fd680cfa037c from this chassis (sb_readonly=1)
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00435|if_status|INFO|Not setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c down as sb is readonly
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00436|binding|INFO|Removing iface tap94e66298-1c ovn-installed in OVS
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00437|binding|INFO|Setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c down in Southbound
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.770 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:01:8a 10.100.0.7'], port_security=['fa:16:3e:a6:01:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '090243f9-46ac-42a1-b921-fa3b1974f127', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=94e66298-1c10-486d-9cd5-fd680cfa037c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 kernel: tap5e003588-ef (unregistering): left promiscuous mode
Oct 11 04:54:02 np0005481065 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 11 04:54:02 np0005481065 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 10.323s CPU time.
Oct 11 04:54:02 np0005481065 NetworkManager[44960]: <info>  [1760172842.8026] device (tap5e003588-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:02 np0005481065 systemd-machined[215705]: Machine qemu-55-instance-00000031 terminated.
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00438|binding|INFO|Releasing lport 5e003588-ef81-4e87-900f-734b2c4bad32 from this chassis (sb_readonly=0)
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00439|binding|INFO|Setting lport 5e003588-ef81-4e87-900f-734b2c4bad32 down in Southbound
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:02Z|00440|binding|INFO|Removing iface tap5e003588-ef ovn-installed in OVS
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.826 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:d1:ed 10.100.0.8'], port_security=['fa:16:3e:97:d1:ed 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5e003588-ef81-4e87-900f-734b2c4bad32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:02 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:02 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:02 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [WARNING]  (315258) : Exiting Master process...
Oct 11 04:54:02 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [ALERT]    (315258) : Current worker (315260) exited with code 143 (Terminated)
Oct 11 04:54:02 np0005481065 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [WARNING]  (315258) : All workers exited. Exiting... (0)
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.832 2 DEBUG nova.compute.manager [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-unplugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.832 2 DEBUG oslo_concurrency.lockutils [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG oslo_concurrency.lockutils [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG oslo_concurrency.lockutils [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG nova.compute.manager [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] No waiting events found dispatching network-vif-unplugged-11b55091-2876-4b36-98b0-aa1a4db00d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG nova.compute.manager [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-unplugged-11b55091-2876-4b36-98b0-aa1a4db00d3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:54:02 np0005481065 systemd[1]: libpod-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd.scope: Deactivated successfully.
Oct 11 04:54:02 np0005481065 podman[315724]: 2025-10-11 08:54:02.845372296 +0000 UTC m=+0.071356786 container died 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 11 04:54:02 np0005481065 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Consumed 13.074s CPU time.
Oct 11 04:54:02 np0005481065 systemd-machined[215705]: Machine qemu-54-instance-0000002f terminated.
Oct 11 04:54:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-696ca5a58f39daa91dbd8051307c63172a5233db2990bf8722059ac5f83a523e-merged.mount: Deactivated successfully.
Oct 11 04:54:02 np0005481065 podman[315724]: 2025-10-11 08:54:02.893998773 +0000 UTC m=+0.119983263 container cleanup 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 04:54:02 np0005481065 systemd[1]: libpod-conmon-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd.scope: Deactivated successfully.
Oct 11 04:54:02 np0005481065 NetworkManager[44960]: <info>  [1760172842.9207] manager: (tap94e66298-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct 11 04:54:02 np0005481065 NetworkManager[44960]: <info>  [1760172842.9521] manager: (tap5e003588-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.959 2 INFO nova.virt.libvirt.driver [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance destroyed successfully.#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.960 2 DEBUG nova.objects.instance [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 090243f9-46ac-42a1-b921-fa3b1974f127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.968 2 INFO nova.virt.libvirt.driver [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance destroyed successfully.#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.968 2 DEBUG nova.objects.instance [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'resources' on Instance uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.974 2 DEBUG nova.virt.libvirt.vif [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-416013570',display_name='tempest-ServerDiskConfigTestJSON-server-416013570',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-416013570',id=49,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-2vpa8irs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:00Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=090243f9-46ac-42a1-b921-fa3b1974f127,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.974 2 DEBUG nova.network.os_vif_util [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.975 2 DEBUG nova.network.os_vif_util [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.975 2 DEBUG os_vif [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94e66298-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.980 2 DEBUG nova.virt.libvirt.vif [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-454723019',display_name='tempest-ImagesOneServerNegativeTestJSON-server-454723019',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-454723019',id=47,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-tv2wxykz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:59Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.981 2 DEBUG nova.network.os_vif_util [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.981 2 DEBUG nova.network.os_vif_util [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.982 2 DEBUG os_vif [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e003588-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:02 np0005481065 nova_compute[260935]: 2025-10-11 08:54:02.986 2 INFO os_vif [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c')#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.000 2 INFO os_vif [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef')#033[00m
Oct 11 04:54:03 np0005481065 podman[315789]: 2025-10-11 08:54:03.025010339 +0000 UTC m=+0.086099747 container remove 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c77e1a-6733-402e-8d63-16f7962339bf]: (4, ('Sat Oct 11 08:54:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd)\n6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd\nSat Oct 11 08:54:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd)\n6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[240ee7ba-756f-458a-9a10-dd65ab8c2fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.038 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:03 np0005481065 kernel: tapaac3fe7a-b0: left promiscuous mode
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2683c1-e4fb-4875-9297-3d8be39f45d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[49fe5b99-27e2-4fa6-8133-c635afe2730d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31e4e408-184b-4469-bab5-f03786c74957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.108 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe1aa36-0951-4b80-adc5-155433773d69]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467870, 'reachable_time': 32885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315881, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 systemd[1]: run-netns-ovnmeta\x2daac3fe7a\x2db6f5\x2d4809\x2d9f09\x2d04a3bbe54c8f.mount: Deactivated successfully.
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.113 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.113 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e35d2f25-afd2-4528-8f3e-99f86c43079f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.114 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.114 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 94e66298-1c10-486d-9cd5-fd680cfa037c in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.115 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c14584a4-d3f1-413e-b5bf-34d82310dba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.117 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.301 2 INFO nova.virt.libvirt.driver [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deleting instance files /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_del#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.302 2 INFO nova.virt.libvirt.driver [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deletion of /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_del complete#033[00m
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [WARNING]  (314974) : Exiting Master process...
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [WARNING]  (314974) : Exiting Master process...
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [ALERT]    (314974) : Current worker (314978) exited with code 143 (Terminated)
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [WARNING]  (314974) : All workers exited. Exiting... (0)
Oct 11 04:54:03 np0005481065 systemd[1]: libpod-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684.scope: Deactivated successfully.
Oct 11 04:54:03 np0005481065 podman[315904]: 2025-10-11 08:54:03.340352661 +0000 UTC m=+0.072095907 container died 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.360 2 INFO nova.compute.manager [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.361 2 DEBUG oslo.service.loopingcall [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.362 2 DEBUG nova.compute.manager [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.362 2 DEBUG nova.network.neutron [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6ff5f9aab65ed5ed74268fc74e80741bd6d56127a04d5f99970f5b4eb6779e28-merged.mount: Deactivated successfully.
Oct 11 04:54:03 np0005481065 podman[315904]: 2025-10-11 08:54:03.397193621 +0000 UTC m=+0.128936867 container cleanup 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 04:54:03 np0005481065 systemd[1]: libpod-conmon-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684.scope: Deactivated successfully.
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:54:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f83aa430-0644-4ff0-b9bc-f1b8b6a89775 does not exist
Oct 11 04:54:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5df8fa34-3f3b-4d63-a7f1-d333ab0d8aab does not exist
Oct 11 04:54:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 91dfbaa9-c364-4783-ac53-e85d6558ba0e does not exist
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:54:03 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:54:03 np0005481065 podman[315944]: 2025-10-11 08:54:03.488972278 +0000 UTC m=+0.061821323 container remove 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf97bd7-90c1-4c3a-91cf-3c12f5e25580]: (4, ('Sat Oct 11 08:54:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684)\n81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684\nSat Oct 11 08:54:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684)\n81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.495 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62654643-70ab-4cb9-a309-036d46067d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.524 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47fbb3c3-5ed1-455d-b995-9f341845cf7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.546 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73f48f54-63f3-40b5-8c88-1dfea14d47e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd95bb6f-4e0d-4db8-8d40-296fddd5c55e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ced2cf0-b9e8-4527-914b-2f792225e979]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467663, 'reachable_time': 24734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315997, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.566 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.566 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0adaa183-a656-4fd9-965c-95f079ffb1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.567 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5e003588-ef81-4e87-900f-734b2c4bad32 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.568 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.572 2 INFO nova.virt.libvirt.driver [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deleting instance files /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_del#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.573 2 INFO nova.virt.libvirt.driver [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deletion of /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_del complete#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.576 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b68927d3-84ba-4752-afd4-7deb5ed71679]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.579 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace which is not needed anymore#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.582 2 INFO nova.virt.libvirt.driver [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deleting instance files /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127_del#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.583 2 INFO nova.virt.libvirt.driver [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deletion of /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127_del complete#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.632 2 DEBUG nova.compute.manager [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-unplugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.632 2 DEBUG oslo_concurrency.lockutils [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.633 2 DEBUG oslo_concurrency.lockutils [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.633 2 DEBUG oslo_concurrency.lockutils [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.633 2 DEBUG nova.compute.manager [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] No waiting events found dispatching network-vif-unplugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.634 2 DEBUG nova.compute.manager [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-unplugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.653 2 INFO nova.compute.manager [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.653 2 DEBUG oslo.service.loopingcall [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.654 2 DEBUG nova.compute.manager [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.654 2 DEBUG nova.network.neutron [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.660 2 INFO nova.compute.manager [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.660 2 DEBUG oslo.service.loopingcall [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.661 2 DEBUG nova.compute.manager [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.661 2 DEBUG nova.network.neutron [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [ALERT]    (314243) : Current worker (314248) exited with code 143 (Terminated)
Oct 11 04:54:03 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [WARNING]  (314243) : All workers exited. Exiting... (0)
Oct 11 04:54:03 np0005481065 systemd[1]: libpod-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5.scope: Deactivated successfully.
Oct 11 04:54:03 np0005481065 podman[316054]: 2025-10-11 08:54:03.727445089 +0000 UTC m=+0.054905857 container died 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:54:03 np0005481065 podman[316054]: 2025-10-11 08:54:03.758570046 +0000 UTC m=+0.086030814 container cleanup 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:54:03 np0005481065 podman[316060]: 2025-10-11 08:54:03.774074418 +0000 UTC m=+0.081973148 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:54:03 np0005481065 systemd[1]: libpod-conmon-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5.scope: Deactivated successfully.
Oct 11 04:54:03 np0005481065 podman[316119]: 2025-10-11 08:54:03.815269743 +0000 UTC m=+0.032862958 container remove 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.824 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ff9c2b-47fc-4fbe-a1c3-dfdb9538a636]: (4, ('Sat Oct 11 08:54:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5)\n7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5\nSat Oct 11 08:54:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5)\n7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[529428f6-5675-4ea5-88e9-3e713ca35451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.867 2 DEBUG nova.network.neutron [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:03 np0005481065 kernel: tapa1a65c6f-c0: left promiscuous mode
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 04:54:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5898e42c3d8153f562219a47f22912dd20bb1c3a0e1902954c3573851fcf1fb5-merged.mount: Deactivated successfully.
Oct 11 04:54:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.891 2 INFO nova.compute.manager [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 1.16 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.911 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[445f355f-9405-4ca5-9016-904d417ceab3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83b27fa3-4d8a-45d3-b7a8-e20143e8f4fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.939 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a29ad126-fa98-42b6-aab4-99cc87153a65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.959 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:03 np0005481065 nova_compute[260935]: 2025-10-11 08:54:03.959 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 7.2 MiB/s wr, 716 op/s
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.966 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce70d240-fc91-401b-8095-fc4f2b227183]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466841, 'reachable_time': 38667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316148, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.968 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.969 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1199c91d-1d81-4f80-96c7-501de420e916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:03 np0005481065 systemd[1]: run-netns-ovnmeta\x2da1a65c6f\x2dcd0e\x2d4ac5\x2db9de\x2d21e41a32ffbf.mount: Deactivated successfully.
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.014 2 DEBUG nova.network.neutron [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.040 2 INFO nova.compute.manager [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 0.68 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.100 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.104 2 DEBUG oslo_concurrency.processutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.206145559 +0000 UTC m=+0.061899246 container create d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:54:04 np0005481065 systemd[1]: Started libpod-conmon-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope.
Oct 11 04:54:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.184685167 +0000 UTC m=+0.040438914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.297282066 +0000 UTC m=+0.153035853 container init d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.309074633 +0000 UTC m=+0.164828330 container start d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.312719907 +0000 UTC m=+0.168473684 container attach d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:04 np0005481065 systemd[1]: libpod-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope: Deactivated successfully.
Oct 11 04:54:04 np0005481065 affectionate_montalcini[316192]: 167 167
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.319338775 +0000 UTC m=+0.175092502 container died d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 04:54:04 np0005481065 conmon[316192]: conmon d61237b684e2b67fcf2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope/container/memory.events
Oct 11 04:54:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-391d942d1c2eb1371a9aef4ba63866e47d7e397fd5f0cfec914fd605f1000cdc-merged.mount: Deactivated successfully.
Oct 11 04:54:04 np0005481065 podman[316176]: 2025-10-11 08:54:04.383071373 +0000 UTC m=+0.238825100 container remove d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 04:54:04 np0005481065 systemd[1]: libpod-conmon-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope: Deactivated successfully.
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.528 2 DEBUG nova.network.neutron [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.542 2 DEBUG nova.network.neutron [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.554 2 INFO nova.compute.manager [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 0.90 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.563 2 INFO nova.compute.manager [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 0.90 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822842830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.620 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.621 2 DEBUG oslo_concurrency.processutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.628 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.634 2 DEBUG nova.compute.provider_tree [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:04 np0005481065 podman[316236]: 2025-10-11 08:54:04.648082859 +0000 UTC m=+0.075518384 container create 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.650 2 DEBUG nova.scheduler.client.report [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.673 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.676 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.704 2 INFO nova.scheduler.client.report [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance d0ac94c4-9bbc-443b-bbce-0d447b37153a#033[00m
Oct 11 04:54:04 np0005481065 podman[316236]: 2025-10-11 08:54:04.614203093 +0000 UTC m=+0.041638678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001452666129452367 of space, bias 1.0, pg target 0.4357998388357101 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:54:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:54:04 np0005481065 systemd[1]: Started libpod-conmon-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope.
Oct 11 04:54:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.774 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:04 np0005481065 podman[316236]: 2025-10-11 08:54:04.781511894 +0000 UTC m=+0.208947459 container init 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:54:04 np0005481065 podman[316236]: 2025-10-11 08:54:04.801964868 +0000 UTC m=+0.229400393 container start 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.801 2 DEBUG oslo_concurrency.processutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:04 np0005481065 podman[316236]: 2025-10-11 08:54:04.806169657 +0000 UTC m=+0.233605172 container attach 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.930 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.932 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.932 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.933 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.933 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] No waiting events found dispatching network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.933 2 WARNING nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received unexpected event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.934 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-unplugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.934 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.935 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.935 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.936 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] No waiting events found dispatching network-vif-unplugged-5e003588-ef81-4e87-900f-734b2c4bad32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.936 2 WARNING nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received unexpected event network-vif-unplugged-5e003588-ef81-4e87-900f-734b2c4bad32 for instance with vm_state deleted and task_state None.
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.937 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.937 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.937 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.938 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.938 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] No waiting events found dispatching network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.938 2 WARNING nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received unexpected event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 for instance with vm_state deleted and task_state None.
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.938 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-deleted-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:04 np0005481065 nova_compute[260935]: 2025-10-11 08:54:04.939 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-deleted-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397735718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.310 2 DEBUG oslo_concurrency.processutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.317 2 DEBUG nova.compute.provider_tree [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.335 2 DEBUG nova.scheduler.client.report [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.360 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.364 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.389 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.394 2 INFO nova.scheduler.client.report [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance c5c5e7c6-36ba-4cdd-9ad5-03996c419556
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.412 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.412 2 DEBUG nova.compute.provider_tree [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.426 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.465 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.481 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.531 2 DEBUG oslo_concurrency.processutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.721 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.722 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.722 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.723 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.723 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] No waiting events found dispatching network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.723 2 WARNING nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received unexpected event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for instance with vm_state deleted and task_state None.
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.724 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-unplugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.724 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.724 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.725 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.725 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] No waiting events found dispatching network-vif-unplugged-94e66298-1c10-486d-9cd5-fd680cfa037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.725 2 WARNING nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received unexpected event network-vif-unplugged-94e66298-1c10-486d-9cd5-fd680cfa037c for instance with vm_state deleted and task_state None.
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.727 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] No waiting events found dispatching network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.727 2 WARNING nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received unexpected event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c for instance with vm_state deleted and task_state None.
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.727 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-deleted-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:05 np0005481065 nova_compute[260935]: 2025-10-11 08:54:05.728 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-deleted-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:05 np0005481065 romantic_neumann[316254]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:54:05 np0005481065 romantic_neumann[316254]: --> relative data size: 1.0
Oct 11 04:54:05 np0005481065 romantic_neumann[316254]: --> All data devices are unavailable
Oct 11 04:54:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Oct 11 04:54:05 np0005481065 systemd[1]: libpod-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope: Deactivated successfully.
Oct 11 04:54:05 np0005481065 podman[316236]: 2025-10-11 08:54:05.980128702 +0000 UTC m=+1.407564217 container died 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 04:54:05 np0005481065 systemd[1]: libpod-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope: Consumed 1.113s CPU time.
Oct 11 04:54:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31269448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.013 2 DEBUG oslo_concurrency.processutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab-merged.mount: Deactivated successfully.
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.024 2 DEBUG nova.compute.provider_tree [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.040 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:54:06 np0005481065 podman[316236]: 2025-10-11 08:54:06.053982058 +0000 UTC m=+1.481417553 container remove 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.060 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.062 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:06 np0005481065 systemd[1]: libpod-conmon-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope: Deactivated successfully.
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.109 2 INFO nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Deleted allocations for instance 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08
Oct 11 04:54:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:06.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.152 2 DEBUG oslo_concurrency.processutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.208 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468553982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.625 2 DEBUG oslo_concurrency.processutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.633 2 DEBUG nova.compute.provider_tree [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.647 2 DEBUG nova.scheduler.client.report [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.673 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.700 2 INFO nova.scheduler.client.report [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Deleted allocations for instance 090243f9-46ac-42a1-b921-fa3b1974f127
Oct 11 04:54:06 np0005481065 nova_compute[260935]: 2025-10-11 08:54:06.769 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:06 np0005481065 podman[316502]: 2025-10-11 08:54:06.916492822 +0000 UTC m=+0.060461015 container create f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:54:06 np0005481065 systemd[1]: Started libpod-conmon-f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4.scope.
Oct 11 04:54:06 np0005481065 podman[316502]: 2025-10-11 08:54:06.887353581 +0000 UTC m=+0.031321844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:07 np0005481065 podman[316502]: 2025-10-11 08:54:07.012122339 +0000 UTC m=+0.156090542 container init f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 04:54:07 np0005481065 podman[316502]: 2025-10-11 08:54:07.026219331 +0000 UTC m=+0.170187504 container start f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:07 np0005481065 podman[316502]: 2025-10-11 08:54:07.030646127 +0000 UTC m=+0.174614320 container attach f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:07 np0005481065 great_lovelace[316518]: 167 167
Oct 11 04:54:07 np0005481065 systemd[1]: libpod-f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4.scope: Deactivated successfully.
Oct 11 04:54:07 np0005481065 podman[316502]: 2025-10-11 08:54:07.034405095 +0000 UTC m=+0.178373268 container died f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 04:54:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-96330046e8420a46328b894f3f8e6191ba87972981afcf24c7d4de53e9f5fd62-merged.mount: Deactivated successfully.
Oct 11 04:54:07 np0005481065 podman[316502]: 2025-10-11 08:54:07.086661295 +0000 UTC m=+0.230629498 container remove f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 04:54:07 np0005481065 systemd[1]: libpod-conmon-f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4.scope: Deactivated successfully.
Oct 11 04:54:07 np0005481065 podman[316542]: 2025-10-11 08:54:07.32428889 +0000 UTC m=+0.065831828 container create f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 04:54:07 np0005481065 systemd[1]: Started libpod-conmon-f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae.scope.
Oct 11 04:54:07 np0005481065 podman[316542]: 2025-10-11 08:54:07.293298667 +0000 UTC m=+0.034841685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:07 np0005481065 podman[316542]: 2025-10-11 08:54:07.433659259 +0000 UTC m=+0.175202277 container init f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:07 np0005481065 podman[316542]: 2025-10-11 08:54:07.444009294 +0000 UTC m=+0.185552232 container start f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 04:54:07 np0005481065 podman[316542]: 2025-10-11 08:54:07.448032319 +0000 UTC m=+0.189575347 container attach f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 04:54:07 np0005481065 nova_compute[260935]: 2025-10-11 08:54:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:07 np0005481065 nova_compute[260935]: 2025-10-11 08:54:07.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Oct 11 04:54:07 np0005481065 nova_compute[260935]: 2025-10-11 08:54:07.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]: {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:    "0": [
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:        {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "devices": [
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "/dev/loop3"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            ],
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_name": "ceph_lv0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_size": "21470642176",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "name": "ceph_lv0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "tags": {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cluster_name": "ceph",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.crush_device_class": "",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.encrypted": "0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osd_id": "0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.type": "block",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.vdo": "0"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            },
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "type": "block",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "vg_name": "ceph_vg0"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:        }
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:    ],
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:    "1": [
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:        {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "devices": [
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "/dev/loop4"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            ],
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_name": "ceph_lv1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_size": "21470642176",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "name": "ceph_lv1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "tags": {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cluster_name": "ceph",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.crush_device_class": "",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.encrypted": "0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osd_id": "1",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.type": "block",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.vdo": "0"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            },
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "type": "block",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "vg_name": "ceph_vg1"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:        }
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:    ],
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:    "2": [
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:        {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "devices": [
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "/dev/loop5"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            ],
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_name": "ceph_lv2",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_size": "21470642176",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "name": "ceph_lv2",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "tags": {
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.cluster_name": "ceph",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.crush_device_class": "",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.encrypted": "0",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osd_id": "2",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.type": "block",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:                "ceph.vdo": "0"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            },
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "type": "block",
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:            "vg_name": "ceph_vg2"
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:        }
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]:    ]
Oct 11 04:54:08 np0005481065 mystifying_spence[316559]: }
Oct 11 04:54:08 np0005481065 podman[316542]: 2025-10-11 08:54:08.322618937 +0000 UTC m=+1.064161875 container died f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 04:54:08 np0005481065 systemd[1]: libpod-f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae.scope: Deactivated successfully.
Oct 11 04:54:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 11 04:54:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 11 04:54:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 11 04:54:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d-merged.mount: Deactivated successfully.
Oct 11 04:54:08 np0005481065 podman[316542]: 2025-10-11 08:54:08.403571715 +0000 UTC m=+1.145114653 container remove f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:08 np0005481065 systemd[1]: libpod-conmon-f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae.scope: Deactivated successfully.
Oct 11 04:54:08 np0005481065 nova_compute[260935]: 2025-10-11 08:54:08.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:08 np0005481065 nova_compute[260935]: 2025-10-11 08:54:08.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.386311547 +0000 UTC m=+0.066923779 container create 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.354931503 +0000 UTC m=+0.035543775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:09 np0005481065 nova_compute[260935]: 2025-10-11 08:54:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:09 np0005481065 systemd[1]: Started libpod-conmon-604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5.scope.
Oct 11 04:54:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.530640943 +0000 UTC m=+0.211253235 container init 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.548188053 +0000 UTC m=+0.228800285 container start 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.552354502 +0000 UTC m=+0.232966764 container attach 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 04:54:09 np0005481065 nice_brattain[316735]: 167 167
Oct 11 04:54:09 np0005481065 systemd[1]: libpod-604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5.scope: Deactivated successfully.
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.556653675 +0000 UTC m=+0.237265907 container died 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:54:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ae0f68c6c65815cef1f89661a5f948bbb43d76dad7c92147208055baf61cd4b3-merged.mount: Deactivated successfully.
Oct 11 04:54:09 np0005481065 podman[316716]: 2025-10-11 08:54:09.613186577 +0000 UTC m=+0.293798829 container remove 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:54:09 np0005481065 podman[316728]: 2025-10-11 08:54:09.623674286 +0000 UTC m=+0.126252611 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 04:54:09 np0005481065 systemd[1]: libpod-conmon-604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5.scope: Deactivated successfully.
Oct 11 04:54:09 np0005481065 podman[316734]: 2025-10-11 08:54:09.690785789 +0000 UTC m=+0.195444784 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:54:09 np0005481065 podman[316802]: 2025-10-11 08:54:09.837172644 +0000 UTC m=+0.065858169 container create 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:54:09 np0005481065 systemd[1]: Started libpod-conmon-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope.
Oct 11 04:54:09 np0005481065 podman[316802]: 2025-10-11 08:54:09.807322282 +0000 UTC m=+0.036007867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:54:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:09 np0005481065 podman[316802]: 2025-10-11 08:54:09.95138744 +0000 UTC m=+0.180073015 container init 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:54:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 11 04:54:09 np0005481065 podman[316802]: 2025-10-11 08:54:09.972179833 +0000 UTC m=+0.200865368 container start 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:54:09 np0005481065 podman[316802]: 2025-10-11 08:54:09.976871297 +0000 UTC m=+0.205556832 container attach 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:54:10 np0005481065 nova_compute[260935]: 2025-10-11 08:54:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:54:10 np0005481065 nova_compute[260935]: 2025-10-11 08:54:10.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:10 np0005481065 nova_compute[260935]: 2025-10-11 08:54:10.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:10 np0005481065 nova_compute[260935]: 2025-10-11 08:54:10.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:10 np0005481065 nova_compute[260935]: 2025-10-11 08:54:10.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:54:10 np0005481065 nova_compute[260935]: 2025-10-11 08:54:10.732 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]: {
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "osd_id": 2,
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "type": "bluestore"
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:    },
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "osd_id": 0,
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "type": "bluestore"
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:    },
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "osd_id": 1,
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:        "type": "bluestore"
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]:    }
Oct 11 04:54:11 np0005481065 upbeat_brown[316819]: }
Oct 11 04:54:11 np0005481065 systemd[1]: libpod-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope: Deactivated successfully.
Oct 11 04:54:11 np0005481065 systemd[1]: libpod-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope: Consumed 1.181s CPU time.
Oct 11 04:54:11 np0005481065 podman[316802]: 2025-10-11 08:54:11.151708527 +0000 UTC m=+1.380394052 container died 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 04:54:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626-merged.mount: Deactivated successfully.
Oct 11 04:54:11 np0005481065 podman[316802]: 2025-10-11 08:54:11.218391509 +0000 UTC m=+1.447077004 container remove 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963532223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:11 np0005481065 systemd[1]: libpod-conmon-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope: Deactivated successfully.
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.246 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:54:11 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 463ce18b-c9f9-4e95-8fe4-b2f77282082a does not exist
Oct 11 04:54:11 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 201a6b9b-3093-48a0-9d40-2757175ac650 does not exist
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:54:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.470 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.471 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.471 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.472 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.535 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.535 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:54:11 np0005481065 nova_compute[260935]: 2025-10-11 08:54:11.561 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 11 04:54:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896342550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.107 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.115 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.136 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.165 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.763 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.764 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.787 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.851 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.851 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.859 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.860 2 INFO nova.compute.claims [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:12 np0005481065 nova_compute[260935]: 2025-10-11 08:54:12.992 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.166 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.166 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.166 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.210 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.211 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 04:54:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214837527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.480 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.489 2 DEBUG nova.compute.provider_tree [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.507 2 DEBUG nova.scheduler.client.report [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.538 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.539 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.590 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.590 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.611 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.636 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.719 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.720 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.721 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Creating image(s)
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.747 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.778 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.805 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.809 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.889 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.891 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.892 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.892 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.924 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:13 np0005481065 nova_compute[260935]: 2025-10-11 08:54:13.929 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 41 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 92 op/s
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.012 2 DEBUG nova.policy [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4dce65b39be45739408ca70d672df84', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.282 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.378 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] resizing rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.623 2 DEBUG nova.objects.instance [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'migration_context' on Instance uuid 1a70666a-f421-4a09-bfa4-f1171cbe2000 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.649 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.649 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Ensure instance console log exists: /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.650 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.651 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:14 np0005481065 nova_compute[260935]: 2025-10-11 08:54:14.651 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:15 np0005481065 nova_compute[260935]: 2025-10-11 08:54:15.271 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Successfully created port: 56e8a82c-acef-4136-b229-8039f0d8c843 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:54:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 41 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 92 op/s
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.719 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Successfully updated port: 56e8a82c-acef-4136-b229-8039f0d8c843 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.748 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.749 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquired lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.749 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.845 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172841.8440382, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.846 2 INFO nova.compute.manager [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Stopped (Lifecycle Event)
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.861 2 DEBUG nova.compute.manager [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-changed-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.862 2 DEBUG nova.compute.manager [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Refreshing instance network info cache due to event network-changed-56e8a82c-acef-4136-b229-8039f0d8c843. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.863 2 DEBUG oslo_concurrency.lockutils [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:54:16 np0005481065 nova_compute[260935]: 2025-10-11 08:54:16.881 2 DEBUG nova.compute.manager [None req-151a3760-b8ab-408b-bcf8-ce0a238ab5b2 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.205 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.543 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172842.5422695, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.544 2 INFO nova.compute.manager [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Stopped (Lifecycle Event)
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.570 2 DEBUG nova.compute.manager [None req-bd800a87-1dfd-4999-aa21-3fa66502cbb8 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.938 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172842.937579, 090243f9-46ac-42a1-b921-fa3b1974f127 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.939 2 INFO nova.compute.manager [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Stopped (Lifecycle Event)
Oct 11 04:54:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.968 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172842.9658763, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.969 2 INFO nova.compute.manager [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Stopped (Lifecycle Event)
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.971 2 DEBUG nova.compute.manager [None req-e399e5c3-5378-4f91-b9d3-93da32675437 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:17 np0005481065 nova_compute[260935]: 2025-10-11 08:54:17.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.000 2 DEBUG nova.compute.manager [None req-83286972-4e47-4a85-8a7e-efbb70d20bbd - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.231 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updating instance_info_cache with network_info: [{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.265 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Releasing lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.266 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance network_info: |[{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.266 2 DEBUG oslo_concurrency.lockutils [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.267 2 DEBUG nova.network.neutron [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Refreshing network info cache for port 56e8a82c-acef-4136-b229-8039f0d8c843 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.273 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start _get_guest_xml network_info=[{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.281 2 WARNING nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.287 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.288 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.299 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.300 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.301 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.301 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.302 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.303 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.303 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.304 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.304 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.305 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.305 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.306 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.306 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.307 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.312 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177491119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.860 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.896 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:18 np0005481065 nova_compute[260935]: 2025-10-11 08:54:18.902 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052950723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.443 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.447 2 DEBUG nova.virt.libvirt.vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1703807190',id=51,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-d42968wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user
_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:13Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1a70666a-f421-4a09-bfa4-f1171cbe2000,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.448 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.449 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.452 2 DEBUG nova.objects.instance [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a70666a-f421-4a09-bfa4-f1171cbe2000 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.472 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <uuid>1a70666a-f421-4a09-bfa4-f1171cbe2000</uuid>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <name>instance-00000033</name>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1703807190</nova:name>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:54:18</nova:creationTime>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:user uuid="d4dce65b39be45739408ca70d672df84">tempest-ImagesOneServerNegativeTestJSON-253514738-project-member</nova:user>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:project uuid="b34d40b1586348c3be3d9142dfe1770d">tempest-ImagesOneServerNegativeTestJSON-253514738</nova:project>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <nova:port uuid="56e8a82c-acef-4136-b229-8039f0d8c843">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <entry name="serial">1a70666a-f421-4a09-bfa4-f1171cbe2000</entry>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <entry name="uuid">1a70666a-f421-4a09-bfa4-f1171cbe2000</entry>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1a70666a-f421-4a09-bfa4-f1171cbe2000_disk">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:cf:5a:16"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <target dev="tap56e8a82c-ac"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/console.log" append="off"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:54:19 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:54:19 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:54:19 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:54:19 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.474 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Preparing to wait for external event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.475 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.476 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.476 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.477 2 DEBUG nova.virt.libvirt.vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1703807190',id=51,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-d42968wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',
owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:13Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1a70666a-f421-4a09-bfa4-f1171cbe2000,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.478 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.479 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.480 2 DEBUG os_vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.488 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56e8a82c-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56e8a82c-ac, col_values=(('external_ids', {'iface-id': '56e8a82c-acef-4136-b229-8039f0d8c843', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:5a:16', 'vm-uuid': '1a70666a-f421-4a09-bfa4-f1171cbe2000'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:19 np0005481065 NetworkManager[44960]: <info>  [1760172859.4935] manager: (tap56e8a82c-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.507 2 INFO os_vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac')#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.569 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.570 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.571 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No VIF found with MAC fa:16:3e:cf:5a:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.571 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Using config drive#033[00m
Oct 11 04:54:19 np0005481065 nova_compute[260935]: 2025-10-11 08:54:19.610 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.156 2 DEBUG nova.network.neutron [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updated VIF entry in instance network info cache for port 56e8a82c-acef-4136-b229-8039f0d8c843. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.157 2 DEBUG nova.network.neutron [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updating instance_info_cache with network_info: [{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.183 2 DEBUG oslo_concurrency.lockutils [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.337 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Creating config drive at /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.344 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdf3svxi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.500 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdf3svxi" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.539 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.544 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.765 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.768 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deleting local config drive /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config because it was imported into RBD.#033[00m
Oct 11 04:54:20 np0005481065 kernel: tap56e8a82c-ac: entered promiscuous mode
Oct 11 04:54:20 np0005481065 NetworkManager[44960]: <info>  [1760172860.8436] manager: (tap56e8a82c-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Oct 11 04:54:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:20Z|00441|binding|INFO|Claiming lport 56e8a82c-acef-4136-b229-8039f0d8c843 for this chassis.
Oct 11 04:54:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:20Z|00442|binding|INFO|56e8a82c-acef-4136-b229-8039f0d8c843: Claiming fa:16:3e:cf:5a:16 10.100.0.9
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.865 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:5a:16 10.100.0.9'], port_security=['fa:16:3e:cf:5a:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1a70666a-f421-4a09-bfa4-f1171cbe2000', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=56e8a82c-acef-4136-b229-8039f0d8c843) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.868 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 56e8a82c-acef-4136-b229-8039f0d8c843 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf bound to our chassis#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.871 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.888 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6da704ca-e878-477a-a334-67662a6f53e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.889 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a65c6f-c1 in ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:54:20 np0005481065 systemd-udevd[317281]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.891 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a65c6f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.892 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b520de9-bc1e-4f40-bb5b-668d2fd16247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e794482-7a42-4085-80ff-e7957bcb82c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:20 np0005481065 systemd-machined[215705]: New machine qemu-58-instance-00000033.
Oct 11 04:54:20 np0005481065 NetworkManager[44960]: <info>  [1760172860.9117] device (tap56e8a82c-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:54:20 np0005481065 NetworkManager[44960]: <info>  [1760172860.9131] device (tap56e8a82c-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.911 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a602382c-c6c8-460e-a078-8d53f7fb2462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:20 np0005481065 systemd[1]: Started Virtual Machine qemu-58-instance-00000033.
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[320cb43d-97d3-4979-8bac-b8210bad8ff2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:20Z|00443|binding|INFO|Setting lport 56e8a82c-acef-4136-b229-8039f0d8c843 ovn-installed in OVS
Oct 11 04:54:20 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:20Z|00444|binding|INFO|Setting lport 56e8a82c-acef-4136-b229-8039f0d8c843 up in Southbound
Oct 11 04:54:20 np0005481065 nova_compute[260935]: 2025-10-11 08:54:20.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.987 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9e9df8-6a2f-4ead-983d-3654fe35f5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:20 np0005481065 systemd-udevd[317285]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:20 np0005481065 NetworkManager[44960]: <info>  [1760172860.9964] manager: (tapa1a65c6f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Oct 11 04:54:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.996 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2057db80-e87d-4c37-92ce-b87c7c5e5e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.058 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[243a7911-593e-40ef-85e2-33230c856f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.062 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10f565a8-5a09-4147-ae51-d936ff004323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 NetworkManager[44960]: <info>  [1760172861.1000] device (tapa1a65c6f-c0): carrier: link connected
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.109 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6050adfc-7bbf-46d9-9b7a-5ca44646a0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03962623-741c-4012-9758-19f105d1a30c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470580, 'reachable_time': 40085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317314, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.161 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d201d4-be6b-4a62-8bb4-19e688fba808]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:555'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470580, 'tstamp': 470580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317330, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4184a9-a708-456c-956e-d6347591fbce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470580, 'reachable_time': 40085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317334, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31ea0098-2335-4690-8c5f-8da53ebdd5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.348 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa33ae7-a25e-44dc-9583-60d97f735f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.351 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.352 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.353 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a65c6f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:21 np0005481065 kernel: tapa1a65c6f-c0: entered promiscuous mode
Oct 11 04:54:21 np0005481065 NetworkManager[44960]: <info>  [1760172861.3566] manager: (tapa1a65c6f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a65c6f-c0, col_values=(('external_ids', {'iface-id': '4bae176b-fbae-4a70-a041-16a7a5205899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:21 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:21Z|00445|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.399 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.401 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b16824db-481f-4155-9ef5-49f1ae0782b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.402 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:54:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.403 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'env', 'PROCESS_TAG=haproxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.738 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172861.7371757, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.739 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Started (Lifecycle Event)#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.764 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.769 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172861.7393045, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.770 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.787 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.791 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:21 np0005481065 nova_compute[260935]: 2025-10-11 08:54:21.811 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:21 np0005481065 podman[317390]: 2025-10-11 08:54:21.863982231 +0000 UTC m=+0.069871404 container create de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:54:21 np0005481065 systemd[1]: Started libpod-conmon-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64.scope.
Oct 11 04:54:21 np0005481065 podman[317390]: 2025-10-11 08:54:21.833075569 +0000 UTC m=+0.038964742 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:54:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:54:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e778c257d1db0a748ad4d230ca44b22155e290228243828b5216cbcae77472/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:21 np0005481065 podman[317390]: 2025-10-11 08:54:21.997986312 +0000 UTC m=+0.203875515 container init de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 04:54:22 np0005481065 podman[317390]: 2025-10-11 08:54:22.008860892 +0000 UTC m=+0.214750055 container start de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:22 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : New worker (317411) forked
Oct 11 04:54:22 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : Loading success.
Oct 11 04:54:22 np0005481065 nova_compute[260935]: 2025-10-11 08:54:22.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.657 2 DEBUG nova.compute.manager [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.659 2 DEBUG oslo_concurrency.lockutils [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.659 2 DEBUG oslo_concurrency.lockutils [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.660 2 DEBUG oslo_concurrency.lockutils [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.660 2 DEBUG nova.compute.manager [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Processing event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.661 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.673 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172864.6723206, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.674 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.676 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.681 2 INFO nova.virt.libvirt.driver [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance spawned successfully.#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.682 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.700 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.704 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.715 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.716 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.717 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.717 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.718 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.719 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.726 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.786 2 INFO nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 11.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.786 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.862 2 INFO nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 12.04 seconds to build instance.#033[00m
Oct 11 04:54:24 np0005481065 nova_compute[260935]: 2025-10-11 08:54:24.882 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.052 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.053 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.073 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.159 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.160 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.169 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.169 2 INFO nova.compute.claims [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.315 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.395 2 DEBUG nova.compute.manager [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.395 2 DEBUG oslo_concurrency.lockutils [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.396 2 DEBUG oslo_concurrency.lockutils [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.396 2 DEBUG oslo_concurrency.lockutils [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.396 2 DEBUG nova.compute.manager [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] No waiting events found dispatching network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.396 2 WARNING nova.compute.manager [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received unexpected event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:54:27 np0005481065 podman[317442]: 2025-10-11 08:54:27.794734652 +0000 UTC m=+0.093102336 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 04:54:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045841869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.858 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.867 2 DEBUG nova.compute.provider_tree [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.890 2 DEBUG nova.scheduler.client.report [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.915 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.917 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.971 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.972 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:54:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:54:27 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.998 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:27.999 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.004 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.026 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.032 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.038 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.038 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.078 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.086 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.086 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.137 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.161 2 DEBUG nova.policy [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e7ae37ce1df34d87a006582ce3cb7d6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ed1c47abf4e1889253402aa1e536f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.164 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.165 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.172 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.172 2 INFO nova.compute.claims [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.201 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.218 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.220 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.220 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Creating image(s)#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.250 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.278 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.322 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.330 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.425 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.434 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.435 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.436 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.437 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.469 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.472 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ceef84cf-9df6-4484-862c-624eab05f1fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.627 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.742 2 DEBUG nova.compute.manager [None req-e0bd14a6-d051-4980-b375-b43e3c5f4192 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.752 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Successfully created port: b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.802 2 INFO nova.compute.manager [None req-e0bd14a6-d051-4980-b375-b43e3c5f4192 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] instance snapshotting#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.835 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ceef84cf-9df6-4484-862c-624eab05f1fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:28 np0005481065 nova_compute[260935]: 2025-10-11 08:54:28.913 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] resizing rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.031 2 WARNING nova.compute.manager [None req-e0bd14a6-d051-4980-b375-b43e3c5f4192 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Image not found during snapshot: nova.exception.ImageNotFound: Image f8549b54-e342-4157-bb2f-4d8e7cd886e5 could not be found.#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.040 2 DEBUG nova.objects.instance [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lazy-loading 'migration_context' on Instance uuid ceef84cf-9df6-4484-862c-624eab05f1fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.058 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.059 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Ensure instance console log exists: /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.060 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.060 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.061 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186694936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.098 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.108 2 DEBUG nova.compute.provider_tree [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.128 2 DEBUG nova.scheduler.client.report [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.156 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.158 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.163 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.172 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.172 2 INFO nova.compute.claims [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.232 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.233 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.261 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.283 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.371 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Successfully updated port: b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.378 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.380 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.381 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Creating image(s)#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.411 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.446 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.483 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.487 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.542 2 DEBUG nova.policy [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c02b9d6bdae439c9f1e49ae63c5c5e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7a37bbdce5194d96bed20d4162e25337', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.553 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.554 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquired lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.554 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.582 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.631 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.633 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.634 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.634 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.669 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.674 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.731 2 DEBUG nova.compute.manager [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-changed-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.733 2 DEBUG nova.compute.manager [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Refreshing instance network info cache due to event network-changed-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.733 2 DEBUG oslo_concurrency.lockutils [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.736 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:54:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:54:29 np0005481065 nova_compute[260935]: 2025-10-11 08:54:29.990 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160616652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.076 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.084 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] resizing rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.143 2 DEBUG nova.compute.provider_tree [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.203 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Successfully created port: b0464a2f-214f-420b-aa3e-70d2e3dce4ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.209 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.210 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.211 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.211 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.211 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.213 2 INFO nova.compute.manager [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Terminating instance#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.215 2 DEBUG nova.compute.manager [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.217 2 DEBUG nova.scheduler.client.report [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.230 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'migration_context' on Instance uuid d60a2dcd-7fb6-4bfe-8351-a38a71164f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.247 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.247 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Ensure instance console log exists: /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.249 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.249 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.250 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.252 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.253 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.257 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.265 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.266 2 INFO nova.compute.claims [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:54:30 np0005481065 kernel: tap56e8a82c-ac (unregistering): left promiscuous mode
Oct 11 04:54:30 np0005481065 NetworkManager[44960]: <info>  [1760172870.2778] device (tap56e8a82c-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.326 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.327 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:54:30 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:30Z|00446|binding|INFO|Releasing lport 56e8a82c-acef-4136-b229-8039f0d8c843 from this chassis (sb_readonly=0)
Oct 11 04:54:30 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:30Z|00447|binding|INFO|Setting lport 56e8a82c-acef-4136-b229-8039f0d8c843 down in Southbound
Oct 11 04:54:30 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:30Z|00448|binding|INFO|Removing iface tap56e8a82c-ac ovn-installed in OVS
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.343 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:5a:16 10.100.0.9'], port_security=['fa:16:3e:cf:5a:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1a70666a-f421-4a09-bfa4-f1171cbe2000', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=56e8a82c-acef-4136-b229-8039f0d8c843) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.346 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 56e8a82c-acef-4136-b229-8039f0d8c843 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.351 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.352 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.352 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c7f043-f86f-44c8-9ac6-1e5c9ef25f3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.354 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace which is not needed anymore#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.370 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:30 np0005481065 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 11 04:54:30 np0005481065 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Consumed 6.452s CPU time.
Oct 11 04:54:30 np0005481065 systemd-machined[215705]: Machine qemu-58-instance-00000033 terminated.
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.460 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.461 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.462 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Creating image(s)#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.498 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.550 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:30 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:30 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:30 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [WARNING]  (317409) : Exiting Master process...
Oct 11 04:54:30 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [WARNING]  (317409) : Exiting Master process...
Oct 11 04:54:30 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [ALERT]    (317409) : Current worker (317411) exited with code 143 (Terminated)
Oct 11 04:54:30 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [WARNING]  (317409) : All workers exited. Exiting... (0)
Oct 11 04:54:30 np0005481065 systemd[1]: libpod-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64.scope: Deactivated successfully.
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.588 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:30 np0005481065 podman[317887]: 2025-10-11 08:54:30.592483198 +0000 UTC m=+0.079452646 container died de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.605 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-68e778c257d1db0a748ad4d230ca44b22155e290228243828b5216cbcae77472-merged.mount: Deactivated successfully.
Oct 11 04:54:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:30 np0005481065 podman[317887]: 2025-10-11 08:54:30.676922366 +0000 UTC m=+0.163891814 container cleanup de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.688 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:30 np0005481065 systemd[1]: libpod-conmon-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64.scope: Deactivated successfully.
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.748 2 DEBUG nova.policy [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c02b9d6bdae439c9f1e49ae63c5c5e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7a37bbdce5194d96bed20d4162e25337', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.762 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.765 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.767 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.767 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:30 np0005481065 podman[317957]: 2025-10-11 08:54:30.772415249 +0000 UTC m=+0.058706805 container remove de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.782 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[645fde58-3a87-49b0-ab78-651f7e05feee]: (4, ('Sat Oct 11 08:54:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64)\nde40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64\nSat Oct 11 08:54:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64)\nde40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.787 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c11aa929-0ce0-4265-bbbd-6a944136e469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:30 np0005481065 kernel: tapa1a65c6f-c0: left promiscuous mode
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.795 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.805 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea46d0f1-ca14-43e8-b2c5-87c47b29eae3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.858 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c668cd0b-5992-44d0-a377-e3f0e543c1c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12ced982-4636-45c4-9e89-4f059653d445]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.865 2 INFO nova.virt.libvirt.driver [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance destroyed successfully.#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.866 2 DEBUG nova.objects.instance [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'resources' on Instance uuid 1a70666a-f421-4a09-bfa4-f1171cbe2000 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0a5da4-3299-4d1b-bacc-9e95428db0ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470567, 'reachable_time': 44844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318000, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 systemd[1]: run-netns-ovnmeta\x2da1a65c6f\x2dcd0e\x2d4ac5\x2db9de\x2d21e41a32ffbf.mount: Deactivated successfully.
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.884 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.884 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c72e2-f9c1-4758-8adc-aa9b6c40e7ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.886 2 DEBUG nova.virt.libvirt.vif [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1703807190',id=51,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-d42968wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:28Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1a70666a-f421-4a09-bfa4-f1171cbe2000,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.886 2 DEBUG nova.network.os_vif_util [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.887 2 DEBUG nova.network.os_vif_util [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.888 2 DEBUG os_vif [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56e8a82c-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.899 2 INFO os_vif [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac')#033[00m
Oct 11 04:54:30 np0005481065 nova_compute[260935]: 2025-10-11 08:54:30.987 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updating instance_info_cache with network_info: [{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.028 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Releasing lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.028 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance network_info: |[{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.031 2 DEBUG oslo_concurrency.lockutils [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.031 2 DEBUG nova.network.neutron [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Refreshing network info cache for port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.035 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start _get_guest_xml network_info=[{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.041 2 WARNING nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.049 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.050 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.054 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.055 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.055 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.055 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.058 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.061 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.158 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731978409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.236 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.248 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] resizing rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.293 2 DEBUG nova.compute.provider_tree [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.307 2 DEBUG nova.scheduler.client.report [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.378 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.378 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.387 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'migration_context' on Instance uuid e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.407 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.408 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Ensure instance console log exists: /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.408 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.409 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.409 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.438 2 INFO nova.virt.libvirt.driver [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deleting instance files /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000_del#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.439 2 INFO nova.virt.libvirt.driver [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deletion of /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000_del complete#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.481 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.482 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.516 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.523 2 INFO nova.compute.manager [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 1.31 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.523 2 DEBUG oslo.service.loopingcall [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.523 2 DEBUG nova.compute.manager [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.524 2 DEBUG nova.network.neutron [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.536 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.568 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Successfully updated port: b0464a2f-214f-420b-aa3e-70d2e3dce4ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.585 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.585 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquired lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.585 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:54:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127925003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.607 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.628 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.631 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.665 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.667 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.667 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Creating image(s)#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.704 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.738 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.763 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.768 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.815 2 DEBUG nova.policy [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c02b9d6bdae439c9f1e49ae63c5c5e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7a37bbdce5194d96bed20d4162e25337', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.820 2 DEBUG nova.compute.manager [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-changed-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.820 2 DEBUG nova.compute.manager [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Refreshing instance network info cache due to event network-changed-b0464a2f-214f-420b-aa3e-70d2e3dce4ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.821 2 DEBUG oslo_concurrency.lockutils [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.864 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.864 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.865 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.865 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.889 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.893 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.934 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.945 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-unplugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.945 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.945 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] No waiting events found dispatching network-vif-unplugged-56e8a82c-acef-4136-b229-8039f0d8c843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-unplugged-56e8a82c-acef-4136-b229-8039f0d8c843 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] No waiting events found dispatching network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:31 np0005481065 nova_compute[260935]: 2025-10-11 08:54:31.947 2 WARNING nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received unexpected event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:54:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:54:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646259643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.085 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.088 2 DEBUG nova.virt.libvirt.vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1070563181',display_name='tempest-ServerAddressesTestJSON-server-1070563181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1070563181',id=52,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ed1c47abf4e1889253402aa1e536f',ramdisk_id='',reservation_id='r-xo306n5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1144666060',owner_user_name='tempest-ServerAddressesTestJSON-1144666060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:28Z,user_data=None,user_id='e7ae37ce1df34d87a006582ce3cb7d6d',uuid=ceef84cf-9df6-4484-862c-624eab05f1fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.089 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converting VIF {"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.090 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.093 2 DEBUG nova.objects.instance [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lazy-loading 'pci_devices' on Instance uuid ceef84cf-9df6-4484-862c-624eab05f1fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.112 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <uuid>ceef84cf-9df6-4484-862c-624eab05f1fe</uuid>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <name>instance-00000034</name>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerAddressesTestJSON-server-1070563181</nova:name>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:54:31</nova:creationTime>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:user uuid="e7ae37ce1df34d87a006582ce3cb7d6d">tempest-ServerAddressesTestJSON-1144666060-project-member</nova:user>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:project uuid="ce2ed1c47abf4e1889253402aa1e536f">tempest-ServerAddressesTestJSON-1144666060</nova:project>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <nova:port uuid="b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <entry name="serial">ceef84cf-9df6-4484-862c-624eab05f1fe</entry>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <entry name="uuid">ceef84cf-9df6-4484-862c-624eab05f1fe</entry>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ceef84cf-9df6-4484-862c-624eab05f1fe_disk">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:4a:a8:a7"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <target dev="tapb6cea59b-74"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/console.log" append="off"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:54:32 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:54:32 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:54:32 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:54:32 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.112 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Preparing to wait for external event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.113 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.113 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.113 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.114 2 DEBUG nova.virt.libvirt.vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1070563181',display_name='tempest-ServerAddressesTestJSON-server-1070563181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1070563181',id=52,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ed1c47abf4e1889253402aa1e536f',ramdisk_id='',reservation_id='r-xo306n5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1144666060',owner_user_name='tempest-ServerAddressesTestJSON-1144666060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:28Z,user_data=None,user_id='e7ae37ce1df34d87a006582ce3cb7d6d',uuid=ceef84cf-9df6-4484-862c-624eab05f1fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.115 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converting VIF {"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.116 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.116 2 DEBUG os_vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cea59b-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6cea59b-74, col_values=(('external_ids', {'iface-id': 'b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:a8:a7', 'vm-uuid': 'ceef84cf-9df6-4484-862c-624eab05f1fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:32 np0005481065 NetworkManager[44960]: <info>  [1760172872.1268] manager: (tapb6cea59b-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.135 2 INFO os_vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74')
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.170 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.208 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Successfully created port: 5a02bb17-5d97-4eef-a91d-245b70cc5e1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.272 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] resizing rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.313 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.314 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.314 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] No VIF found with MAC fa:16:3e:4a:a8:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.315 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Using config drive
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.338 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.411 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'migration_context' on Instance uuid 21a71d10-e13b-47fe-88fd-ec9597f7902e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.434 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.435 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Ensure instance console log exists: /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.435 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.435 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.436 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.694 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Successfully created port: 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.719 2 DEBUG nova.network.neutron [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.740 2 INFO nova.compute.manager [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 1.22 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.794 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.795 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.871 2 DEBUG nova.network.neutron [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updated VIF entry in instance network info cache for port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.872 2 DEBUG nova.network.neutron [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updating instance_info_cache with network_info: [{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.886 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Creating config drive at /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.894 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3616h6tw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:32 np0005481065 nova_compute[260935]: 2025-10-11 08:54:32.955 2 DEBUG oslo_concurrency.lockutils [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.017 2 DEBUG oslo_concurrency.processutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.071 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3616h6tw" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.076 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updating instance_info_cache with network_info: [{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.113 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.118 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.184 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Releasing lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.185 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance network_info: |[{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.187 2 DEBUG oslo_concurrency.lockutils [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.187 2 DEBUG nova.network.neutron [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Refreshing network info cache for port b0464a2f-214f-420b-aa3e-70d2e3dce4ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.193 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start _get_guest_xml network_info=[{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.200 2 WARNING nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.206 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.207 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.221 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.222 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.222 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.223 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.224 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.224 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.225 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.225 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.226 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.226 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.226 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.227 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.227 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.228 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.234 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.314 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Successfully updated port: 5a02bb17-5d97-4eef-a91d-245b70cc5e1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.319 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.319 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deleting local config drive /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config because it was imported into RBD.#033[00m
Oct 11 04:54:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.344 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.345 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquired lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.345 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:54:33 np0005481065 kernel: tapb6cea59b-74: entered promiscuous mode
Oct 11 04:54:33 np0005481065 NetworkManager[44960]: <info>  [1760172873.4145] manager: (tapb6cea59b-74): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct 11 04:54:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:33Z|00449|binding|INFO|Claiming lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for this chassis.
Oct 11 04:54:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:33Z|00450|binding|INFO|b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b: Claiming fa:16:3e:4a:a8:a7 10.100.0.14
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.428 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:a8:a7 10.100.0.14'], port_security=['fa:16:3e:4a:a8:a7 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ceef84cf-9df6-4484-862c-624eab05f1fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f022782f-f41d-4a83-babf-b48f2f344e01', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ed1c47abf4e1889253402aa1e536f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c233320-8183-4c2d-a5ba-21893adcb132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc533fa3-dc44-4446-b57b-1e5f53ba099c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.431 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b in datapath f022782f-f41d-4a83-babf-b48f2f344e01 bound to our chassis#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.434 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f022782f-f41d-4a83-babf-b48f2f344e01#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.453 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14f4a5b5-2211-4a4c-be8a-d6ee0fc4565f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.455 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf022782f-f1 in ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.459 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf022782f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.459 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61bfaccd-4fd9-43c0-a507-273b7b0652ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.460 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[523354d0-9f00-4fe8-877f-c842c2ed814f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 systemd-machined[215705]: New machine qemu-59-instance-00000034.
Oct 11 04:54:33 np0005481065 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Oct 11 04:54:33 np0005481065 systemd-udevd[318476]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507219709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.496 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ce63b4-4cad-4600-878e-ca2fead4c209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 NetworkManager[44960]: <info>  [1760172873.5048] device (tapb6cea59b-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:54:33 np0005481065 NetworkManager[44960]: <info>  [1760172873.5061] device (tapb6cea59b-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.531 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61d2a890-2e91-4c18-b047-5852e42d03da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.541 2 DEBUG oslo_concurrency.processutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:33Z|00451|binding|INFO|Setting lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b ovn-installed in OVS
Oct 11 04:54:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:33Z|00452|binding|INFO|Setting lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b up in Southbound
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.558 2 DEBUG nova.compute.provider_tree [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.575 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.575 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b8305f1a-9ad7-4987-9959-151cf8634183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 NetworkManager[44960]: <info>  [1760172873.5819] manager: (tapf022782f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8fe602-4d0e-47b5-b8a3-8b217f3b939e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.590 2 DEBUG nova.scheduler.client.report [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.622 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.630 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0016a6-a336-4f25-b171-29fb2cc4a96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.634 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49abc609-27bb-470a-9aff-e564b18dd647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.656 2 INFO nova.scheduler.client.report [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Deleted allocations for instance 1a70666a-f421-4a09-bfa4-f1171cbe2000#033[00m
Oct 11 04:54:33 np0005481065 NetworkManager[44960]: <info>  [1760172873.6681] device (tapf022782f-f0): carrier: link connected
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.676 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[97c846b4-cf36-490c-bdaa-d2a86de6a5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.701 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[533b2419-6ec2-4a0d-93b6-c35ae12a845c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf022782f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:5c:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471837, 'reachable_time': 20507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318508, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.728 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80878071-976c-4a66-9a23-a193e419aa05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:5cfe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471837, 'tstamp': 471837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318509, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.749 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.754 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bed711-5384-4e1b-9562-402df5800808]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf022782f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:5c:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471837, 'reachable_time': 20507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318517, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.813 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7533e93-4488-429c-b3bd-1966a7ffc012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829389537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.887 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.900 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[064a419f-051e-4e80-b29a-ed14727384f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.904 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf022782f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.904 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.905 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf022782f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:33 np0005481065 kernel: tapf022782f-f0: entered promiscuous mode
Oct 11 04:54:33 np0005481065 NetworkManager[44960]: <info>  [1760172873.9090] manager: (tapf022782f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.913 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf022782f-f0, col_values=(('external_ids', {'iface-id': '1b264af0-5de7-4f41-b533-8be3fcb7e621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:33Z|00453|binding|INFO|Releasing lport 1b264af0-5de7-4f41-b533-8be3fcb7e621 from this chassis (sb_readonly=0)
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.954 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.955 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f022782f-f41d-4a83-babf-b48f2f344e01.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f022782f-f41d-4a83-babf-b48f2f344e01.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[13a09d7e-7dfc-4414-845e-14939b6b9480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.958 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-f022782f-f41d-4a83-babf-b48f2f344e01
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/f022782f-f41d-4a83-babf-b48f2f344e01.pid.haproxy
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID f022782f-f41d-4a83-babf-b48f2f344e01
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:54:33 np0005481065 nova_compute[260935]: 2025-10-11 08:54:33.961 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.961 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'env', 'PROCESS_TAG=haproxy-f022782f-f41d-4a83-babf-b48f2f344e01', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f022782f-f41d-4a83-babf-b48f2f344e01.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:54:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 226 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 208 op/s
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.030 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-deleted-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.030 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-changed-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.031 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Refreshing instance network info cache due to event network-changed-5a02bb17-5d97-4eef-a91d-245b70cc5e1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.031 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.070 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Successfully updated port: 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.087 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.087 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquired lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.087 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.233 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:54:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095302234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.442 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.446 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-1',id=53,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:29Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=d60a2dcd-7fb6-4bfe-8351-a38a71164f83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.447 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.449 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.451 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'pci_devices' on Instance uuid d60a2dcd-7fb6-4bfe-8351-a38a71164f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:34 np0005481065 podman[318624]: 2025-10-11 08:54:34.454845841 +0000 UTC m=+0.087598479 container create f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.465 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172874.4638848, ceef84cf-9df6-4484-862c-624eab05f1fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.465 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Started (Lifecycle Event)#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.472 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <uuid>d60a2dcd-7fb6-4bfe-8351-a38a71164f83</uuid>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <name>instance-00000035</name>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:name>tempest-ListServersNegativeTestJSON-server-123447316-1</nova:name>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:54:33</nova:creationTime>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:user uuid="5c02b9d6bdae439c9f1e49ae63c5c5e3">tempest-ListServersNegativeTestJSON-567006104-project-member</nova:user>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:project uuid="7a37bbdce5194d96bed20d4162e25337">tempest-ListServersNegativeTestJSON-567006104</nova:project>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <nova:port uuid="b0464a2f-214f-420b-aa3e-70d2e3dce4ac">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <entry name="serial">d60a2dcd-7fb6-4bfe-8351-a38a71164f83</entry>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <entry name="uuid">d60a2dcd-7fb6-4bfe-8351-a38a71164f83</entry>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:9b:af:82"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <target dev="tapb0464a2f-21"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/console.log" append="off"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:54:34 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:54:34 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:54:34 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:54:34 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.473 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Preparing to wait for external event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.473 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.474 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.474 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.476 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-1',id=53,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:29Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=d60a2dcd-7fb6-4bfe-8351-a38a71164f83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.476 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.477 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.478 2 DEBUG os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.488 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updating instance_info_cache with network_info: [{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.490 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:34 np0005481065 podman[318624]: 2025-10-11 08:54:34.400972415 +0000 UTC m=+0.033725123 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0464a2f-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0464a2f-21, col_values=(('external_ids', {'iface-id': 'b0464a2f-214f-420b-aa3e-70d2e3dce4ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:af:82', 'vm-uuid': 'd60a2dcd-7fb6-4bfe-8351-a38a71164f83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:34 np0005481065 NetworkManager[44960]: <info>  [1760172874.4962] manager: (tapb0464a2f-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:34 np0005481065 systemd[1]: Started libpod-conmon-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991.scope.
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.506 2 INFO os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21')#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.511 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Releasing lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.511 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance network_info: |[{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.513 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.514 2 DEBUG nova.network.neutron [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Refreshing network info cache for port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.520 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start _get_guest_xml network_info=[{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.529 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172874.4642432, ceef84cf-9df6-4484-862c-624eab05f1fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.529 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.537 2 DEBUG nova.compute.manager [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-changed-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.537 2 DEBUG nova.compute.manager [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Refreshing instance network info cache due to event network-changed-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.538 2 DEBUG oslo_concurrency.lockutils [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2265d21a76f95933280395fdeb37d947104c519b3836bf0feabc7e53e6859577/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.553 2 WARNING nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.560 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.561 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:54:34 np0005481065 podman[318637]: 2025-10-11 08:54:34.567150544 +0000 UTC m=+0.072354745 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.567 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.568 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.568 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.568 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.569 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.569 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.569 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.570 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.570 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.570 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.576 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:34 np0005481065 podman[318624]: 2025-10-11 08:54:34.578030274 +0000 UTC m=+0.210782932 container init f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:54:34 np0005481065 podman[318624]: 2025-10-11 08:54:34.585487587 +0000 UTC m=+0.218240225 container start f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 04:54:34 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : New worker (318668) forked
Oct 11 04:54:34 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : Loading success.
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.621 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.635 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.660 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.662 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.662 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.663 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No VIF found with MAC fa:16:3e:9b:af:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.663 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Using config drive#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.688 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.706 2 DEBUG nova.network.neutron [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updated VIF entry in instance network info cache for port b0464a2f-214f-420b-aa3e-70d2e3dce4ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.707 2 DEBUG nova.network.neutron [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updating instance_info_cache with network_info: [{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:34 np0005481065 nova_compute[260935]: 2025-10-11 08:54:34.757 2 DEBUG oslo_concurrency.lockutils [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038275198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.023 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.059 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.066 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.114 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Creating config drive at /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.123 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf26us83j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.180 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updating instance_info_cache with network_info: [{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.217 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Releasing lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.218 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance network_info: |[{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.220 2 DEBUG oslo_concurrency.lockutils [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.222 2 DEBUG nova.network.neutron [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Refreshing network info cache for port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.230 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start _get_guest_xml network_info=[{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.239 2 WARNING nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.245 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.246 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.255 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.256 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.257 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.257 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.259 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.259 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.260 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.260 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.261 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.261 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.262 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.262 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.263 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.264 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.270 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.320 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf26us83j" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.365 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.370 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071018622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.523 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.527 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-2',id=54,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:30Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=e263661e-e9c2-4a4d-a6e5-5fc8a7353f50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.528 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.530 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.532 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'pci_devices' on Instance uuid e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.559 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <uuid>e263661e-e9c2-4a4d-a6e5-5fc8a7353f50</uuid>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <name>instance-00000036</name>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:name>tempest-ListServersNegativeTestJSON-server-123447316-2</nova:name>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:54:34</nova:creationTime>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:user uuid="5c02b9d6bdae439c9f1e49ae63c5c5e3">tempest-ListServersNegativeTestJSON-567006104-project-member</nova:user>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:project uuid="7a37bbdce5194d96bed20d4162e25337">tempest-ListServersNegativeTestJSON-567006104</nova:project>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <nova:port uuid="5a02bb17-5d97-4eef-a91d-245b70cc5e1b">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <entry name="serial">e263661e-e9c2-4a4d-a6e5-5fc8a7353f50</entry>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <entry name="uuid">e263661e-e9c2-4a4d-a6e5-5fc8a7353f50</entry>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:2c:7f:c3"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <target dev="tap5a02bb17-5d"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/console.log" append="off"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:54:35 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:54:35 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:54:35 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:54:35 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.560 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Preparing to wait for external event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.560 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.561 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.561 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.562 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-2',id=54,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:30Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=e263661e-e9c2-4a4d-a6e5-5fc8a7353f50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.563 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.564 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.568 2 DEBUG os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a02bb17-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a02bb17-5d, col_values=(('external_ids', {'iface-id': '5a02bb17-5d97-4eef-a91d-245b70cc5e1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:7f:c3', 'vm-uuid': 'e263661e-e9c2-4a4d-a6e5-5fc8a7353f50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:35 np0005481065 NetworkManager[44960]: <info>  [1760172875.5804] manager: (tap5a02bb17-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.594 2 INFO os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d')#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.638 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.639 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deleting local config drive /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config because it was imported into RBD.#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.678 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.678 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.679 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No VIF found with MAC fa:16:3e:2c:7f:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.680 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Using config drive#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.718 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:35 np0005481065 NetworkManager[44960]: <info>  [1760172875.7254] manager: (tapb0464a2f-21): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct 11 04:54:35 np0005481065 systemd-udevd[318497]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:35 np0005481065 kernel: tapb0464a2f-21: entered promiscuous mode
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:35Z|00454|binding|INFO|Claiming lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac for this chassis.
Oct 11 04:54:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:35Z|00455|binding|INFO|b0464a2f-214f-420b-aa3e-70d2e3dce4ac: Claiming fa:16:3e:9b:af:82 10.100.0.4
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.753 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:af:82 10.100.0.4'], port_security=['fa:16:3e:9b:af:82 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60a2dcd-7fb6-4bfe-8351-a38a71164f83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b0464a2f-214f-420b-aa3e-70d2e3dce4ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.755 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b0464a2f-214f-420b-aa3e-70d2e3dce4ac in datapath 72464893-0f19-40a9-84d1-392a298e50b9 bound to our chassis#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.758 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9#033[00m
Oct 11 04:54:35 np0005481065 NetworkManager[44960]: <info>  [1760172875.7691] device (tapb0464a2f-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:54:35 np0005481065 NetworkManager[44960]: <info>  [1760172875.7717] device (tapb0464a2f-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:54:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971376858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da732a24-8b05-4448-9778-bd06734a4f2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.780 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72464893-01 in ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.783 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72464893-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.783 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[58a08904-2ce9-483a-adcc-0e1ac3db1593]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.784 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed45eb2-5f93-469c-9214-67a0e1de52f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 systemd-machined[215705]: New machine qemu-60-instance-00000035.
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.807 2 DEBUG nova.network.neutron [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updated VIF entry in instance network info cache for port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.808 2 DEBUG nova.network.neutron [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updating instance_info_cache with network_info: [{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.814 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[685397cb-76f1-4ae3-9dea-5e7ca89ba0f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.815 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:35 np0005481065 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.852 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1034fe-5017-4a49-8c7f-83407159b3b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.862 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.879 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:35Z|00456|binding|INFO|Setting lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac ovn-installed in OVS
Oct 11 04:54:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:35Z|00457|binding|INFO|Setting lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac up in Southbound
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.891 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb80c43-fd3b-4e81-85e6-5178c30c5931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7084681-c8ad-46fb-9cbd-a3c8f85c21fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 NetworkManager[44960]: <info>  [1760172875.9008] manager: (tap72464893-00): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.944 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.945 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Processing event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.947 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.949 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cc134880-9d74-4a3d-ae9d-37952d6e7bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.953 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[28c4261c-52be-4826-8100-bcce27cdf0b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.954 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172875.9537165, ceef84cf-9df6-4484-862c-624eab05f1fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.954 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.977 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:54:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 226 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 198 op/s
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.987 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:35 np0005481065 NetworkManager[44960]: <info>  [1760172875.9951] device (tap72464893-00): carrier: link connected
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.997 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.999 2 INFO nova.virt.libvirt.driver [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance spawned successfully.#033[00m
Oct 11 04:54:35 np0005481065 nova_compute[260935]: 2025-10-11 08:54:35.999 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3327f642-9e2c-4ad9-9f89-d23603b721f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02fdde35-9038-4036-8d49-950b77c7805f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318893, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.040 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.055 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.056 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.057 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.057 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.058 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.058 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2f473e2f-34ad-4b5a-9cdc-d98365fca672]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:784f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472069, 'tstamp': 472069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318896, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83421c29-955b-4c01-b130-71add83b809c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318914, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.129 2 INFO nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.129 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.137 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[473aef8e-1a90-49f4-ac6b-610da6812a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.140 2 DEBUG nova.compute.manager [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.141 2 DEBUG oslo_concurrency.lockutils [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.142 2 DEBUG oslo_concurrency.lockutils [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.142 2 DEBUG oslo_concurrency.lockutils [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.142 2 DEBUG nova.compute.manager [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] No waiting events found dispatching network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.143 2 WARNING nova.compute.manager [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received unexpected event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.206 2 INFO nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 9.09 seconds to build instance.#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[84a1cf93-8498-4c97-b7ab-acb985182aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.239 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.240 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 NetworkManager[44960]: <info>  [1760172876.2446] manager: (tap72464893-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct 11 04:54:36 np0005481065 kernel: tap72464893-00: entered promiscuous mode
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.253 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:36Z|00458|binding|INFO|Releasing lport fba8f4e1-8635-4527-85f3-29ce2a1033b5 from this chassis (sb_readonly=0)
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.269 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.291 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72464893-0f19-40a9-84d1-392a298e50b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72464893-0f19-40a9-84d1-392a298e50b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.299 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac15d768-3b93-427a-b249-a552f8e0a1d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.300 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/72464893-0f19-40a9-84d1-392a298e50b9.pid.haproxy
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.302 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Creating config drive at /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.304 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'env', 'PROCESS_TAG=haproxy-72464893-0f19-40a9-84d1-392a298e50b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72464893-0f19-40a9-84d1-392a298e50b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.311 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpet0fhntn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2465400486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.375 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.378 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-3',id=55,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-
ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:31Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=21a71d10-e13b-47fe-88fd-ec9597f7902e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.379 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.380 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.382 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21a71d10-e13b-47fe-88fd-ec9597f7902e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.401 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <uuid>21a71d10-e13b-47fe-88fd-ec9597f7902e</uuid>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <name>instance-00000037</name>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:name>tempest-ListServersNegativeTestJSON-server-123447316-3</nova:name>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:54:35</nova:creationTime>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:user uuid="5c02b9d6bdae439c9f1e49ae63c5c5e3">tempest-ListServersNegativeTestJSON-567006104-project-member</nova:user>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:project uuid="7a37bbdce5194d96bed20d4162e25337">tempest-ListServersNegativeTestJSON-567006104</nova:project>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <nova:port uuid="70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <entry name="serial">21a71d10-e13b-47fe-88fd-ec9597f7902e</entry>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <entry name="uuid">21a71d10-e13b-47fe-88fd-ec9597f7902e</entry>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/21a71d10-e13b-47fe-88fd-ec9597f7902e_disk">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:f5:da:34"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <target dev="tap70c55a7a-6f"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/console.log" append="off"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:54:36 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:54:36 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:54:36 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:54:36 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.402 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Preparing to wait for external event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.402 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.403 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.403 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.404 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-3',id=55,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:31Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=21a71d10-e13b-47fe-88fd-ec9597f7902e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.405 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.406 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.406 2 DEBUG os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70c55a7a-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70c55a7a-6f, col_values=(('external_ids', {'iface-id': '70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:da:34', 'vm-uuid': '21a71d10-e13b-47fe-88fd-ec9597f7902e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 NetworkManager[44960]: <info>  [1760172876.4546] manager: (tap70c55a7a-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.465 2 INFO os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f')#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.475 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpet0fhntn" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.511 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.528 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.624 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.625 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.625 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No VIF found with MAC fa:16:3e:f5:da:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.626 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Using config drive#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.653 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.726 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.727 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deleting local config drive /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config because it was imported into RBD.#033[00m
Oct 11 04:54:36 np0005481065 kernel: tap5a02bb17-5d: entered promiscuous mode
Oct 11 04:54:36 np0005481065 NetworkManager[44960]: <info>  [1760172876.7952] manager: (tap5a02bb17-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Oct 11 04:54:36 np0005481065 systemd-udevd[318994]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:36Z|00459|binding|INFO|Claiming lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b for this chassis.
Oct 11 04:54:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:36Z|00460|binding|INFO|5a02bb17-5d97-4eef-a91d-245b70cc5e1b: Claiming fa:16:3e:2c:7f:c3 10.100.0.14
Oct 11 04:54:36 np0005481065 podman[319058]: 2025-10-11 08:54:36.804979574 +0000 UTC m=+0.070350437 container create 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.807 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:7f:c3 10.100.0.14'], port_security=['fa:16:3e:2c:7f:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e263661e-e9c2-4a4d-a6e5-5fc8a7353f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a02bb17-5d97-4eef-a91d-245b70cc5e1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:36 np0005481065 NetworkManager[44960]: <info>  [1760172876.8166] device (tap5a02bb17-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:54:36 np0005481065 NetworkManager[44960]: <info>  [1760172876.8179] device (tap5a02bb17-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:54:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:36Z|00461|binding|INFO|Setting lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b ovn-installed in OVS
Oct 11 04:54:36 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:36Z|00462|binding|INFO|Setting lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b up in Southbound
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:36 np0005481065 systemd-machined[215705]: New machine qemu-61-instance-00000036.
Oct 11 04:54:36 np0005481065 systemd[1]: Started libpod-conmon-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676.scope.
Oct 11 04:54:36 np0005481065 podman[319058]: 2025-10-11 08:54:36.767114974 +0000 UTC m=+0.032485847 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:54:36 np0005481065 systemd[1]: Started Virtual Machine qemu-61-instance-00000036.
Oct 11 04:54:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601b7f68ddbb33e7d9c40d40c0537063b097ca0468dd1d3c4f0f1fbe13b69382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:36 np0005481065 podman[319058]: 2025-10-11 08:54:36.896158454 +0000 UTC m=+0.161529327 container init 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.896 2 DEBUG nova.network.neutron [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updated VIF entry in instance network info cache for port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.897 2 DEBUG nova.network.neutron [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updating instance_info_cache with network_info: [{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:36 np0005481065 podman[319058]: 2025-10-11 08:54:36.906287862 +0000 UTC m=+0.171658725 container start 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:54:36 np0005481065 nova_compute[260935]: 2025-10-11 08:54:36.914 2 DEBUG oslo_concurrency.lockutils [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:36 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : New worker (319103) forked
Oct 11 04:54:36 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : Loading success.
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.985 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis#033[00m
Oct 11 04:54:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.987 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.008 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aec9a17e-5753-4e33-9431-6e1736389ca7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.050 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[91ee42e1-87ae-4ff3-b3a7-e9049332a587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.054 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[837df0ef-7859-496c-aea6-7bdad5fc0f7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.087 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f3459ad3-64c4-4bfd-b4b5-9ee9ce202852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1618d04b-ae13-4b3d-bd00-8f9f49847392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319135, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.139 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.138601, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.139 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Started (Lifecycle Event)#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.143 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d321d792-6e9b-4049-a997-5dacbbd7ac59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319137, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319137, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.145 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.161 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.161 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.163 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.169 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.1413875, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.169 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.197 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.222 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.355 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Creating config drive at /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.364 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikgdgygh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.461 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.463 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.463 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.464 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.464 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.468 2 INFO nova.compute.manager [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Terminating instance#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.469 2 DEBUG nova.compute.manager [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:54:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271091576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:54:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:54:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271091576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:54:37 np0005481065 kernel: tapb6cea59b-74 (unregistering): left promiscuous mode
Oct 11 04:54:37 np0005481065 NetworkManager[44960]: <info>  [1760172877.5332] device (tapb6cea59b-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.536 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikgdgygh" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.563 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.567 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00463|binding|INFO|Releasing lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b from this chassis (sb_readonly=0)
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00464|binding|INFO|Setting lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b down in Southbound
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00465|binding|INFO|Removing iface tapb6cea59b-74 ovn-installed in OVS
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.588 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:a8:a7 10.100.0.14'], port_security=['fa:16:3e:4a:a8:a7 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ceef84cf-9df6-4484-862c-624eab05f1fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f022782f-f41d-4a83-babf-b48f2f344e01', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ed1c47abf4e1889253402aa1e536f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c233320-8183-4c2d-a5ba-21893adcb132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc533fa3-dc44-4446-b57b-1e5f53ba099c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.591 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b in datapath f022782f-f41d-4a83-babf-b48f2f344e01 unbound from our chassis#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.594 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f022782f-f41d-4a83-babf-b48f2f344e01, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[197e1690-c883-4771-9b1c-4c2b54c7173a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.596 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 namespace which is not needed anymore#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct 11 04:54:37 np0005481065 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 2.361s CPU time.
Oct 11 04:54:37 np0005481065 systemd-machined[215705]: Machine qemu-59-instance-00000034 terminated.
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.689 2 DEBUG nova.compute.manager [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG oslo_concurrency.lockutils [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG oslo_concurrency.lockutils [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG oslo_concurrency.lockutils [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG nova.compute.manager [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Processing event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.724 2 INFO nova.virt.libvirt.driver [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance destroyed successfully.#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.724 2 DEBUG nova.objects.instance [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lazy-loading 'resources' on Instance uuid ceef84cf-9df6-4484-862c-624eab05f1fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.739 2 DEBUG nova.virt.libvirt.vif [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1070563181',display_name='tempest-ServerAddressesTestJSON-server-1070563181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1070563181',id=52,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ed1c47abf4e1889253402aa1e536f',ramdisk_id='',reservation_id='r-xo306n5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1144666060',owner_user_name='tempest-ServerAddressesTestJSON-1144666060-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:36Z,user_data=None,user_id='e7ae37ce1df34d87a006582ce3cb7d6d',uuid=ceef84cf-9df6-4484-862c-624eab05f1fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.740 2 DEBUG nova.network.os_vif_util [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converting VIF {"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.741 2 DEBUG nova.network.os_vif_util [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.741 2 DEBUG os_vif [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cea59b-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.752 2 INFO os_vif [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74')#033[00m
Oct 11 04:54:37 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:37 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:37 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [WARNING]  (318665) : Exiting Master process...
Oct 11 04:54:37 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [WARNING]  (318665) : Exiting Master process...
Oct 11 04:54:37 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [ALERT]    (318665) : Current worker (318668) exited with code 143 (Terminated)
Oct 11 04:54:37 np0005481065 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [WARNING]  (318665) : All workers exited. Exiting... (0)
Oct 11 04:54:37 np0005481065 systemd[1]: libpod-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991.scope: Deactivated successfully.
Oct 11 04:54:37 np0005481065 podman[319228]: 2025-10-11 08:54:37.77588941 +0000 UTC m=+0.054191597 container died f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.796 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.797 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deleting local config drive /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config because it was imported into RBD.#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.804 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.804 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2265d21a76f95933280395fdeb37d947104c519b3836bf0feabc7e53e6859577-merged.mount: Deactivated successfully.
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.827 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:54:37 np0005481065 podman[319228]: 2025-10-11 08:54:37.840919254 +0000 UTC m=+0.119221441 container cleanup f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 04:54:37 np0005481065 systemd[1]: libpod-conmon-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991.scope: Deactivated successfully.
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 kernel: tap70c55a7a-6f: entered promiscuous mode
Oct 11 04:54:37 np0005481065 systemd-udevd[319081]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 NetworkManager[44960]: <info>  [1760172877.8802] manager: (tap70c55a7a-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00466|binding|INFO|Claiming lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b for this chassis.
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00467|binding|INFO|70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b: Claiming fa:16:3e:f5:da:34 10.100.0.7
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.889 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:da:34 10.100.0.7'], port_security=['fa:16:3e:f5:da:34 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21a71d10-e13b-47fe-88fd-ec9597f7902e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:37 np0005481065 NetworkManager[44960]: <info>  [1760172877.8987] device (tap70c55a7a-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:54:37 np0005481065 NetworkManager[44960]: <info>  [1760172877.9008] device (tap70c55a7a-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00468|binding|INFO|Setting lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b ovn-installed in OVS
Oct 11 04:54:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:37Z|00469|binding|INFO|Setting lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b up in Southbound
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 systemd-machined[215705]: New machine qemu-62-instance-00000037.
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.931 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.931 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:37 np0005481065 systemd[1]: Started Virtual Machine qemu-62-instance-00000037.
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.938 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.938 2 INFO nova.compute.claims [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:54:37 np0005481065 podman[319293]: 2025-10-11 08:54:37.94211373 +0000 UTC m=+0.069692709 container remove f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.951 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.9506705, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.951 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Started (Lifecycle Event)#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.952 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0b5d59-0a9d-43ff-8ed4-eaded2805595]: (4, ('Sat Oct 11 08:54:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 (f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991)\nf3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991\nSat Oct 11 08:54:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 (f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991)\nf3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.958 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0eee0da8-d32f-4082-b419-77d26decc0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf022782f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:37 np0005481065 kernel: tapf022782f-f0: left promiscuous mode
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.964 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.970 2 INFO nova.virt.libvirt.driver [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance spawned successfully.#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.971 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:54:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 287 op/s
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e4498be6-1a42-42d0-b427-4e8b85cc13e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:37 np0005481065 nova_compute[260935]: 2025-10-11 08:54:37.991 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.001 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfd5f20-0bd2-45be-a1b2-a5430267ba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.003 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e779b33e-b2a2-41c6-948d-eb7ca5d3451e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.004 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.004 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.005 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.005 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.005 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.006 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e56aa0c-d230-410a-ad82-a63d5ee6ef7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471827, 'reachable_time': 17984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319322, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 systemd[1]: run-netns-ovnmeta\x2df022782f\x2df41d\x2d4a83\x2dbabf\x2db48f2f344e01.mount: Deactivated successfully.
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.029 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.030 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9de48926-d8d6-4e06-9b71-20a8497b919e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.032 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.033 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.950964, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.033 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.031 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.033 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.059 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67e4fedc-e524-401a-bdf3-caa60845a621]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.067 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.073 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 7.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.073 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.081 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.9594476, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.082 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.110 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.113 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.121 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ac199919-6400-434a-b6db-700c93ef82de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.126 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[66bb6b79-d872-42d3-9841-9c919f0b074e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.174 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 10.00 seconds to build instance.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.177 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.186 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[22ae053b-7691-4654-a46c-b47dbc0c12ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.195 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.209 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27ba249a-cd51-4ec7-86aa-e7fa22390745]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 644, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 644, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319356, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.229 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.229 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.230 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.230 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.230 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Processing event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.231 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.231 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.231 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.232 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.232 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] No waiting events found dispatching network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.232 2 WARNING nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received unexpected event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.233 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-unplugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.233 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.233 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] No waiting events found dispatching network-vif-unplugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-unplugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.235 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.235 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.235 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] No waiting events found dispatching network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.235 2 WARNING nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received unexpected event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.235 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a8afd8-4e1a-4463-8cc4-a1db5b6f7de7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319365, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319365, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.236 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.237 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.242 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.242 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.243 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.243 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172878.2427812, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.243 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.252 2 INFO nova.virt.libvirt.driver [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance spawned successfully.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.253 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.270 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.279 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.280 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.280 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.281 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.281 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.281 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.287 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.325 2 INFO nova.virt.libvirt.driver [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deleting instance files /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe_del#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.326 2 INFO nova.virt.libvirt.driver [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deletion of /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe_del complete#033[00m
Oct 11 04:54:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.353 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 8.97 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.353 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.388 2 INFO nova.compute.manager [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.389 2 DEBUG oslo.service.loopingcall [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.389 2 DEBUG nova.compute.manager [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.390 2 DEBUG nova.network.neutron [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.414 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 10.27 seconds to build instance.#033[00m
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.428 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272426784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.626 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.634 2 DEBUG nova.compute.provider_tree [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.649 2 DEBUG nova.scheduler.client.report [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.677 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.678 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.743 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.743 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.762 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.786 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.878 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.880 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.880 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Creating image(s)
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.920 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.954 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.981 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:38 np0005481065 nova_compute[260935]: 2025-10-11 08:54:38.986 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.034 2 DEBUG nova.policy [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4dce65b39be45739408ca70d672df84', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.037 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172878.9103584, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.037 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Started (Lifecycle Event)
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.057 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.064 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172878.9107077, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.064 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Paused (Lifecycle Event)
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.081 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.086 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.090 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.092 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.093 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.094 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.129 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.133 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.201 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.284 2 DEBUG nova.network.neutron [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.305 2 INFO nova.compute.manager [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 0.92 seconds to deallocate network for instance.
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.359 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.360 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.448 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.527 2 DEBUG oslo_concurrency.processutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.582 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] resizing rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.733 2 DEBUG nova.objects.instance [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'migration_context' on Instance uuid 1ef94ed5-fffa-41e9-b72f-4569354392c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.737 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Successfully created port: db59456c-ccd3-48b6-a889-d54a6e2cdf66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.752 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.752 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Ensure instance console log exists: /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.753 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.753 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.753 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:39 np0005481065 podman[319559]: 2025-10-11 08:54:39.770033263 +0000 UTC m=+0.077849651 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] No waiting events found dispatching network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 WARNING nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received unexpected event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b for instance with vm_state active and task_state None.
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Processing event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] No waiting events found dispatching network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 WARNING nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received unexpected event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b for instance with vm_state building and task_state spawning.
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-deleted-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.795 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.799 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172879.7992473, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.800 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Resumed (Lifecycle Event)
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.808 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.815 2 INFO nova.virt.libvirt.driver [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance spawned successfully.
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.815 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:54:39 np0005481065 podman[319602]: 2025-10-11 08:54:39.892463314 +0000 UTC m=+0.097187493 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.906 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.911 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.942 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.961 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.962 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.962 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.962 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.963 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:39 np0005481065 nova_compute[260935]: 2025-10-11 08:54:39.963 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.019 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 8.35 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.020 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872670261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.082 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 11.70 seconds to build instance.#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.091 2 DEBUG oslo_concurrency.processutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.099 2 DEBUG nova.compute.provider_tree [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.113 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.120 2 DEBUG nova.scheduler.client.report [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.148 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.176 2 INFO nova.scheduler.client.report [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Deleted allocations for instance ceef84cf-9df6-4484-862c-624eab05f1fe#033[00m
Oct 11 04:54:40 np0005481065 nova_compute[260935]: 2025-10-11 08:54:40.255 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:41 np0005481065 nova_compute[260935]: 2025-10-11 08:54:41.295 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Successfully updated port: db59456c-ccd3-48b6-a889-d54a6e2cdf66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:54:41 np0005481065 nova_compute[260935]: 2025-10-11 08:54:41.326 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:41 np0005481065 nova_compute[260935]: 2025-10-11 08:54:41.326 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquired lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:41 np0005481065 nova_compute[260935]: 2025-10-11 08:54:41.327 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:54:41 np0005481065 nova_compute[260935]: 2025-10-11 08:54:41.511 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:54:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Oct 11 04:54:42 np0005481065 nova_compute[260935]: 2025-10-11 08:54:42.520 2 DEBUG nova.compute.manager [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-changed-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:42 np0005481065 nova_compute[260935]: 2025-10-11 08:54:42.522 2 DEBUG nova.compute.manager [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Refreshing instance network info cache due to event network-changed-db59456c-ccd3-48b6-a889-d54a6e2cdf66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:54:42 np0005481065 nova_compute[260935]: 2025-10-11 08:54:42.523 2 DEBUG oslo_concurrency.lockutils [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:54:42 np0005481065 nova_compute[260935]: 2025-10-11 08:54:42.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:42 np0005481065 nova_compute[260935]: 2025-10-11 08:54:42.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.291 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updating instance_info_cache with network_info: [{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.314 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Releasing lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.315 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance network_info: |[{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.315 2 DEBUG oslo_concurrency.lockutils [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.316 2 DEBUG nova.network.neutron [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Refreshing network info cache for port db59456c-ccd3-48b6-a889-d54a6e2cdf66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.320 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start _get_guest_xml network_info=[{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.326 2 WARNING nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.331 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.332 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:54:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.339 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.339 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.340 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.340 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.340 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.347 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.588 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.589 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.592 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.592 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.593 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.594 2 INFO nova.compute.manager [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Terminating instance#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.595 2 DEBUG nova.compute.manager [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:43 np0005481065 kernel: tapb0464a2f-21 (unregistering): left promiscuous mode
Oct 11 04:54:43 np0005481065 NetworkManager[44960]: <info>  [1760172883.6620] device (tapb0464a2f-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:43Z|00470|binding|INFO|Releasing lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac from this chassis (sb_readonly=0)
Oct 11 04:54:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:43Z|00471|binding|INFO|Setting lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac down in Southbound
Oct 11 04:54:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:43Z|00472|binding|INFO|Removing iface tapb0464a2f-21 ovn-installed in OVS
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.740 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:af:82 10.100.0.4'], port_security=['fa:16:3e:9b:af:82 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60a2dcd-7fb6-4bfe-8351-a38a71164f83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b0464a2f-214f-420b-aa3e-70d2e3dce4ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.743 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b0464a2f-214f-420b-aa3e-70d2e3dce4ac in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.745 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9#033[00m
Oct 11 04:54:43 np0005481065 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 11 04:54:43 np0005481065 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 6.430s CPU time.
Oct 11 04:54:43 np0005481065 systemd-machined[215705]: Machine qemu-60-instance-00000035 terminated.
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efa728ef-c05a-4faa-b63e-fa3538136739]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:43 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:43Z|00473|binding|INFO|Releasing lport fba8f4e1-8635-4527-85f3-29ce2a1033b5 from this chassis (sb_readonly=0)
Oct 11 04:54:43 np0005481065 NetworkManager[44960]: <info>  [1760172883.8332] manager: (tapb0464a2f-21): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.848 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[db78581f-0398-4de2-be6a-e7ec63a49744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.857 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6cb2da-09f5-4a96-b7be-83b4eb388e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.866 2 INFO nova.virt.libvirt.driver [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance destroyed successfully.#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.867 2 DEBUG nova.objects.instance [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'resources' on Instance uuid d60a2dcd-7fb6-4bfe-8351-a38a71164f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.884 2 DEBUG nova.virt.libvirt.vif [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-1',id=53,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=d60a2dcd-7fb6-4bfe-8351-a38a71164f83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.885 2 DEBUG nova.network.os_vif_util [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.886 2 DEBUG nova.network.os_vif_util [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.886 2 DEBUG os_vif [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0464a2f-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.902 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d3b183-e127-4ba6-bd6f-e0d4b22d4e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.932 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[defc64ab-734d-4d5b-9438-44a1f2a1939b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319662, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.955 2 INFO os_vif [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21')#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.959 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24283729-1fbb-440d-8d75-7c470f83ec2c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319663, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319663, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.961 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331195559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.965 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.965 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.966 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 8.9 MiB/s wr, 475 op/s
Oct 11 04:54:43 np0005481065 nova_compute[260935]: 2025-10-11 08:54:43.991 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.012 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.018 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.194 2 DEBUG nova.compute.manager [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-unplugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.195 2 DEBUG oslo_concurrency.lockutils [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.196 2 DEBUG oslo_concurrency.lockutils [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.196 2 DEBUG oslo_concurrency.lockutils [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.197 2 DEBUG nova.compute.manager [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] No waiting events found dispatching network-vif-unplugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.198 2 DEBUG nova.compute.manager [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-unplugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.339 2 INFO nova.virt.libvirt.driver [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deleting instance files /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_del#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.341 2 INFO nova.virt.libvirt.driver [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deletion of /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_del complete#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.427 2 INFO nova.compute.manager [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.428 2 DEBUG oslo.service.loopingcall [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.428 2 DEBUG nova.compute.manager [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.428 2 DEBUG nova.network.neutron [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:54:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966572897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.509 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.511 2 DEBUG nova.virt.libvirt.vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1487174086',id=56,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-9kmel6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user
_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1ef94ed5-fffa-41e9-b72f-4569354392c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.512 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.513 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.515 2 DEBUG nova.objects.instance [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ef94ed5-fffa-41e9-b72f-4569354392c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.532 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <uuid>1ef94ed5-fffa-41e9-b72f-4569354392c6</uuid>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <name>instance-00000038</name>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1487174086</nova:name>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:54:43</nova:creationTime>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:user uuid="d4dce65b39be45739408ca70d672df84">tempest-ImagesOneServerNegativeTestJSON-253514738-project-member</nova:user>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:project uuid="b34d40b1586348c3be3d9142dfe1770d">tempest-ImagesOneServerNegativeTestJSON-253514738</nova:project>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <nova:port uuid="db59456c-ccd3-48b6-a889-d54a6e2cdf66">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <entry name="serial">1ef94ed5-fffa-41e9-b72f-4569354392c6</entry>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <entry name="uuid">1ef94ed5-fffa-41e9-b72f-4569354392c6</entry>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1ef94ed5-fffa-41e9-b72f-4569354392c6_disk">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:5b:fd:5e"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <target dev="tapdb59456c-cc"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/console.log" append="off"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:54:44 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:54:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:54:44 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:54:44 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.542 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Preparing to wait for external event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.543 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.543 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.543 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.544 2 DEBUG nova.virt.libvirt.vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1487174086',id=56,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-9kmel6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1ef94ed5-fffa-41e9-b72f-4569354392c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.545 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.546 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.546 2 DEBUG os_vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb59456c-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb59456c-cc, col_values=(('external_ids', {'iface-id': 'db59456c-ccd3-48b6-a889-d54a6e2cdf66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:fd:5e', 'vm-uuid': '1ef94ed5-fffa-41e9-b72f-4569354392c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:44 np0005481065 NetworkManager[44960]: <info>  [1760172884.5593] manager: (tapdb59456c-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.567 2 INFO os_vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc')#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.652 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.652 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.653 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No VIF found with MAC fa:16:3e:5b:fd:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.653 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Using config drive#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.672 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.834 2 DEBUG nova.network.neutron [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updated VIF entry in instance network info cache for port db59456c-ccd3-48b6-a889-d54a6e2cdf66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.835 2 DEBUG nova.network.neutron [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updating instance_info_cache with network_info: [{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:44 np0005481065 nova_compute[260935]: 2025-10-11 08:54:44.860 2 DEBUG oslo_concurrency.lockutils [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.344 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Creating config drive at /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.353 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ocnqdgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.403 2 DEBUG nova.network.neutron [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.438 2 INFO nova.compute.manager [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 1.01 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.514 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ocnqdgb" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.548 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.552 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.607 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.608 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.743 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172870.4721122, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.745 2 INFO nova.compute.manager [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.749 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.750 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deleting local config drive /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config because it was imported into RBD.#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.773 2 DEBUG nova.compute.manager [None req-a8d730d2-aad8-4733-8c08-8d35225543ea - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.778 2 DEBUG oslo_concurrency.processutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:45 np0005481065 kernel: tapdb59456c-cc: entered promiscuous mode
Oct 11 04:54:45 np0005481065 NetworkManager[44960]: <info>  [1760172885.8085] manager: (tapdb59456c-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Oct 11 04:54:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:45Z|00474|binding|INFO|Claiming lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 for this chassis.
Oct 11 04:54:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:45Z|00475|binding|INFO|db59456c-ccd3-48b6-a889-d54a6e2cdf66: Claiming fa:16:3e:5b:fd:5e 10.100.0.10
Oct 11 04:54:45 np0005481065 systemd-udevd[319650]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.821 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.824 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf bound to our chassis#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.826 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf#033[00m
Oct 11 04:54:45 np0005481065 NetworkManager[44960]: <info>  [1760172885.8284] device (tapdb59456c-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:54:45 np0005481065 NetworkManager[44960]: <info>  [1760172885.8295] device (tapdb59456c-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecbf9c8-85d6-42d8-a4db-33800a4740d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.845 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a65c6f-c1 in ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.847 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a65c6f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.847 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[339f59e4-cfc1-4b22-9824-1ec27924c0f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.848 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[372dc595-92c4-4acb-bddf-32f92bba052c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.861 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcbf3f0-3736-41f2-adc6-682b90525d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 systemd-machined[215705]: New machine qemu-63-instance-00000038.
Oct 11 04:54:45 np0005481065 systemd[1]: Started Virtual Machine qemu-63-instance-00000038.
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.892 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc74f96b-615e-4681-b182-db555a53ae14]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:45Z|00476|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 ovn-installed in OVS
Oct 11 04:54:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:45Z|00477|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 up in Southbound
Oct 11 04:54:45 np0005481065 nova_compute[260935]: 2025-10-11 08:54:45.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.937 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3557d2-7a09-4c18-b696-f0dce1329b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 NetworkManager[44960]: <info>  [1760172885.9531] manager: (tapa1a65c6f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Oct 11 04:54:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd7797b-555c-4dc4-91d2-2336755b459b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 1.8 MiB/s wr, 340 op/s
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[21f72bfd-c2ed-4249-a954-00f16d96abbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c8027311-1122-4e8a-b09a-ab7d7af594c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 NetworkManager[44960]: <info>  [1760172886.0331] device (tapa1a65c6f-c0): carrier: link connected
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.042 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6fc794-5730-42da-a780-cda12528ebd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d3448d-9015-4e56-a3f7-f3a669b377d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473073, 'reachable_time': 21336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319849, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bb2875-8f35-45f5-a2af-b3a5fde65dbe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:555'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473073, 'tstamp': 473073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319850, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.104 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03adcd42-5537-452b-b20a-1e47adf7fdc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473073, 'reachable_time': 21336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319851, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.151 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caa10a3d-f0f6-45d4-abc0-a1873984bf73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.247 2 DEBUG nova.compute.manager [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.248 2 DEBUG oslo_concurrency.lockutils [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.248 2 DEBUG oslo_concurrency.lockutils [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.248 2 DEBUG oslo_concurrency.lockutils [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.249 2 DEBUG nova.compute.manager [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Processing event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:54:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336707106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee9e063-82ad-4fed-a10b-50832c158220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.258 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a65c6f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:46 np0005481065 NetworkManager[44960]: <info>  [1760172886.2603] manager: (tapa1a65c6f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 11 04:54:46 np0005481065 kernel: tapa1a65c6f-c0: entered promiscuous mode
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.262 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a65c6f-c0, col_values=(('external_ids', {'iface-id': '4bae176b-fbae-4a70-a041-16a7a5205899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:46 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:46Z|00478|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.264 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.265 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16dea164-eada-4895-b029-43fb0386bf28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.266 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:54:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.267 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'env', 'PROCESS_TAG=haproxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.299 2 DEBUG oslo_concurrency.processutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.309 2 DEBUG nova.compute.provider_tree [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.333 2 DEBUG nova.scheduler.client.report [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.347 2 DEBUG nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.348 2 DEBUG oslo_concurrency.lockutils [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.349 2 DEBUG oslo_concurrency.lockutils [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.349 2 DEBUG oslo_concurrency.lockutils [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.350 2 DEBUG nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] No waiting events found dispatching network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.351 2 WARNING nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received unexpected event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac for instance with vm_state deleted and task_state None.#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.351 2 DEBUG nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-deleted-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.386 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.427 2 INFO nova.scheduler.client.report [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Deleted allocations for instance d60a2dcd-7fb6-4bfe-8351-a38a71164f83#033[00m
Oct 11 04:54:46 np0005481065 nova_compute[260935]: 2025-10-11 08:54:46.530 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:46 np0005481065 podman[319885]: 2025-10-11 08:54:46.735317104 +0000 UTC m=+0.068887826 container create f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:54:46 np0005481065 systemd[1]: Started libpod-conmon-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d.scope.
Oct 11 04:54:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:54:46 np0005481065 podman[319885]: 2025-10-11 08:54:46.702194469 +0000 UTC m=+0.035765231 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:54:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c7b3c28e9a6a99763d7a9f3da3eb8fa1d4e961a8207f68a66aeb356e779987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:54:46 np0005481065 podman[319885]: 2025-10-11 08:54:46.817112256 +0000 UTC m=+0.150682988 container init f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:46 np0005481065 podman[319885]: 2025-10-11 08:54:46.825097214 +0000 UTC m=+0.158667916 container start f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:54:46 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : New worker (319906) forked
Oct 11 04:54:46 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : Loading success.
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.657 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172887.6566904, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.658 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Started (Lifecycle Event)#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.661 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.664 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.668 2 INFO nova.virt.libvirt.driver [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance spawned successfully.#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.669 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.715 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.721 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.726 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.726 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.727 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.727 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.727 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.728 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.760 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172887.6580708, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.761 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.778 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.783 2 INFO nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 8.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.783 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.785 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172887.6641626, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.786 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.822 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.827 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.847 2 INFO nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 9.95 seconds to build instance.#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.872 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:47 np0005481065 nova_compute[260935]: 2025-10-11 08:54:47.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 1.8 MiB/s wr, 377 op/s
Oct 11 04:54:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:48 np0005481065 nova_compute[260935]: 2025-10-11 08:54:48.455 2 DEBUG nova.compute.manager [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:48 np0005481065 nova_compute[260935]: 2025-10-11 08:54:48.456 2 DEBUG oslo_concurrency.lockutils [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:48 np0005481065 nova_compute[260935]: 2025-10-11 08:54:48.458 2 DEBUG oslo_concurrency.lockutils [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:48 np0005481065 nova_compute[260935]: 2025-10-11 08:54:48.458 2 DEBUG oslo_concurrency.lockutils [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:48 np0005481065 nova_compute[260935]: 2025-10-11 08:54:48.459 2 DEBUG nova.compute.manager [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] No waiting events found dispatching network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:48 np0005481065 nova_compute[260935]: 2025-10-11 08:54:48.460 2 WARNING nova.compute.manager [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received unexpected event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.941 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.942 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.942 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.942 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.943 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.944 2 INFO nova.compute.manager [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Terminating instance#033[00m
Oct 11 04:54:49 np0005481065 nova_compute[260935]: 2025-10-11 08:54:49.945 2 DEBUG nova.compute.manager [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Oct 11 04:54:50 np0005481065 kernel: tapdb59456c-cc (unregistering): left promiscuous mode
Oct 11 04:54:50 np0005481065 NetworkManager[44960]: <info>  [1760172890.0119] device (tapdb59456c-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00479|binding|INFO|Releasing lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 from this chassis (sb_readonly=0)
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00480|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 down in Southbound
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00481|binding|INFO|Removing iface tapdb59456c-cc ovn-installed in OVS
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.085 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.086 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.088 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56aa557e-a1e0-4213-b0b7-9dc7617ffc15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.090 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace which is not needed anymore#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 11 04:54:50 np0005481065 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Consumed 3.938s CPU time.
Oct 11 04:54:50 np0005481065 systemd-machined[215705]: Machine qemu-63-instance-00000038 terminated.
Oct 11 04:54:50 np0005481065 kernel: tapdb59456c-cc: entered promiscuous mode
Oct 11 04:54:50 np0005481065 NetworkManager[44960]: <info>  [1760172890.1714] manager: (tapdb59456c-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00482|binding|INFO|Claiming lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 for this chassis.
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00483|binding|INFO|db59456c-ccd3-48b6-a889-d54a6e2cdf66: Claiming fa:16:3e:5b:fd:5e 10.100.0.10
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 kernel: tapdb59456c-cc (unregistering): left promiscuous mode
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.194 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.204 2 INFO nova.virt.libvirt.driver [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance destroyed successfully.#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00484|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 ovn-installed in OVS
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00485|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 up in Southbound
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.204 2 DEBUG nova.objects.instance [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'resources' on Instance uuid 1ef94ed5-fffa-41e9-b72f-4569354392c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00486|binding|INFO|Releasing lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 from this chassis (sb_readonly=1)
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00487|if_status|INFO|Dropped 3 log messages in last 47 seconds (most recently, 47 seconds ago) due to excessive rate
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00488|if_status|INFO|Not setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 down as sb is readonly
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00489|binding|INFO|Removing iface tapdb59456c-cc ovn-installed in OVS
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00490|binding|INFO|Releasing lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 from this chassis (sb_readonly=0)
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00491|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 down in Southbound
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.231 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.237 2 DEBUG nova.virt.libvirt.vif [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1487174086',id=56,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-9kmel6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:47Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1ef94ed5-fffa-41e9-b72f-4569354392c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.237 2 DEBUG nova.network.os_vif_util [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.238 2 DEBUG nova.network.os_vif_util [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.238 2 DEBUG os_vif [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb59456c-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.247 2 INFO os_vif [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc')#033[00m
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [WARNING]  (319904) : Exiting Master process...
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [ALERT]    (319904) : Current worker (319906) exited with code 143 (Terminated)
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [WARNING]  (319904) : All workers exited. Exiting... (0)
Oct 11 04:54:50 np0005481065 systemd[1]: libpod-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d.scope: Deactivated successfully.
Oct 11 04:54:50 np0005481065 podman[319990]: 2025-10-11 08:54:50.298790274 +0000 UTC m=+0.076313277 container died f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.320 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.321 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.321 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.321 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.322 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.324 2 INFO nova.compute.manager [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Terminating instance#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.325 2 DEBUG nova.compute.manager [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-47c7b3c28e9a6a99763d7a9f3da3eb8fa1d4e961a8207f68a66aeb356e779987-merged.mount: Deactivated successfully.
Oct 11 04:54:50 np0005481065 podman[319990]: 2025-10-11 08:54:50.352150285 +0000 UTC m=+0.129673278 container cleanup f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:54:50 np0005481065 systemd[1]: libpod-conmon-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d.scope: Deactivated successfully.
Oct 11 04:54:50 np0005481065 kernel: tap5a02bb17-5d (unregistering): left promiscuous mode
Oct 11 04:54:50 np0005481065 NetworkManager[44960]: <info>  [1760172890.3876] device (tap5a02bb17-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00492|binding|INFO|Releasing lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b from this chassis (sb_readonly=0)
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00493|binding|INFO|Setting lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b down in Southbound
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00494|binding|INFO|Removing iface tap5a02bb17-5d ovn-installed in OVS
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.424 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:7f:c3 10.100.0.14'], port_security=['fa:16:3e:2c:7f:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e263661e-e9c2-4a4d-a6e5-5fc8a7353f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a02bb17-5d97-4eef-a91d-245b70cc5e1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:50 np0005481065 podman[320041]: 2025-10-11 08:54:50.431831527 +0000 UTC m=+0.055896954 container remove f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 04:54:50 np0005481065 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 11 04:54:50 np0005481065 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Consumed 12.118s CPU time.
Oct 11 04:54:50 np0005481065 systemd-machined[215705]: Machine qemu-61-instance-00000036 terminated.
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.446 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb60fa2-b3ca-49bf-adf5-f37a6c36bc15]: (4, ('Sat Oct 11 08:54:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d)\nf9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d\nSat Oct 11 08:54:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d)\nf9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe891f1-df33-47ea-a23b-d9bc4eeedfba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.455 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 kernel: tapa1a65c6f-c0: left promiscuous mode
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[88264f71-5be0-406e-981a-09d22ba9152a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.504 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.504 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.505 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.505 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.505 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.506 2 INFO nova.compute.manager [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Terminating instance#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.507 2 DEBUG nova.compute.manager [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.515 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c810eec3-0ae3-4c73-929e-d2c9ee6bb3b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.516 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62c9f84b-9f83-438e-ae2a-1de3c2aa0548]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.551 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[29ba1dff-e221-4add-bc24-17b7da0cc5a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473063, 'reachable_time': 38189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320065, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 systemd[1]: run-netns-ovnmeta\x2da1a65c6f\x2dcd0e\x2d4ac5\x2db9de\x2d21e41a32ffbf.mount: Deactivated successfully.
Oct 11 04:54:50 np0005481065 kernel: tap70c55a7a-6f (unregistering): left promiscuous mode
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.556 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.556 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ec8a83-c1bb-4f77-b70f-510d9ef21695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.560 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.561 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:50 np0005481065 NetworkManager[44960]: <info>  [1760172890.5620] device (tap70c55a7a-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c75cab0c-2012-408a-861e-4d2780e7f4f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.562 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.563 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc8b78-ef35-4908-9e48-9e5c8ed0b917]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.564 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.565 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9#033[00m
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00495|binding|INFO|Releasing lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b from this chassis (sb_readonly=0)
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00496|binding|INFO|Setting lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b down in Southbound
Oct 11 04:54:50 np0005481065 ovn_controller[152945]: 2025-10-11T08:54:50Z|00497|binding|INFO|Removing iface tap70c55a7a-6f ovn-installed in OVS
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.579 2 INFO nova.virt.libvirt.driver [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance destroyed successfully.#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.579 2 DEBUG nova.objects.instance [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'resources' on Instance uuid e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.590 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:da:34 10.100.0.7'], port_security=['fa:16:3e:f5:da:34 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21a71d10-e13b-47fe-88fd-ec9597f7902e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.591 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7eb0d08-b6b4-4628-8b23-508c039857fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.612 2 DEBUG nova.virt.libvirt.vif [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-2',id=54,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-11T08:54:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=e263661e-e9c2-4a4d-a6e5-5fc8a7353f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.613 2 DEBUG nova.network.os_vif_util [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.613 2 DEBUG nova.network.os_vif_util [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.614 2 DEBUG os_vif [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a02bb17-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.626 2 INFO os_vif [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d')#033[00m
Oct 11 04:54:50 np0005481065 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 11 04:54:50 np0005481065 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000037.scope: Consumed 11.273s CPU time.
Oct 11 04:54:50 np0005481065 systemd-machined[215705]: Machine qemu-62-instance-00000037 terminated.
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.638 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e366586a-2691-4028-a7cf-042313229021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.641 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c48ea3-59e0-4709-82e7-6f8074d707c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.689 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b6ca9b-c904-4398-80fa-3fd9420bf5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.713 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d03ae91-a6dd-46c7-9491-ab7014dc1d81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320107, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f572d752-e84a-4b91-982f-1a7bb6769bd3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320110, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320110, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.748 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.756 2 INFO nova.virt.libvirt.driver [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deleting instance files /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6_del#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.756 2 INFO nova.virt.libvirt.driver [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deletion of /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6_del complete#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.757 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.758 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.758 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.760 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.763 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.764 2 INFO nova.virt.libvirt.driver [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance destroyed successfully.#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.764 2 DEBUG nova.objects.instance [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'resources' on Instance uuid 21a71d10-e13b-47fe-88fd-ec9597f7902e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.766 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72464893-0f19-40a9-84d1-392a298e50b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.768 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3321906e-1382-490f-b19d-795667a26121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.769 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 namespace which is not needed anymore#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.780 2 DEBUG nova.virt.libvirt.vif [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-3',id=55,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-11T08:54:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:40Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=21a71d10-e13b-47fe-88fd-ec9597f7902e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.781 2 DEBUG nova.network.os_vif_util [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.781 2 DEBUG nova.network.os_vif_util [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.782 2 DEBUG os_vif [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70c55a7a-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.793 2 INFO os_vif [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f')#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.831 2 INFO nova.compute.manager [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.832 2 DEBUG oslo.service.loopingcall [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.833 2 DEBUG nova.compute.manager [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:50 np0005481065 nova_compute[260935]: 2025-10-11 08:54:50.833 2 DEBUG nova.network.neutron [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : haproxy version is 2.8.14-c23fe91
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : path to executable is /usr/sbin/haproxy
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [WARNING]  (319100) : Exiting Master process...
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [ALERT]    (319100) : Current worker (319103) exited with code 143 (Terminated)
Oct 11 04:54:50 np0005481065 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [WARNING]  (319100) : All workers exited. Exiting... (0)
Oct 11 04:54:50 np0005481065 systemd[1]: libpod-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676.scope: Deactivated successfully.
Oct 11 04:54:50 np0005481065 podman[320158]: 2025-10-11 08:54:50.993891323 +0000 UTC m=+0.082132723 container died 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:54:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676-userdata-shm.mount: Deactivated successfully.
Oct 11 04:54:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-601b7f68ddbb33e7d9c40d40c0537063b097ca0468dd1d3c4f0f1fbe13b69382-merged.mount: Deactivated successfully.
Oct 11 04:54:51 np0005481065 podman[320158]: 2025-10-11 08:54:51.043437806 +0000 UTC m=+0.131679206 container cleanup 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.064 2 INFO nova.virt.libvirt.driver [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deleting instance files /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_del#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.066 2 INFO nova.virt.libvirt.driver [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deletion of /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_del complete#033[00m
Oct 11 04:54:51 np0005481065 systemd[1]: libpod-conmon-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676.scope: Deactivated successfully.
Oct 11 04:54:51 np0005481065 podman[320190]: 2025-10-11 08:54:51.141097871 +0000 UTC m=+0.067996880 container remove 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.142 2 INFO nova.compute.manager [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.142 2 DEBUG oslo.service.loopingcall [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.143 2 DEBUG nova.compute.manager [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.143 2 DEBUG nova.network.neutron [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.149 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09e39d31-b729-45e8-b6d9-865706a28a6d]: (4, ('Sat Oct 11 08:54:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 (291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676)\n291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676\nSat Oct 11 08:54:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 (291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676)\n291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.152 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3828b9b-8804-496d-89db-c8f9bb8b27b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:54:51 np0005481065 kernel: tap72464893-00: left promiscuous mode
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.193 2 DEBUG nova.compute.manager [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-unplugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.194 2 DEBUG oslo_concurrency.lockutils [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.194 2 DEBUG oslo_concurrency.lockutils [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.195 2 DEBUG oslo_concurrency.lockutils [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.195 2 DEBUG nova.compute.manager [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] No waiting events found dispatching network-vif-unplugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.195 2 DEBUG nova.compute.manager [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-unplugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05135659-4d18-4e3c-aa5f-f289705d7d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a69a76eb-1518-4624-94da-4aaa087122de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fcd1d7-7f8d-44de-bf2b-349e2753a0d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.254 2 INFO nova.virt.libvirt.driver [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deleting instance files /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e_del#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.255 2 INFO nova.virt.libvirt.driver [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deletion of /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e_del complete#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.279 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f827b8-f5db-492d-8535-f63082ac6a87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472058, 'reachable_time': 20838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320205, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.283 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:54:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.283 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c008a465-7511-4c91-ba46-1e7bd3fd093e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.322 2 INFO nova.compute.manager [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.322 2 DEBUG oslo.service.loopingcall [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.322 2 DEBUG nova.compute.manager [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:54:51 np0005481065 nova_compute[260935]: 2025-10-11 08:54:51.322 2 DEBUG nova.network.neutron [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:54:51 np0005481065 systemd[1]: run-netns-ovnmeta\x2d72464893\x2d0f19\x2d40a9\x2d84d1\x2d392a298e50b9.mount: Deactivated successfully.
Oct 11 04:54:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Oct 11 04:54:52 np0005481065 nova_compute[260935]: 2025-10-11 08:54:52.713 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172877.712025, ceef84cf-9df6-4484-862c-624eab05f1fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:52 np0005481065 nova_compute[260935]: 2025-10-11 08:54:52.713 2 INFO nova.compute.manager [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:54:52 np0005481065 nova_compute[260935]: 2025-10-11 08:54:52.771 2 DEBUG nova.compute.manager [None req-e37e818c-94dc-45dc-9069-a1aba3a8392e - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:52 np0005481065 nova_compute[260935]: 2025-10-11 08:54:52.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:53 np0005481065 nova_compute[260935]: 2025-10-11 08:54:53.583 2 DEBUG nova.compute.manager [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:53 np0005481065 nova_compute[260935]: 2025-10-11 08:54:53.583 2 DEBUG oslo_concurrency.lockutils [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:53 np0005481065 nova_compute[260935]: 2025-10-11 08:54:53.584 2 DEBUG oslo_concurrency.lockutils [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:53 np0005481065 nova_compute[260935]: 2025-10-11 08:54:53.584 2 DEBUG oslo_concurrency.lockutils [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:53 np0005481065 nova_compute[260935]: 2025-10-11 08:54:53.584 2 DEBUG nova.compute.manager [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] No waiting events found dispatching network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:54:53 np0005481065 nova_compute[260935]: 2025-10-11 08:54:53.585 2 WARNING nova.compute.manager [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received unexpected event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 04:54:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 8.0 MiB/s rd, 3.9 MiB/s wr, 490 op/s
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.770 2 DEBUG nova.network.neutron [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.801 2 DEBUG nova.network.neutron [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.806 2 DEBUG nova.network.neutron [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.807 2 INFO nova.compute.manager [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 3.97 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:54:54
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control']
Oct 11 04:54:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.847 2 INFO nova.compute.manager [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 3.70 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.855 2 INFO nova.compute.manager [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 3.53 seconds to deallocate network for instance.#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.864 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.865 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.932 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.941 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:54:54 np0005481065 nova_compute[260935]: 2025-10-11 08:54:54.997 2 DEBUG oslo_concurrency.processutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:54:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534161292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.508 2 DEBUG oslo_concurrency.processutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.516 2 DEBUG nova.compute.provider_tree [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.539 2 DEBUG nova.scheduler.client.report [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.569 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.573 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.622 2 INFO nova.scheduler.client.report [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Deleted allocations for instance 1ef94ed5-fffa-41e9-b72f-4569354392c6#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.707 2 DEBUG nova.compute.manager [req-73482234-6fa9-45e6-aa4e-4ad597d7c942 req-4dce729d-be92-4738-8b5d-4e4a5e1dd5f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-deleted-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.707 2 DEBUG nova.compute.manager [req-73482234-6fa9-45e6-aa4e-4ad597d7c942 req-4dce729d-be92-4738-8b5d-4e4a5e1dd5f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-vif-deleted-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.708 2 DEBUG nova.compute.manager [req-73482234-6fa9-45e6-aa4e-4ad597d7c942 req-4dce729d-be92-4738-8b5d-4e4a5e1dd5f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-vif-deleted-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.710 2 DEBUG oslo_concurrency.processutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.767 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:55 np0005481065 nova_compute[260935]: 2025-10-11 08:54:55.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Oct 11 04:54:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297362032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.196 2 DEBUG oslo_concurrency.processutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.203 2 DEBUG nova.compute.provider_tree [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.217 2 DEBUG nova.scheduler.client.report [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.241 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.243 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.280 2 INFO nova.scheduler.client.report [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Deleted allocations for instance 21a71d10-e13b-47fe-88fd-ec9597f7902e#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.313 2 DEBUG oslo_concurrency.processutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.365 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:54:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666318733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.792 2 DEBUG oslo_concurrency.processutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.802 2 DEBUG nova.compute.provider_tree [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.884 2 DEBUG nova.scheduler.client.report [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.910 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:56 np0005481065 nova_compute[260935]: 2025-10-11 08:54:56.953 2 INFO nova.scheduler.client.report [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Deleted allocations for instance e263661e-e9c2-4a4d-a6e5-5fc8a7353f50#033[00m
Oct 11 04:54:57 np0005481065 nova_compute[260935]: 2025-10-11 08:54:57.041 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:54:57 np0005481065 nova_compute[260935]: 2025-10-11 08:54:57.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:54:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Oct 11 04:54:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:54:58 np0005481065 podman[320272]: 2025-10-11 08:54:58.803782389 +0000 UTC m=+0.088020101 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:54:58 np0005481065 nova_compute[260935]: 2025-10-11 08:54:58.865 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172883.861947, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:54:58 np0005481065 nova_compute[260935]: 2025-10-11 08:54:58.866 2 INFO nova.compute.manager [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:54:58 np0005481065 nova_compute[260935]: 2025-10-11 08:54:58.893 2 DEBUG nova.compute.manager [None req-312d7f9a-6db6-44bf-ae03-5563ce79b866 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:54:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 04:55:00 np0005481065 nova_compute[260935]: 2025-10-11 08:55:00.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:00 np0005481065 nova_compute[260935]: 2025-10-11 08:55:00.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 04:55:02 np0005481065 nova_compute[260935]: 2025-10-11 08:55:02.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:03.514 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:03.515 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 04:55:03 np0005481065 nova_compute[260935]: 2025-10-11 08:55:03.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:03 np0005481065 nova_compute[260935]: 2025-10-11 08:55:03.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:03 np0005481065 nova_compute[260935]: 2025-10-11 08:55:03.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 04:55:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.159 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.160 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.192 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.276 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.276 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.285 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.286 2 INFO nova.compute.claims [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.413 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:55:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:55:04 np0005481065 podman[320313]: 2025-10-11 08:55:04.765211307 +0000 UTC m=+0.073683502 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 04:55:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259497220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.896 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.904 2 DEBUG nova.compute.provider_tree [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.924 2 DEBUG nova.scheduler.client.report [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.951 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.952 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.996 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:55:04 np0005481065 nova_compute[260935]: 2025-10-11 08:55:04.996 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.020 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.039 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.147 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.149 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.150 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Creating image(s)#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.176 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.207 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.235 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.239 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.286 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172890.1957762, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.287 2 INFO nova.compute.manager [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.310 2 DEBUG nova.compute.manager [None req-38bf9143-9f00-4664-a64f-79e8ab629418 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.325 2 DEBUG nova.policy [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f58717ff8108424e87196def751cc1fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06df44dbf05c4a5e8532640419eb19d3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.338 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.339 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.340 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.340 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.374 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.380 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09e8444f-162e-4773-a181-a5b70c7af8dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.564 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172890.5631247, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.566 2 INFO nova.compute.manager [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.599 2 DEBUG nova.compute.manager [None req-f1746f9d-2353-4264-8a19-577d651f5553 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.689 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09e8444f-162e-4773-a181-a5b70c7af8dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.780 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172890.7499397, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.781 2 INFO nova.compute.manager [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.792 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] resizing rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.846 2 DEBUG nova.compute.manager [None req-27b6963c-2e29-40d5-8f5b-4d1e66d299c2 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.929 2 DEBUG nova.objects.instance [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.943 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.944 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Ensure instance console log exists: /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.944 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.945 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:05 np0005481065 nova_compute[260935]: 2025-10-11 08:55:05.945 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail
Oct 11 04:55:06 np0005481065 nova_compute[260935]: 2025-10-11 08:55:06.378 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Successfully created port: ce404d10-5133-400e-acde-02c0abd10470 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.704 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Successfully updated port: ce404d10-5133-400e-acde-02c0abd10470 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.723 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.723 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquired lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.723 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.873 2 DEBUG nova.compute.manager [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-changed-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.874 2 DEBUG nova.compute.manager [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Refreshing instance network info cache due to event network-changed-ce404d10-5133-400e-acde-02c0abd10470. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.874 2 DEBUG oslo_concurrency.lockutils [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:07 np0005481065 nova_compute[260935]: 2025-10-11 08:55:07.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:55:08 np0005481065 nova_compute[260935]: 2025-10-11 08:55:08.029 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:55:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.017 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.033 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Releasing lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.034 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance network_info: |[{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.035 2 DEBUG nova.network.neutron [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Refreshing network info cache for port ce404d10-5133-400e-acde-02c0abd10470 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.040 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start _get_guest_xml network_info=[{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.048 2 WARNING nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.062 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.063 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.068 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.069 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.070 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.070 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.071 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.072 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.072 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.073 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.073 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.074 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.074 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.075 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.075 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.076 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.081 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107231039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.590 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.623 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.628 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:09 np0005481065 nova_compute[260935]: 2025-10-11 08:55:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:55:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724282179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.120 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.123 2 DEBUG nova.virt.libvirt.vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1230911333',display_name='tempest-ServerMetadataNegativeTestJSON-server-1230911333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1230911333',id=57,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06df44dbf05c4a5e8532640419eb19d3',ramdisk_id='',reservation_id='r-ac9lff0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-968790743',owner_user_name='tempest-ServerMetadataNegativeTestJSON-968790743-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:05Z,user_data=None,user_id='f58717ff8108424e87196def751cc1fe',uuid=09e8444f-162e-4773-a181-a5b70c7af8dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.124 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converting VIF {"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.125 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.127 2 DEBUG nova.objects.instance [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.149 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <uuid>09e8444f-162e-4773-a181-a5b70c7af8dd</uuid>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <name>instance-00000039</name>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1230911333</nova:name>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:09</nova:creationTime>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:user uuid="f58717ff8108424e87196def751cc1fe">tempest-ServerMetadataNegativeTestJSON-968790743-project-member</nova:user>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:project uuid="06df44dbf05c4a5e8532640419eb19d3">tempest-ServerMetadataNegativeTestJSON-968790743</nova:project>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <nova:port uuid="ce404d10-5133-400e-acde-02c0abd10470">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <entry name="serial">09e8444f-162e-4773-a181-a5b70c7af8dd</entry>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <entry name="uuid">09e8444f-162e-4773-a181-a5b70c7af8dd</entry>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/09e8444f-162e-4773-a181-a5b70c7af8dd_disk">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:53:75:e2"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <target dev="tapce404d10-51"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/console.log" append="off"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:55:10 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:55:10 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:55:10 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:55:10 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.152 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Preparing to wait for external event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.152 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.153 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.153 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.154 2 DEBUG nova.virt.libvirt.vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1230911333',display_name='tempest-ServerMetadataNegativeTestJSON-server-1230911333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1230911333',id=57,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06df44dbf05c4a5e8532640419eb19d3',ramdisk_id='',reservation_id='r-ac9lff0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-968790743',owner_user_name='tempest-ServerMetadataNegativeTestJSON-968790743-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:05Z,user_data=None,user_id='f58717ff8108424e87196def751cc1fe',uuid=09e8444f-162e-4773-a181-a5b70c7af8dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.155 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converting VIF {"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.156 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.157 2 DEBUG os_vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce404d10-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce404d10-51, col_values=(('external_ids', {'iface-id': 'ce404d10-5133-400e-acde-02c0abd10470', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:75:e2', 'vm-uuid': '09e8444f-162e-4773-a181-a5b70c7af8dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:10 np0005481065 NetworkManager[44960]: <info>  [1760172910.1684] manager: (tapce404d10-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.178 2 INFO os_vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51')#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.250 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.250 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.251 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] No VIF found with MAC fa:16:3e:53:75:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.251 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Using config drive#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.297 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:10 np0005481065 podman[320567]: 2025-10-11 08:55:10.324211289 +0000 UTC m=+0.098735367 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 04:55:10 np0005481065 podman[320569]: 2025-10-11 08:55:10.370219261 +0000 UTC m=+0.138222223 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 04:55:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:10.518 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.621 2 DEBUG nova.network.neutron [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updated VIF entry in instance network info cache for port ce404d10-5133-400e-acde-02c0abd10470. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.622 2 DEBUG nova.network.neutron [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.646 2 DEBUG oslo_concurrency.lockutils [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.811 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Creating config drive at /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.820 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnfp1fi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:10 np0005481065 nova_compute[260935]: 2025-10-11 08:55:10.990 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnfp1fi4" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.030 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.036 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.265 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.266 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deleting local config drive /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config because it was imported into RBD.#033[00m
Oct 11 04:55:11 np0005481065 kernel: tapce404d10-51: entered promiscuous mode
Oct 11 04:55:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:11Z|00498|binding|INFO|Claiming lport ce404d10-5133-400e-acde-02c0abd10470 for this chassis.
Oct 11 04:55:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:11Z|00499|binding|INFO|ce404d10-5133-400e-acde-02c0abd10470: Claiming fa:16:3e:53:75:e2 10.100.0.9
Oct 11 04:55:11 np0005481065 NetworkManager[44960]: <info>  [1760172911.3412] manager: (tapce404d10-51): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.356 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:75:e2 10.100.0.9'], port_security=['fa:16:3e:53:75:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '09e8444f-162e-4773-a181-a5b70c7af8dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06df44dbf05c4a5e8532640419eb19d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef6334f9-e759-487e-a426-75ba5c313554', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8280cc5-aa3a-4200-a683-dd07c09ecd59, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ce404d10-5133-400e-acde-02c0abd10470) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.360 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ce404d10-5133-400e-acde-02c0abd10470 in datapath 651d0a3a-4912-47c8-a5af-eb2f7badf8c4 bound to our chassis#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.365 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 651d0a3a-4912-47c8-a5af-eb2f7badf8c4#033[00m
Oct 11 04:55:11 np0005481065 systemd-udevd[320679]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:11 np0005481065 NetworkManager[44960]: <info>  [1760172911.3864] device (tapce404d10-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:55:11 np0005481065 NetworkManager[44960]: <info>  [1760172911.3895] device (tapce404d10-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.390 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c87e034-6244-4a21-8d62-66d44dd7b08d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.391 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap651d0a3a-41 in ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.394 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap651d0a3a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20290bfe-5732-45f9-b0c7-bfebec62b51f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.396 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b93e174a-00f1-4ada-bf0e-46d65b675e22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 systemd-machined[215705]: New machine qemu-64-instance-00000039.
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.416 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f01a96fd-9f90-4ea5-85fd-741802419a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 systemd[1]: Started Virtual Machine qemu-64-instance-00000039.
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53c15a9e-19d4-484b-b3b1-5e21a09ff3a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:11Z|00500|binding|INFO|Setting lport ce404d10-5133-400e-acde-02c0abd10470 ovn-installed in OVS
Oct 11 04:55:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:11Z|00501|binding|INFO|Setting lport ce404d10-5133-400e-acde-02c0abd10470 up in Southbound
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.508 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d1149bef-d96e-4b9b-bd6c-38f5ec9098b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.516 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d347470-3079-4d39-b807-95be9b5507a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 NetworkManager[44960]: <info>  [1760172911.5174] manager: (tap651d0a3a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.578 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[810890f0-aa2e-4781-9e22-e6014ac8fe6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.583 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b085137a-2f67-4e5e-9b3c-ee917f480af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 NetworkManager[44960]: <info>  [1760172911.6208] device (tap651d0a3a-40): carrier: link connected
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.629 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[82d8a116-61ea-4874-ac44-abbe3d1ec0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12138814-ee80-47cc-91a7-fe435a166a6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap651d0a3a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:f8:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475632, 'reachable_time': 22611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320742, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.688 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e92c4993-245f-485a-a3ce-4ea20f487159]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:f823'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475632, 'tstamp': 475632}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320759, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.721 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7951ed6d-b821-4338-8cd9-85d0cab29957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap651d0a3a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:f8:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475632, 'reachable_time': 22611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320765, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.728 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.729 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.730 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.775 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b00f100f-7f9d-4208-bbd0-c00c51d0841d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc27c0d8-eea8-4af9-b5b6-966fdcca04a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.883 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap651d0a3a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.886 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.887 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap651d0a3a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:11 np0005481065 kernel: tap651d0a3a-40: entered promiscuous mode
Oct 11 04:55:11 np0005481065 NetworkManager[44960]: <info>  [1760172911.8917] manager: (tap651d0a3a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.899 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap651d0a3a-40, col_values=(('external_ids', {'iface-id': '7ae49d05-3fe4-433c-8b86-08a1da3da34c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:11 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:11Z|00502|binding|INFO|Releasing lport 7ae49d05-3fe4-433c-8b86-08a1da3da34c from this chassis (sb_readonly=0)
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.904 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.905 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7dec6af5-e8d4-45e5-ba3e-3c15030ffba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.906 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-651d0a3a-4912-47c8-a5af-eb2f7badf8c4
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.pid.haproxy
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 651d0a3a-4912-47c8-a5af-eb2f7badf8c4
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:55:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.909 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'env', 'PROCESS_TAG=haproxy-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:55:11 np0005481065 nova_compute[260935]: 2025-10-11 08:55:11.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 04:55:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190582371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.308 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:12 np0005481065 podman[320939]: 2025-10-11 08:55:12.310093774 +0000 UTC m=+0.054408082 container create e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 04:55:12 np0005481065 systemd[1]: Started libpod-conmon-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566.scope.
Oct 11 04:55:12 np0005481065 podman[320939]: 2025-10-11 08:55:12.281767697 +0000 UTC m=+0.026082025 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:55:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d0672aa8e6c963225b544aba8752ab8163ac36884a8a150664ddc3431c4373/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:12 np0005481065 podman[320939]: 2025-10-11 08:55:12.405633129 +0000 UTC m=+0.149947457 container init e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:55:12 np0005481065 podman[320939]: 2025-10-11 08:55:12.414549323 +0000 UTC m=+0.158863631 container start e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.415 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.416 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 04:55:12 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : New worker (321003) forked
Oct 11 04:55:12 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : Loading success.
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.521 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172912.519378, 09e8444f-162e-4773-a181-a5b70c7af8dd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.522 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Started (Lifecycle Event)#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.550 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.556 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172912.5221946, 09e8444f-162e-4773-a181-a5b70c7af8dd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.557 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.581 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.585 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:55:12 np0005481065 podman[321013]: 2025-10-11 08:55:12.598879179 +0000 UTC m=+0.103328837 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.607 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.670 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.674 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4073MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.674 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.675 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:12 np0005481065 podman[321013]: 2025-10-11 08:55:12.718451139 +0000 UTC m=+0.222900747 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 09e8444f-162e-4773-a181-a5b70c7af8dd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.752 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.753 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.786 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:12 np0005481065 nova_compute[260935]: 2025-10-11 08:55:12.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260994994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.251 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.266 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.283 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.307 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.308 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.490 2 DEBUG nova.compute.manager [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.490 2 DEBUG oslo_concurrency.lockutils [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.491 2 DEBUG oslo_concurrency.lockutils [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.491 2 DEBUG oslo_concurrency.lockutils [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.492 2 DEBUG nova.compute.manager [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Processing event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.493 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.498 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.501 2 INFO nova.virt.libvirt.driver [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance spawned successfully.
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.502 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.504 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172913.5043106, 09e8444f-162e-4773-a181-a5b70c7af8dd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Resumed (Lifecycle Event)
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.520 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.523 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.524 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.525 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.525 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.526 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.527 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.534 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.558 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.578 2 INFO nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 8.43 seconds to spawn the instance on the hypervisor.
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.579 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:55:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.631 2 INFO nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 9.38 seconds to build instance.
Oct 11 04:55:13 np0005481065 nova_compute[260935]: 2025-10-11 08:55:13.651 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.304 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.336 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.337 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.337 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.520 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.521 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.521 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 04:55:14 np0005481065 nova_compute[260935]: 2025-10-11 08:55:14.522 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:14 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 957767da-43fb-4087-a7d3-469c0f5b2ec3 does not exist
Oct 11 04:55:14 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8181938e-4418-4b23-af7a-2e0a0ac209c5 does not exist
Oct 11 04:55:14 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1327d814-a308-47b8-8fa5-efb83c5dc90d does not exist
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 04:55:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:15.191 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:15.192 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:15 np0005481065 podman[321466]: 2025-10-11 08:55:15.483305398 +0000 UTC m=+0.070113111 container create aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 04:55:15 np0005481065 systemd[1]: Started libpod-conmon-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope.
Oct 11 04:55:15 np0005481065 podman[321466]: 2025-10-11 08:55:15.455095934 +0000 UTC m=+0.041903656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:15 np0005481065 podman[321466]: 2025-10-11 08:55:15.597088893 +0000 UTC m=+0.183896665 container init aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 04:55:15 np0005481065 podman[321466]: 2025-10-11 08:55:15.606156272 +0000 UTC m=+0.192963994 container start aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:55:15 np0005481065 podman[321466]: 2025-10-11 08:55:15.610114704 +0000 UTC m=+0.196922386 container attach aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 04:55:15 np0005481065 systemd[1]: libpod-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope: Deactivated successfully.
Oct 11 04:55:15 np0005481065 condescending_benz[321482]: 167 167
Oct 11 04:55:15 np0005481065 conmon[321482]: conmon aff24f5373a8b6f827d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope/container/memory.events
Oct 11 04:55:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 04:55:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:15 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.679 2 DEBUG nova.compute.manager [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.680 2 DEBUG oslo_concurrency.lockutils [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.682 2 DEBUG oslo_concurrency.lockutils [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.683 2 DEBUG oslo_concurrency.lockutils [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.683 2 DEBUG nova.compute.manager [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] No waiting events found dispatching network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:55:15 np0005481065 nova_compute[260935]: 2025-10-11 08:55:15.684 2 WARNING nova.compute.manager [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received unexpected event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 for instance with vm_state active and task_state None.
Oct 11 04:55:15 np0005481065 podman[321487]: 2025-10-11 08:55:15.709621672 +0000 UTC m=+0.055467593 container died aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 04:55:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a294410b4aa76ff7f51aabf70c79f7d8b3f8fe965473c1b4f6f13d76dfbe8e23-merged.mount: Deactivated successfully.
Oct 11 04:55:15 np0005481065 podman[321487]: 2025-10-11 08:55:15.758435694 +0000 UTC m=+0.104281515 container remove aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:55:15 np0005481065 systemd[1]: libpod-conmon-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope: Deactivated successfully.
Oct 11 04:55:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 04:55:16 np0005481065 podman[321508]: 2025-10-11 08:55:16.010282494 +0000 UTC m=+0.054871416 container create 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 04:55:16 np0005481065 systemd[1]: Started libpod-conmon-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope.
Oct 11 04:55:16 np0005481065 podman[321508]: 2025-10-11 08:55:15.993133295 +0000 UTC m=+0.037722207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:16 np0005481065 podman[321508]: 2025-10-11 08:55:16.123498902 +0000 UTC m=+0.168087894 container init 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 04:55:16 np0005481065 podman[321508]: 2025-10-11 08:55:16.138303735 +0000 UTC m=+0.182892657 container start 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:55:16 np0005481065 podman[321508]: 2025-10-11 08:55:16.143881304 +0000 UTC m=+0.188470226 container attach 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 04:55:17 np0005481065 nova_compute[260935]: 2025-10-11 08:55:17.149 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:55:17 np0005481065 nova_compute[260935]: 2025-10-11 08:55:17.196 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:55:17 np0005481065 nova_compute[260935]: 2025-10-11 08:55:17.197 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 04:55:17 np0005481065 admiring_khorana[321525]: --> passed data devices: 0 physical, 3 LVM
Oct 11 04:55:17 np0005481065 admiring_khorana[321525]: --> relative data size: 1.0
Oct 11 04:55:17 np0005481065 admiring_khorana[321525]: --> All data devices are unavailable
Oct 11 04:55:17 np0005481065 systemd[1]: libpod-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope: Deactivated successfully.
Oct 11 04:55:17 np0005481065 systemd[1]: libpod-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope: Consumed 1.186s CPU time.
Oct 11 04:55:17 np0005481065 podman[321508]: 2025-10-11 08:55:17.389797411 +0000 UTC m=+1.434386343 container died 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 04:55:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20-merged.mount: Deactivated successfully.
Oct 11 04:55:17 np0005481065 podman[321508]: 2025-10-11 08:55:17.461073773 +0000 UTC m=+1.505662665 container remove 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 04:55:17 np0005481065 systemd[1]: libpod-conmon-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope: Deactivated successfully.
Oct 11 04:55:17 np0005481065 nova_compute[260935]: 2025-10-11 08:55:17.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 04:55:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.360675505 +0000 UTC m=+0.050184622 container create 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 04:55:18 np0005481065 systemd[1]: Started libpod-conmon-6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18.scope.
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.342398674 +0000 UTC m=+0.031907811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.457117985 +0000 UTC m=+0.146627172 container init 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.469642272 +0000 UTC m=+0.159151419 container start 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.473386089 +0000 UTC m=+0.162895246 container attach 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 04:55:18 np0005481065 gifted_wiles[321724]: 167 167
Oct 11 04:55:18 np0005481065 systemd[1]: libpod-6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18.scope: Deactivated successfully.
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.476107517 +0000 UTC m=+0.165616664 container died 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 04:55:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a55af9e3b2e8dcdb77721c8e9e6ba8b7a24f9b00f86389e5d9786f400f762937-merged.mount: Deactivated successfully.
Oct 11 04:55:18 np0005481065 podman[321707]: 2025-10-11 08:55:18.529285353 +0000 UTC m=+0.218794500 container remove 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:18 np0005481065 systemd[1]: libpod-conmon-6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18.scope: Deactivated successfully.
Oct 11 04:55:18 np0005481065 podman[321748]: 2025-10-11 08:55:18.78309477 +0000 UTC m=+0.060092584 container create 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 04:55:18 np0005481065 systemd[1]: Started libpod-conmon-9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab.scope.
Oct 11 04:55:18 np0005481065 podman[321748]: 2025-10-11 08:55:18.75818823 +0000 UTC m=+0.035186054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:18 np0005481065 podman[321748]: 2025-10-11 08:55:18.897630376 +0000 UTC m=+0.174628240 container init 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 04:55:18 np0005481065 nova_compute[260935]: 2025-10-11 08:55:18.908 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:18 np0005481065 nova_compute[260935]: 2025-10-11 08:55:18.911 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:18 np0005481065 podman[321748]: 2025-10-11 08:55:18.918252824 +0000 UTC m=+0.195250638 container start 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 04:55:18 np0005481065 podman[321748]: 2025-10-11 08:55:18.922376402 +0000 UTC m=+0.199374256 container attach 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 04:55:18 np0005481065 nova_compute[260935]: 2025-10-11 08:55:18.938 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.003 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.003 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.004 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.005 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.005 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.007 2 INFO nova.compute.manager [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Terminating instance#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.009 2 DEBUG nova.compute.manager [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.027 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.029 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.042 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.045 2 INFO nova.compute.claims [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:55:19 np0005481065 kernel: tapce404d10-51 (unregistering): left promiscuous mode
Oct 11 04:55:19 np0005481065 NetworkManager[44960]: <info>  [1760172919.0808] device (tapce404d10-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:55:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:19Z|00503|binding|INFO|Releasing lport ce404d10-5133-400e-acde-02c0abd10470 from this chassis (sb_readonly=0)
Oct 11 04:55:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:19Z|00504|binding|INFO|Setting lport ce404d10-5133-400e-acde-02c0abd10470 down in Southbound
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:19Z|00505|binding|INFO|Removing iface tapce404d10-51 ovn-installed in OVS
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.109 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:75:e2 10.100.0.9'], port_security=['fa:16:3e:53:75:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '09e8444f-162e-4773-a181-a5b70c7af8dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06df44dbf05c4a5e8532640419eb19d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef6334f9-e759-487e-a426-75ba5c313554', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8280cc5-aa3a-4200-a683-dd07c09ecd59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ce404d10-5133-400e-acde-02c0abd10470) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.113 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ce404d10-5133-400e-acde-02c0abd10470 in datapath 651d0a3a-4912-47c8-a5af-eb2f7badf8c4 unbound from our chassis#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.121 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 651d0a3a-4912-47c8-a5af-eb2f7badf8c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.122 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[831d57a7-0a7c-4034-8735-14337a4ed0a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.123 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 namespace which is not needed anymore#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 11 04:55:19 np0005481065 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Consumed 6.610s CPU time.
Oct 11 04:55:19 np0005481065 systemd-machined[215705]: Machine qemu-64-instance-00000039 terminated.
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.232 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.285 2 INFO nova.virt.libvirt.driver [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance destroyed successfully.#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.287 2 DEBUG nova.objects.instance [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lazy-loading 'resources' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.303 2 DEBUG nova.virt.libvirt.vif [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1230911333',display_name='tempest-ServerMetadataNegativeTestJSON-server-1230911333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1230911333',id=57,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06df44dbf05c4a5e8532640419eb19d3',ramdisk_id='',reservation_id='r-ac9lff0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-968790743',owner_user_name='tempest-ServerMetadataNegativeTestJSON-968790743-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:13Z,user_data=None,user_id='f58717ff8108424e87196def751cc1fe',uuid=09e8444f-162e-4773-a181-a5b70c7af8dd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.309 2 DEBUG nova.network.os_vif_util [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converting VIF {"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.310 2 DEBUG nova.network.os_vif_util [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.311 2 DEBUG os_vif [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce404d10-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.321 2 INFO os_vif [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51')#033[00m
Oct 11 04:55:19 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : haproxy version is 2.8.14-c23fe91
Oct 11 04:55:19 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : path to executable is /usr/sbin/haproxy
Oct 11 04:55:19 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [WARNING]  (320992) : Exiting Master process...
Oct 11 04:55:19 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [ALERT]    (320992) : Current worker (321003) exited with code 143 (Terminated)
Oct 11 04:55:19 np0005481065 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [WARNING]  (320992) : All workers exited. Exiting... (0)
Oct 11 04:55:19 np0005481065 systemd[1]: libpod-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566.scope: Deactivated successfully.
Oct 11 04:55:19 np0005481065 podman[321796]: 2025-10-11 08:55:19.340129334 +0000 UTC m=+0.081038852 container died e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 04:55:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b5d0672aa8e6c963225b544aba8752ab8163ac36884a8a150664ddc3431c4373-merged.mount: Deactivated successfully.
Oct 11 04:55:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566-userdata-shm.mount: Deactivated successfully.
Oct 11 04:55:19 np0005481065 podman[321796]: 2025-10-11 08:55:19.41821768 +0000 UTC m=+0.159127188 container cleanup e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:55:19 np0005481065 systemd[1]: libpod-conmon-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566.scope: Deactivated successfully.
Oct 11 04:55:19 np0005481065 podman[321871]: 2025-10-11 08:55:19.504259693 +0000 UTC m=+0.050221363 container remove e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.517 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f1164a-2c26-4888-9757-9a10cfb43e0a]: (4, ('Sat Oct 11 08:55:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 (e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566)\ne681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566\nSat Oct 11 08:55:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 (e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566)\ne681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.522 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccaf1e4-5ec0-4da3-a371-77bef494f9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.525 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap651d0a3a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:19 np0005481065 kernel: tap651d0a3a-40: left promiscuous mode
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee0a282-dfd4-496a-9d04-c928cd67c9a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.597 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb21848-1f96-4440-be65-16c290ca22fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.598 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed757ba-1a85-4a6f-b6f5-fad60be780b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.620 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4189895e-1a92-428b-89c0-0f4992f998a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475620, 'reachable_time': 29139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321886, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.623 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:55:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.623 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3f371a86-ad70-44a5-858b-baff64646287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:19 np0005481065 systemd[1]: run-netns-ovnmeta\x2d651d0a3a\x2d4912\x2d47c8\x2da5af\x2deb2f7badf8c4.mount: Deactivated successfully.
Oct 11 04:55:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942282138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.750 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.761 2 DEBUG nova.compute.provider_tree [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.789 2 DEBUG nova.scheduler.client.report [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.822 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.823 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]: {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:    "0": [
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:        {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "devices": [
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "/dev/loop3"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            ],
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_name": "ceph_lv0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_size": "21470642176",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "name": "ceph_lv0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "tags": {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cluster_name": "ceph",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.crush_device_class": "",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.encrypted": "0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osd_id": "0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.type": "block",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.vdo": "0"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            },
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "type": "block",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "vg_name": "ceph_vg0"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:        }
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:    ],
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:    "1": [
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:        {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "devices": [
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "/dev/loop4"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            ],
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_name": "ceph_lv1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_size": "21470642176",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "name": "ceph_lv1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "tags": {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cluster_name": "ceph",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.crush_device_class": "",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.encrypted": "0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osd_id": "1",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.type": "block",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.vdo": "0"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            },
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "type": "block",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "vg_name": "ceph_vg1"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:        }
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:    ],
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:    "2": [
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:        {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "devices": [
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "/dev/loop5"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            ],
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_name": "ceph_lv2",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_size": "21470642176",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "name": "ceph_lv2",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "tags": {
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cephx_lockbox_secret": "",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.cluster_name": "ceph",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.crush_device_class": "",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.encrypted": "0",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osd_id": "2",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.type": "block",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:                "ceph.vdo": "0"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            },
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "type": "block",
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:            "vg_name": "ceph_vg2"
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:        }
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]:    ]
Oct 11 04:55:19 np0005481065 admiring_gauss[321765]: }
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.867 2 INFO nova.virt.libvirt.driver [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deleting instance files /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd_del#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.869 2 INFO nova.virt.libvirt.driver [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deletion of /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd_del complete#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.879 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.879 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:55:19 np0005481065 systemd[1]: libpod-9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab.scope: Deactivated successfully.
Oct 11 04:55:19 np0005481065 podman[321748]: 2025-10-11 08:55:19.885792912 +0000 UTC m=+1.162790766 container died 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.900 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:55:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe-merged.mount: Deactivated successfully.
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.930 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.951 2 INFO nova.compute.manager [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.952 2 DEBUG oslo.service.loopingcall [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.953 2 DEBUG nova.compute.manager [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:55:19 np0005481065 nova_compute[260935]: 2025-10-11 08:55:19.953 2 DEBUG nova.network.neutron [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:55:19 np0005481065 podman[321748]: 2025-10-11 08:55:19.960110731 +0000 UTC m=+1.237108505 container remove 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 04:55:19 np0005481065 systemd[1]: libpod-conmon-9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab.scope: Deactivated successfully.
Oct 11 04:55:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.067 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.071 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.073 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating image(s)
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.108 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.144 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.176 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.184 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.273 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.274 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.274 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.275 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.304 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.311 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98d8ebd6-0917-49cf-8efc-a245486424bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.441 2 DEBUG nova.policy [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d5f5f07c57c467286168be7c097bf26', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:55:20 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.684 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98d8ebd6-0917-49cf-8efc-a245486424bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.780 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] resizing rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:55:20 np0005481065 podman[322174]: 2025-10-11 08:55:20.860170396 +0000 UTC m=+0.072290142 container create f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.907 2 DEBUG nova.objects.instance [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:55:20 np0005481065 systemd[1]: Started libpod-conmon-f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724.scope.
Oct 11 04:55:20 np0005481065 podman[322174]: 2025-10-11 08:55:20.828699139 +0000 UTC m=+0.040818925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.926 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.927 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Ensure instance console log exists: /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.928 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.929 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:20 np0005481065 nova_compute[260935]: 2025-10-11 08:55:20.929 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:20 np0005481065 podman[322174]: 2025-10-11 08:55:20.986331184 +0000 UTC m=+0.198450980 container init f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:21 np0005481065 podman[322174]: 2025-10-11 08:55:21.004504242 +0000 UTC m=+0.216623988 container start f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:21 np0005481065 podman[322174]: 2025-10-11 08:55:21.009462953 +0000 UTC m=+0.221582699 container attach f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:21 np0005481065 tender_hopper[322227]: 167 167
Oct 11 04:55:21 np0005481065 systemd[1]: libpod-f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724.scope: Deactivated successfully.
Oct 11 04:55:21 np0005481065 podman[322174]: 2025-10-11 08:55:21.013516299 +0000 UTC m=+0.225636045 container died f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 04:55:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d838f7d3ce2549dcfa9d5479a3c061d63b3f5529b893b85143e15eac8890ef53-merged.mount: Deactivated successfully.
Oct 11 04:55:21 np0005481065 podman[322174]: 2025-10-11 08:55:21.070313339 +0000 UTC m=+0.282433085 container remove f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 04:55:21 np0005481065 systemd[1]: libpod-conmon-f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724.scope: Deactivated successfully.
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.148 2 DEBUG nova.network.neutron [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.174 2 INFO nova.compute.manager [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 1.22 seconds to deallocate network for instance.
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.223 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.223 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.230 2 DEBUG nova.compute.manager [req-3b1dfc25-dead-412e-9808-086c87c4b1ce req-04f06ac5-679f-4108-8993-e3106cb05c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-vif-deleted-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.292 2 DEBUG oslo_concurrency.processutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:21 np0005481065 podman[322252]: 2025-10-11 08:55:21.311678471 +0000 UTC m=+0.055699939 container create 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 04:55:21 np0005481065 podman[322252]: 2025-10-11 08:55:21.283281331 +0000 UTC m=+0.027302849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 04:55:21 np0005481065 systemd[1]: Started libpod-conmon-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope.
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.383 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Successfully created port: 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:55:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:21 np0005481065 podman[322252]: 2025-10-11 08:55:21.439801784 +0000 UTC m=+0.183823302 container init 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:21 np0005481065 podman[322252]: 2025-10-11 08:55:21.45191619 +0000 UTC m=+0.195937668 container start 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 04:55:21 np0005481065 podman[322252]: 2025-10-11 08:55:21.456014317 +0000 UTC m=+0.200035835 container attach 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:55:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/549302481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.817 2 DEBUG oslo_concurrency.processutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.827 2 DEBUG nova.compute.provider_tree [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.846 2 DEBUG nova.scheduler.client.report [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.881 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.924 2 INFO nova.scheduler.client.report [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Deleted allocations for instance 09e8444f-162e-4773-a181-a5b70c7af8dd
Oct 11 04:55:21 np0005481065 nova_compute[260935]: 2025-10-11 08:55:21.993 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 04:55:22 np0005481065 nova_compute[260935]: 2025-10-11 08:55:22.179 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Successfully updated port: 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:55:22 np0005481065 nova_compute[260935]: 2025-10-11 08:55:22.195 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:22 np0005481065 nova_compute[260935]: 2025-10-11 08:55:22.196 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:55:22 np0005481065 nova_compute[260935]: 2025-10-11 08:55:22.196 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:55:22 np0005481065 nova_compute[260935]: 2025-10-11 08:55:22.371 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]: {
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "osd_id": 2,
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "type": "bluestore"
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:    },
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "osd_id": 0,
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "type": "bluestore"
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:    },
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "osd_id": 1,
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:        "type": "bluestore"
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]:    }
Oct 11 04:55:22 np0005481065 hardcore_payne[322269]: }
Oct 11 04:55:22 np0005481065 systemd[1]: libpod-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope: Deactivated successfully.
Oct 11 04:55:22 np0005481065 systemd[1]: libpod-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope: Consumed 1.209s CPU time.
Oct 11 04:55:22 np0005481065 podman[322323]: 2025-10-11 08:55:22.730128728 +0000 UTC m=+0.040324791 container died 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 04:55:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac-merged.mount: Deactivated successfully.
Oct 11 04:55:22 np0005481065 podman[322323]: 2025-10-11 08:55:22.800770752 +0000 UTC m=+0.110966745 container remove 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 04:55:22 np0005481065 systemd[1]: libpod-conmon-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope: Deactivated successfully.
Oct 11 04:55:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 04:55:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 04:55:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a6e8a64a-8bba-4427-a276-c1d2dbb0d615 does not exist
Oct 11 04:55:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c18f2582-d38e-4088-aa51-3162d587c647 does not exist
Oct 11 04:55:22 np0005481065 nova_compute[260935]: 2025-10-11 08:55:22.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:23 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.122 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.147 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.147 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance network_info: |[{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.151 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start _get_guest_xml network_info=[{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.159 2 WARNING nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.167 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.169 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.173 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.173 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.174 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.174 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.175 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.175 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.175 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.176 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.176 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.176 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.177 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.177 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.177 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.178 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.183 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.512 2 DEBUG nova.compute.manager [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.513 2 DEBUG nova.compute.manager [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing instance network info cache due to event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.513 2 DEBUG oslo_concurrency.lockutils [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.514 2 DEBUG oslo_concurrency.lockutils [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.514 2 DEBUG nova.network.neutron [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791638788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.634 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.668 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:23 np0005481065 nova_compute[260935]: 2025-10-11 08:55:23.673 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 88 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 04:55:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2462706206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.144 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.147 2 DEBUG nova.virt.libvirt.vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeZ9ALPs3Oq9pCizSQrA3R5ihPZilMrzZLelKqG/iqyw53khzMkIgHWfcYBLUTF7AIKY9B5CPoZLdbz5r+wTvSZysVCKTRxhaIofirBhYLqgm1OKLLxBwrJrQAKSAriNg==',key_name='tempest-keypair-1264143465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.148 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.150 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.153 2 DEBUG nova.objects.instance [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.170 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <uuid>98d8ebd6-0917-49cf-8efc-a245486424bc</uuid>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <name>instance-0000003a</name>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerActionsTestOtherB-server-400269969</nova:name>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:23</nova:creationTime>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:user uuid="8d5f5f07c57c467286168be7c097bf26">tempest-ServerActionsTestOtherB-1445504716-project-member</nova:user>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:project uuid="73adfb8cf0c64359b1f33a9643148ef4">tempest-ServerActionsTestOtherB-1445504716</nova:project>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <nova:port uuid="10e01bbb-0d2c-4f76-8f81-c90e20d3e54c">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <entry name="serial">98d8ebd6-0917-49cf-8efc-a245486424bc</entry>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <entry name="uuid">98d8ebd6-0917-49cf-8efc-a245486424bc</entry>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:3f:0f:36"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <target dev="tap10e01bbb-0d"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/console.log" append="off"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:55:24 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:55:24 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:55:24 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:55:24 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.172 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Preparing to wait for external event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.173 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.173 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.173 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.174 2 DEBUG nova.virt.libvirt.vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeZ9ALPs3Oq9pCizSQrA3R5ihPZilMrzZLelKqG/iqyw53khzMkIgHWfcYBLUTF7AIKY9B5CPoZLdbz5r+wTvSZysVCKTRxhaIofirBhYLqgm1OKLLxBwrJrQAKSAriNg==',key_name='tempest-keypair-1264143465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.174 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.175 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.175 2 DEBUG os_vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10e01bbb-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10e01bbb-0d, col_values=(('external_ids', {'iface-id': '10e01bbb-0d2c-4f76-8f81-c90e20d3e54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:0f:36', 'vm-uuid': '98d8ebd6-0917-49cf-8efc-a245486424bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:55:24 np0005481065 NetworkManager[44960]: <info>  [1760172924.1862] manager: (tap10e01bbb-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.193 2 INFO os_vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d')#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.272 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.273 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.273 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No VIF found with MAC fa:16:3e:3f:0f:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.273 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Using config drive#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.298 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.749 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating config drive at /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.760 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xi1v3yj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.917 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xi1v3yj" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.956 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:24 np0005481065 nova_compute[260935]: 2025-10-11 08:55:24.962 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.087 2 DEBUG nova.network.neutron [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated VIF entry in instance network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.088 2 DEBUG nova.network.neutron [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.118 2 DEBUG oslo_concurrency.lockutils [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.187 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.189 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deleting local config drive /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config because it was imported into RBD.#033[00m
Oct 11 04:55:25 np0005481065 kernel: tap10e01bbb-0d: entered promiscuous mode
Oct 11 04:55:25 np0005481065 NetworkManager[44960]: <info>  [1760172925.2933] manager: (tap10e01bbb-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 11 04:55:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:25Z|00506|binding|INFO|Claiming lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for this chassis.
Oct 11 04:55:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:25Z|00507|binding|INFO|10e01bbb-0d2c-4f76-8f81-c90e20d3e54c: Claiming fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.363 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0f:36 10.100.0.9'], port_security=['fa:16:3e:3f:0f:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98d8ebd6-0917-49cf-8efc-a245486424bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06a0521a-9ff7-49ed-a194-a12a4a8fd551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.365 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d bound to our chassis#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.368 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d#033[00m
Oct 11 04:55:25 np0005481065 systemd-udevd[322524]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.390 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa743280-5c03-4037-9e48-970349400840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.391 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap056c6769-b1 in ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.395 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap056c6769-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1234da06-e75d-4a21-8196-a7452ee3309f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 systemd-machined[215705]: New machine qemu-65-instance-0000003a.
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.396 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f69d9b88-7881-4c26-99de-a32fa150760f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 NetworkManager[44960]: <info>  [1760172925.4030] device (tap10e01bbb-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:55:25 np0005481065 NetworkManager[44960]: <info>  [1760172925.4046] device (tap10e01bbb-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.414 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[de170643-e281-4680-89c4-e87fe18c5c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 systemd[1]: Started Virtual Machine qemu-65-instance-0000003a.
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.451 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c83be95-1938-4d05-a6fa-06da991a4f78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:25Z|00508|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c ovn-installed in OVS
Oct 11 04:55:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:25Z|00509|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c up in Southbound
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.503 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3d33cb-15eb-433b-812a-0a8581bd18d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 systemd-udevd[322529]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.511 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb555a12-9cb6-4541-9f0e-61380af890f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 NetworkManager[44960]: <info>  [1760172925.5126] manager: (tap056c6769-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.531 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.532 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.551 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.583 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5026b6-e66a-4e93-96ae-f6cac41c20d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.588 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[67cf0134-05ba-40d0-b2f6-8f69b405ff6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.622 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.623 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:25 np0005481065 NetworkManager[44960]: <info>  [1760172925.6274] device (tap056c6769-b0): carrier: link connected
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.635 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.636 2 INFO nova.compute.claims [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.639 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6df846-f1fb-4151-99e8-7188489c9379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.667 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca87f82-cc34-4d3d-917c-e21859ba985d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322558, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.698 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37fd57a9-4e63-4fc4-9759-6232f8475aff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:cc02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477032, 'tstamp': 477032}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322559, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.733 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf514bd-b7ba-49d7-bba0-dec9fbad58a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322560, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.755 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.793 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[268e71da-409c-49e7-8c5c-d3b6b00be060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc807e98-70f3-49a5-897a-59a4c3e5961f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.893 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.893 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:25 np0005481065 NetworkManager[44960]: <info>  [1760172925.8980] manager: (tap056c6769-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 kernel: tap056c6769-b0: entered promiscuous mode
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.903 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:25Z|00510|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 04:55:25 np0005481065 nova_compute[260935]: 2025-10-11 08:55:25.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.944 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/056c6769-bc97-4ae9-9759-4cc2d984a31d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/056c6769-bc97-4ae9-9759-4cc2d984a31d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb532465-7e41-4970-b1ef-46d2e34b7293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.952 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/056c6769-bc97-4ae9-9759-4cc2d984a31d.pid.haproxy
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:55:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.952 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'env', 'PROCESS_TAG=haproxy-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/056c6769-bc97-4ae9-9759-4cc2d984a31d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:55:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 88 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Oct 11 04:55:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143679200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.286 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.294 2 DEBUG nova.compute.provider_tree [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:26 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:26Z|00511|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.313 2 DEBUG nova.scheduler.client.report [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.341 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.342 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.405 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.406 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.434 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.451 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:55:26 np0005481065 podman[322656]: 2025-10-11 08:55:26.466164059 +0000 UTC m=+0.057840201 container create ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:26 np0005481065 systemd[1]: Started libpod-conmon-ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d.scope.
Oct 11 04:55:26 np0005481065 podman[322656]: 2025-10-11 08:55:26.438575712 +0000 UTC m=+0.030251874 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:55:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.553 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.557 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.558 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Creating image(s)
Oct 11 04:55:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f5d931826b904a40c4e8f3faa087cfd21b33725873fe141d1cea9369cc5ab3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:26 np0005481065 podman[322656]: 2025-10-11 08:55:26.586335544 +0000 UTC m=+0.178011686 container init ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 04:55:26 np0005481065 podman[322656]: 2025-10-11 08:55:26.592953393 +0000 UTC m=+0.184629535 container start ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.609 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:26 np0005481065 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [NOTICE]   (322691) : New worker (322696) forked
Oct 11 04:55:26 np0005481065 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [NOTICE]   (322691) : Loading success.
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.648 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.677 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.681 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.717 2 DEBUG nova.policy [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a04e2908f5a54c8f98bee8d0faf3e658', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.719 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172926.7117898, 98d8ebd6-0917-49cf-8efc-a245486424bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.720 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Started (Lifecycle Event)
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.741 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.747 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172926.7126245, 98d8ebd6-0917-49cf-8efc-a245486424bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.747 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Paused (Lifecycle Event)
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.751 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.752 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.752 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.753 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.781 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.788 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.829 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:26 np0005481065 nova_compute[260935]: 2025-10-11 08:55:26.861 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.126 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.214 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] resizing rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.266 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.267 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.284 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.343 2 DEBUG nova.objects.instance [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'migration_context' on Instance uuid 3915cf40-bdd7-4fe8-8311-834ff26aaf9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.361 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.362 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Ensure instance console log exists: /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.362 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.363 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.363 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.366 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.366 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.377 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.377 2 INFO nova.compute.claims [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.510 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.583 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Successfully created port: 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:55:27 np0005481065 nova_compute[260935]: 2025-10-11 08:55:27.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 11 04:55:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649119031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.067 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.074 2 DEBUG nova.compute.provider_tree [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.096 2 DEBUG nova.scheduler.client.report [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.122 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.123 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.175 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.175 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.195 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.210 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.305 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.306 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.307 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Creating image(s)
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.331 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.359 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.384 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.389 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.506 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.508 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.509 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.509 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.532 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.537 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.866 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:28 np0005481065 nova_compute[260935]: 2025-10-11 08:55:28.953 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.087 2 DEBUG nova.objects.instance [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.106 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.107 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Ensure instance console log exists: /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.108 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.109 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.109 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.249 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Successfully updated port: 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.267 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.268 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.268 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.302 2 DEBUG nova.policy [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.426 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.639 2 DEBUG nova.compute.manager [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.640 2 DEBUG nova.compute.manager [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing instance network info cache due to event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.640 2 DEBUG oslo_concurrency.lockutils [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:29 np0005481065 podman[323041]: 2025-10-11 08:55:29.813539197 +0000 UTC m=+0.097365617 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 04:55:29 np0005481065 nova_compute[260935]: 2025-10-11 08:55:29.861 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Successfully created port: d295555f-76d3-480c-9c34-1ef65200656c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:55:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.382 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.563 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.564 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance network_info: |[{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.566 2 DEBUG oslo_concurrency.lockutils [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.567 2 DEBUG nova.network.neutron [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.573 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start _get_guest_xml network_info=[{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.582 2 WARNING nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.588 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.589 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.599 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.600 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.601 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.602 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.603 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.604 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.605 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.605 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.606 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.606 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.607 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.608 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.608 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.609 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:55:30 np0005481065 nova_compute[260935]: 2025-10-11 08:55:30.615 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.073 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Successfully updated port: d295555f-76d3-480c-9c34-1ef65200656c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.097 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.097 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.098 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:55:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623249392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.199 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.224 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.230 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.282 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:55:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514337895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.714 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.717 2 DEBUG nova.virt.libvirt.vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-604807773',display_name='tempest-SecurityGroupsTestJSON-server-604807773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-604807773',id=59,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-zmgd5yiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:26Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=3915cf40-bdd7-4fe8-8311-834ff26aaf9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.717 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.718 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.720 2 DEBUG nova.objects.instance [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'pci_devices' on Instance uuid 3915cf40-bdd7-4fe8-8311-834ff26aaf9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.747 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <uuid>3915cf40-bdd7-4fe8-8311-834ff26aaf9c</uuid>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <name>instance-0000003b</name>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:name>tempest-SecurityGroupsTestJSON-server-604807773</nova:name>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:30</nova:creationTime>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:user uuid="a04e2908f5a54c8f98bee8d0faf3e658">tempest-SecurityGroupsTestJSON-2086563292-project-member</nova:user>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:project uuid="3a6a3cc2a54f4a9bafcdc1304f07944b">tempest-SecurityGroupsTestJSON-2086563292</nova:project>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <nova:port uuid="33b4a30b-13cb-4c3b-a6fb-df71a1ce760e">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <entry name="serial">3915cf40-bdd7-4fe8-8311-834ff26aaf9c</entry>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <entry name="uuid">3915cf40-bdd7-4fe8-8311-834ff26aaf9c</entry>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ef:c3:49"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <target dev="tap33b4a30b-13"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/console.log" append="off"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:55:31 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:55:31 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:55:31 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:55:31 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.749 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Preparing to wait for external event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.750 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.750 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.751 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.752 2 DEBUG nova.virt.libvirt.vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-604807773',display_name='tempest-SecurityGroupsTestJSON-server-604807773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-604807773',id=59,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-zmgd5yiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroup
sTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:26Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=3915cf40-bdd7-4fe8-8311-834ff26aaf9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.753 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.754 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.755 2 DEBUG os_vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.763 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.764 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.764 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.765 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.765 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Processing event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.766 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.766 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.766 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.767 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.767 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.768 2 WARNING nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received unexpected event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with vm_state building and task_state spawning.
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.768 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-changed-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.769 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Refreshing instance network info cache due to event network-changed-d295555f-76d3-480c-9c34-1ef65200656c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.769 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.771 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33b4a30b-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33b4a30b-13, col_values=(('external_ids', {'iface-id': '33b4a30b-13cb-4c3b-a6fb-df71a1ce760e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:c3:49', 'vm-uuid': '3915cf40-bdd7-4fe8-8311-834ff26aaf9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:55:31 np0005481065 NetworkManager[44960]: <info>  [1760172931.7799] manager: (tap33b4a30b-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.782 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172931.7819803, 98d8ebd6-0917-49cf-8efc-a245486424bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.783 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Resumed (Lifecycle Event)
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.792 2 INFO os_vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13')
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.795 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.801 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance spawned successfully.
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.802 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.806 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.812 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.822 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.822 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.822 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.823 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.823 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.823 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.829 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.904 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.905 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.905 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No VIF found with MAC fa:16:3e:ef:c3:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.905 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Using config drive#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.927 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.934 2 INFO nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 11.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.934 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:31 np0005481065 nova_compute[260935]: 2025-10-11 08:55:31.993 2 INFO nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 13.01 seconds to build instance.#033[00m
Oct 11 04:55:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.010 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.297 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Creating config drive at /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.306 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyqu0pg4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.484 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyqu0pg4" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.530 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.536 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.707 2 DEBUG nova.network.neutron [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updated VIF entry in instance network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.709 2 DEBUG nova.network.neutron [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.731 2 DEBUG oslo_concurrency.lockutils [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.741 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.742 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Deleting local config drive /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config because it was imported into RBD.#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.761 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updating instance_info_cache with network_info: [{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.779 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.779 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance network_info: |[{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.780 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.780 2 DEBUG nova.network.neutron [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Refreshing network info cache for port d295555f-76d3-480c-9c34-1ef65200656c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.788 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start _get_guest_xml network_info=[{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.806 2 WARNING nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.812 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.813 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.819 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.819 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.819 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.820 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.820 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.822 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.822 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.822 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.823 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.824 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.828 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:32 np0005481065 kernel: tap33b4a30b-13: entered promiscuous mode
Oct 11 04:55:32 np0005481065 NetworkManager[44960]: <info>  [1760172932.8339] manager: (tap33b4a30b-13): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Oct 11 04:55:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:32Z|00512|binding|INFO|Claiming lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e for this chassis.
Oct 11 04:55:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:32Z|00513|binding|INFO|33b4a30b-13cb-4c3b-a6fb-df71a1ce760e: Claiming fa:16:3e:ef:c3:49 10.100.0.13
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.853 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c3:49 10.100.0.13'], port_security=['fa:16:3e:ef:c3:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3915cf40-bdd7-4fe8-8311-834ff26aaf9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.856 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b bound to our chassis#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.859 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8f7bdf-7f83-49d8-a612-8f0eefa71056]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.879 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap338aeaf8-41 in ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.883 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap338aeaf8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.883 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18f1ccaa-d53f-432b-bbe9-14384e0dca84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.887 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e63dfaf-f1d9-48aa-a8b1-91ae98a03ff3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:32 np0005481065 systemd-udevd[323199]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:32 np0005481065 systemd-machined[215705]: New machine qemu-66-instance-0000003b.
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.907 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5586d1b1-f73c-4338-a61e-4ea73bbdf020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:32 np0005481065 NetworkManager[44960]: <info>  [1760172932.9137] device (tap33b4a30b-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:55:32 np0005481065 NetworkManager[44960]: <info>  [1760172932.9152] device (tap33b4a30b-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:55:32 np0005481065 systemd[1]: Started Virtual Machine qemu-66-instance-0000003b.
Oct 11 04:55:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:32Z|00514|binding|INFO|Setting lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e ovn-installed in OVS
Oct 11 04:55:32 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:32Z|00515|binding|INFO|Setting lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e up in Southbound
Oct 11 04:55:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eaeb7e-4262-4fc9-981f-d7fbcfbb5956]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:32 np0005481065 nova_compute[260935]: 2025-10-11 08:55:32.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[85b54dce-8e06-4099-80f1-b7c3103c40e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.008 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b9968c-a5a1-4862-9b40-83676579f5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 NetworkManager[44960]: <info>  [1760172933.0090] manager: (tap338aeaf8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Oct 11 04:55:33 np0005481065 systemd-udevd[323204]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.049 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf4c29d-a6c5-4855-b81e-b95c1f6a9801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.053 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de15e042-aeba-409c-9b2c-462a72539077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 NetworkManager[44960]: <info>  [1760172933.0840] device (tap338aeaf8-40): carrier: link connected
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.092 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[37362a33-6f51-4319-8836-fd2fe4ddc40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62fef6a6-09a9-4fa8-8f30-5021d64445f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323251, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.134 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a875f85a-47ed-411c-a319-07915daf1f78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:2a1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477778, 'tstamp': 477778}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323252, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.157 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55256811-08bd-45f7-a1ba-1ec6a8a06aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323253, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.193 2 DEBUG nova.compute.manager [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.194 2 DEBUG oslo_concurrency.lockutils [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.194 2 DEBUG oslo_concurrency.lockutils [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.195 2 DEBUG oslo_concurrency.lockutils [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.196 2 DEBUG nova.compute.manager [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Processing event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.196 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[46d2dbb5-b16b-450b-84da-8e3fddc4a188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca2b8e9-e0c2-4333-a2d9-1e4dcfb315dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.284 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.285 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.285 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 NetworkManager[44960]: <info>  [1760172933.2882] manager: (tap338aeaf8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Oct 11 04:55:33 np0005481065 kernel: tap338aeaf8-40: entered promiscuous mode
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.293 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:33Z|00516|binding|INFO|Releasing lport ebd712a5-5601-47dd-8cfc-89d2ce6b1035 from this chassis (sb_readonly=0)
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.297 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/338aeaf8-43d5-4292-a8fa-8952dd3c508b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/338aeaf8-43d5-4292-a8fa-8952dd3c508b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.300 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5adaf018-8e37-4726-bd99-13f48285b856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.301 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/338aeaf8-43d5-4292-a8fa-8952dd3c508b.pid.haproxy
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:55:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.301 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'env', 'PROCESS_TAG=haproxy-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/338aeaf8-43d5-4292-a8fa-8952dd3c508b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261304288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.346 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.378 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.382 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:33 np0005481065 podman[323322]: 2025-10-11 08:55:33.671426662 +0000 UTC m=+0.059414045 container create 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:33 np0005481065 podman[323322]: 2025-10-11 08:55:33.635669943 +0000 UTC m=+0.023657366 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:55:33 np0005481065 systemd[1]: Started libpod-conmon-32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565.scope.
Oct 11 04:55:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68996452ee67c233085e6e9ba704114e7118cced28fee3c4a5f9d0d8650bf5d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:33 np0005481065 podman[323322]: 2025-10-11 08:55:33.79694376 +0000 UTC m=+0.184931143 container init 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 04:55:33 np0005481065 podman[323322]: 2025-10-11 08:55:33.80709653 +0000 UTC m=+0.195083883 container start 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 04:55:33 np0005481065 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [NOTICE]   (323342) : New worker (323351) forked
Oct 11 04:55:33 np0005481065 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [NOTICE]   (323342) : Loading success.
Oct 11 04:55:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218247990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 NetworkManager[44960]: <info>  [1760172933.8667] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct 11 04:55:33 np0005481065 NetworkManager[44960]: <info>  [1760172933.8685] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.872 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.875 2 DEBUG nova.virt.libvirt.vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1662661810',display_name='tempest-DeleteServersTestJSON-server-1662661810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1662661810',id=60,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-hslosofy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:28Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.876 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.878 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.880 2 DEBUG nova.objects.instance [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.896 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <uuid>f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8</uuid>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <name>instance-0000003c</name>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:name>tempest-DeleteServersTestJSON-server-1662661810</nova:name>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:32</nova:creationTime>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <nova:port uuid="d295555f-76d3-480c-9c34-1ef65200656c">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <entry name="serial">f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8</entry>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <entry name="uuid">f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8</entry>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:67:47:3c"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <target dev="tapd295555f-76"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/console.log" append="off"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:55:33 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:55:33 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:55:33 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:55:33 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.906 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Preparing to wait for external event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.907 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.907 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.908 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.909 2 DEBUG nova.virt.libvirt.vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1662661810',display_name='tempest-DeleteServersTestJSON-server-1662661810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1662661810',id=60,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-hslosofy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:28Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.910 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.911 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.912 2 DEBUG os_vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd295555f-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd295555f-76, col_values=(('external_ids', {'iface-id': 'd295555f-76d3-480c-9c34-1ef65200656c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:47:3c', 'vm-uuid': 'f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:33 np0005481065 NetworkManager[44960]: <info>  [1760172933.9244] manager: (tapd295555f-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.982 2 INFO os_vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76')#033[00m
Oct 11 04:55:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:33Z|00517|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 04:55:33 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:33Z|00518|binding|INFO|Releasing lport ebd712a5-5601-47dd-8cfc-89d2ce6b1035 from this chassis (sb_readonly=0)
Oct 11 04:55:33 np0005481065 nova_compute[260935]: 2025-10-11 08:55:33.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 180 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 5.3 MiB/s wr, 139 op/s
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.053 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.053 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.054 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:67:47:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.054 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Using config drive#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.081 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.215 2 DEBUG nova.compute.manager [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.216 2 DEBUG nova.compute.manager [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing instance network info cache due to event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.217 2 DEBUG oslo_concurrency.lockutils [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.217 2 DEBUG oslo_concurrency.lockutils [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.218 2 DEBUG nova.network.neutron [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.275 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172919.2630043, 09e8444f-162e-4773-a181-a5b70c7af8dd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.275 2 INFO nova.compute.manager [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Stopped (Lifecycle Event)
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.310 2 DEBUG nova.compute.manager [None req-e656393c-f61a-4b1f-b595-5ec9d62ffef7 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.367 2 DEBUG nova.network.neutron [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updated VIF entry in instance network info cache for port d295555f-76d3-480c-9c34-1ef65200656c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.368 2 DEBUG nova.network.neutron [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updating instance_info_cache with network_info: [{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.384 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.480 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Creating config drive at /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.487 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1hddp2x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.560 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.563 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172934.559016, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.563 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Started (Lifecycle Event)
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.581 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.586 2 INFO nova.virt.libvirt.driver [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance spawned successfully.
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.587 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.601 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.611 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.617 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.617 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.618 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.619 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.619 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.620 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.630 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.631 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172934.5608969, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.631 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Paused (Lifecycle Event)
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.647 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1hddp2x" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.684 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.688 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.749 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.753 2 INFO nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Took 8.20 seconds to spawn the instance on the hypervisor.
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.754 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172934.5754414, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.762 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Resumed (Lifecycle Event)
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.792 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.815 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.825 2 INFO nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Took 9.23 seconds to build instance.
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.855 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.897 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.898 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deleting local config drive /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config because it was imported into RBD.
Oct 11 04:55:34 np0005481065 kernel: tapd295555f-76: entered promiscuous mode
Oct 11 04:55:34 np0005481065 NetworkManager[44960]: <info>  [1760172934.9786] manager: (tapd295555f-76): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct 11 04:55:34 np0005481065 systemd-udevd[323241]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:34 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:34Z|00519|binding|INFO|Claiming lport d295555f-76d3-480c-9c34-1ef65200656c for this chassis.
Oct 11 04:55:34 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:34Z|00520|binding|INFO|d295555f-76d3-480c-9c34-1ef65200656c: Claiming fa:16:3e:67:47:3c 10.100.0.7
Oct 11 04:55:34 np0005481065 nova_compute[260935]: 2025-10-11 08:55:34.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:34.995 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:47:3c 10.100.0.7'], port_security=['fa:16:3e:67:47:3c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d295555f-76d3-480c-9c34-1ef65200656c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:55:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:34.996 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d295555f-76d3-480c-9c34-1ef65200656c in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis
Oct 11 04:55:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:34.998 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 04:55:35 np0005481065 NetworkManager[44960]: <info>  [1760172935.0102] device (tapd295555f-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:55:35 np0005481065 NetworkManager[44960]: <info>  [1760172935.0157] device (tapd295555f-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:55:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:35Z|00521|binding|INFO|Setting lport d295555f-76d3-480c-9c34-1ef65200656c ovn-installed in OVS
Oct 11 04:55:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:35Z|00522|binding|INFO|Setting lport d295555f-76d3-480c-9c34-1ef65200656c up in Southbound
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[64bd29a3-5fc3-4979-8129-c7e91b1985b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.032 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.038 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.038 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[374af8d0-2f6a-4923-ace5-ca74ae5f4374]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f85d7bf-a0e8-4db0-bc75-23b80e605378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 systemd-machined[215705]: New machine qemu-67-instance-0000003c.
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.060 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd52b05-ad40-41df-9af5-1de3f3503d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 systemd[1]: Started Virtual Machine qemu-67-instance-0000003c.
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c371493a-d3d1-48a3-a34f-a13ef7e91022]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.132 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3b13385f-fb34-4753-ad7d-75da4780c65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 NetworkManager[44960]: <info>  [1760172935.1483] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.149 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4af24d-94e8-4efb-8d3e-85a8c6188c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 podman[323468]: 2025-10-11 08:55:35.159152783 +0000 UTC m=+0.121426863 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid)
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.201 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6394a352-e463-435c-99d3-46acae9cb589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.205 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4e2a5a-940b-4077-bf8b-2ec982b1bcbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 NetworkManager[44960]: <info>  [1760172935.2456] device (tap2cb96d57-a0): carrier: link connected
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.261 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6727ef-6066-46f2-a6e9-9cebf9d07743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed3a25e-f540-441c-ae7b-d0191d46439c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477994, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323506, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.306 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[13072288-94a0-4e33-b011-ffcb9d80d33b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477994, 'tstamp': 477994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323507, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.331 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d38b6494-d532-441d-aec7-beb05cbd61c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477994, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323508, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.377 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1143afe9-6ac9-45ff-8e25-3b9c10882e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.474 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00b5f734-6f45-4dae-b4ce-8a604ca5b307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.476 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.476 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.476 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:35 np0005481065 NetworkManager[44960]: <info>  [1760172935.4791] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct 11 04:55:35 np0005481065 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.482 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:35 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:35Z|00523|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.505 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.505 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ecec851e-eb86-4d48-a5e1-15c160086f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.506 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:55:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.507 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.574 2 DEBUG nova.compute.manager [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG oslo_concurrency.lockutils [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG oslo_concurrency.lockutils [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG oslo_concurrency.lockutils [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG nova.compute.manager [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] No waiting events found dispatching network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.575 2 WARNING nova.compute.manager [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received unexpected event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e for instance with vm_state active and task_state None.#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.830 2 DEBUG nova.network.neutron [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated VIF entry in instance network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.831 2 DEBUG nova.network.neutron [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:35 np0005481065 nova_compute[260935]: 2025-10-11 08:55:35.853 2 DEBUG oslo_concurrency.lockutils [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:35 np0005481065 podman[323580]: 2025-10-11 08:55:35.985693492 +0000 UTC m=+0.075517014 container create 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 04:55:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 180 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Oct 11 04:55:36 np0005481065 systemd[1]: Started libpod-conmon-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f.scope.
Oct 11 04:55:36 np0005481065 podman[323580]: 2025-10-11 08:55:35.945930458 +0000 UTC m=+0.035754060 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:55:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de2860817c6756baaf80b94981532227b2da2929edff94d5b737c93d30e6677/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:36 np0005481065 podman[323580]: 2025-10-11 08:55:36.09891457 +0000 UTC m=+0.188738102 container init 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 04:55:36 np0005481065 podman[323580]: 2025-10-11 08:55:36.105147388 +0000 UTC m=+0.194970910 container start 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:55:36 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : New worker (323601) forked
Oct 11 04:55:36 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : Loading success.
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.354 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172936.353903, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.355 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Started (Lifecycle Event)#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.393 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.399 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172936.3580801, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.400 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.435 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.440 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.494 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.508 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.509 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.509 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.510 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.510 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Processing event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.510 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.511 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.511 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.511 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.512 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] No waiting events found dispatching network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.512 2 WARNING nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received unexpected event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c for instance with vm_state building and task_state spawning.#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.513 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.518 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172936.5179389, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.518 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.523 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.528 2 INFO nova.virt.libvirt.driver [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance spawned successfully.#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.529 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.543 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.553 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.561 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.562 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.563 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.563 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.564 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.565 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.577 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.669 2 INFO nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 8.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.670 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.744 2 INFO nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 9.41 seconds to build instance.#033[00m
Oct 11 04:55:36 np0005481065 nova_compute[260935]: 2025-10-11 08:55:36.761 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 04:55:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681708282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 04:55:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 04:55:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681708282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.672 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.672 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.672 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.673 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.673 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.674 2 INFO nova.compute.manager [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Terminating instance#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.675 2 DEBUG nova.compute.manager [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:55:37 np0005481065 kernel: tapd295555f-76 (unregistering): left promiscuous mode
Oct 11 04:55:37 np0005481065 NetworkManager[44960]: <info>  [1760172937.7253] device (tapd295555f-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:37Z|00524|binding|INFO|Releasing lport d295555f-76d3-480c-9c34-1ef65200656c from this chassis (sb_readonly=0)
Oct 11 04:55:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:37Z|00525|binding|INFO|Setting lport d295555f-76d3-480c-9c34-1ef65200656c down in Southbound
Oct 11 04:55:37 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:37Z|00526|binding|INFO|Removing iface tapd295555f-76 ovn-installed in OVS
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.755 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:47:3c 10.100.0.7'], port_security=['fa:16:3e:67:47:3c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d295555f-76d3-480c-9c34-1ef65200656c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.757 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d295555f-76d3-480c-9c34-1ef65200656c in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis#033[00m
Oct 11 04:55:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.759 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:55:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.760 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3a7691-085c-4dd5-b4cb-7d01bad3d5ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.761 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.773 2 DEBUG nova.compute.manager [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.773 2 DEBUG nova.compute.manager [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing instance network info cache due to event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.773 2 DEBUG oslo_concurrency.lockutils [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.774 2 DEBUG oslo_concurrency.lockutils [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.774 2 DEBUG nova.network.neutron [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 11 04:55:37 np0005481065 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Consumed 2.271s CPU time.
Oct 11 04:55:37 np0005481065 systemd-machined[215705]: Machine qemu-67-instance-0000003c terminated.
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.914 2 INFO nova.virt.libvirt.driver [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance destroyed successfully.#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.914 2 DEBUG nova.objects.instance [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:37 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : haproxy version is 2.8.14-c23fe91
Oct 11 04:55:37 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : path to executable is /usr/sbin/haproxy
Oct 11 04:55:37 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [WARNING]  (323599) : Exiting Master process...
Oct 11 04:55:37 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [WARNING]  (323599) : Exiting Master process...
Oct 11 04:55:37 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [ALERT]    (323599) : Current worker (323601) exited with code 143 (Terminated)
Oct 11 04:55:37 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [WARNING]  (323599) : All workers exited. Exiting... (0)
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.932 2 DEBUG nova.virt.libvirt.vif [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1662661810',display_name='tempest-DeleteServersTestJSON-server-1662661810',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1662661810',id=60,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-hslosofy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:36Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.932 2 DEBUG nova.network.os_vif_util [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:37 np0005481065 systemd[1]: libpod-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f.scope: Deactivated successfully.
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.935 2 DEBUG nova.network.os_vif_util [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.936 2 DEBUG os_vif [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd295555f-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:37 np0005481065 podman[323633]: 2025-10-11 08:55:37.94395587 +0000 UTC m=+0.066706413 container died 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.947 2 INFO os_vif [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76')#033[00m
Oct 11 04:55:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f-userdata-shm.mount: Deactivated successfully.
Oct 11 04:55:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7de2860817c6756baaf80b94981532227b2da2929edff94d5b737c93d30e6677-merged.mount: Deactivated successfully.
Oct 11 04:55:37 np0005481065 nova_compute[260935]: 2025-10-11 08:55:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:38 np0005481065 podman[323633]: 2025-10-11 08:55:38.002647184 +0000 UTC m=+0.125397747 container cleanup 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 262 op/s
Oct 11 04:55:38 np0005481065 systemd[1]: libpod-conmon-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f.scope: Deactivated successfully.
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:38 np0005481065 podman[323686]: 2025-10-11 08:55:38.099349121 +0000 UTC m=+0.065771546 container remove 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.110 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0922263-8503-4452-8968-c06fa72632a1]: (4, ('Sat Oct 11 08:55:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f)\n798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f\nSat Oct 11 08:55:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f)\n798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24c684eb-a63f-4503-9871-25f5ac6234c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:38 np0005481065 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.158 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24ffd6d5-fdbc-4468-b506-8a28cd5d8c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3eba3290-6726-4a6c-988c-7e88c5581dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.184 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33b77bc3-7753-46c0-a3c6-05e50b6963b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.208 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[845c456e-da1c-4467-a557-0a1d6b510692]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477982, 'reachable_time': 38789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323704, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.211 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:55:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.211 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d3527b-a87f-4598-9a06-9b5d55a2e9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.380 2 INFO nova.virt.libvirt.driver [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deleting instance files /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_del#033[00m
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.382 2 INFO nova.virt.libvirt.driver [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deletion of /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_del complete#033[00m
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.443 2 INFO nova.compute.manager [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.444 2 DEBUG oslo.service.loopingcall [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.448 2 DEBUG nova.compute.manager [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:55:38 np0005481065 nova_compute[260935]: 2025-10-11 08:55:38.449 2 DEBUG nova.network.neutron [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.216 2 DEBUG nova.network.neutron [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updated VIF entry in instance network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.217 2 DEBUG nova.network.neutron [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.220 2 DEBUG nova.network.neutron [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.253 2 INFO nova.compute.manager [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 0.80 seconds to deallocate network for instance.#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.269 2 DEBUG oslo_concurrency.lockutils [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.288 2 DEBUG nova.compute.manager [req-7b709226-509a-4a81-aaea-19173d5f23c3 req-8ed6eb1c-a1c3-4a17-888c-edbb02a26534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-vif-deleted-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.329 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.329 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.434 2 DEBUG oslo_concurrency.processutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409747422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.944 2 DEBUG oslo_concurrency.processutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.954 2 DEBUG nova.compute.provider_tree [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.970 2 DEBUG nova.scheduler.client.report [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:39 np0005481065 nova_compute[260935]: 2025-10-11 08:55:39.993 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.019 2 INFO nova.scheduler.client.report [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8#033[00m
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.080 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.203 2 DEBUG nova.compute.manager [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.204 2 DEBUG nova.compute.manager [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing instance network info cache due to event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.204 2 DEBUG oslo_concurrency.lockutils [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.205 2 DEBUG oslo_concurrency.lockutils [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:40 np0005481065 nova_compute[260935]: 2025-10-11 08:55:40.205 2 DEBUG nova.network.neutron [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:40 np0005481065 podman[323728]: 2025-10-11 08:55:40.81300939 +0000 UTC m=+0.106948220 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 04:55:40 np0005481065 podman[323729]: 2025-10-11 08:55:40.864300562 +0000 UTC m=+0.154224128 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 04:55:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.385 2 DEBUG nova.network.neutron [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updated VIF entry in instance network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.386 2 DEBUG nova.network.neutron [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.409 2 DEBUG oslo_concurrency.lockutils [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.920 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.920 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.941 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:55:42 np0005481065 nova_compute[260935]: 2025-10-11 08:55:42.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.013 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.014 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.023 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.023 2 INFO nova.compute.claims [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.170 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/217999181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.677 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.685 2 DEBUG nova.compute.provider_tree [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.708 2 DEBUG nova.scheduler.client.report [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.733 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.734 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.788 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.788 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.812 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.834 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.918 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.920 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.921 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Creating image(s)
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.946 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.972 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:43 np0005481065 nova_compute[260935]: 2025-10-11 08:55:43.997 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.001 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 135 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.039 2 DEBUG nova.policy [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.085 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.086 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.087 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.087 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.116 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.121 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b297454f-91af-4716-b4f7-6af9f0d7e62d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.430 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b297454f-91af-4716-b4f7-6af9f0d7e62d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.498 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.603 2 DEBUG nova.objects.instance [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid b297454f-91af-4716-b4f7-6af9f0d7e62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.619 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.619 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Ensure instance console log exists: /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.620 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.620 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.620 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:44 np0005481065 nova_compute[260935]: 2025-10-11 08:55:44.639 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Successfully created port: 7e1e0a2e-111a-44a6-8191-45964dbea604 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:55:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:45Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 04:55:45 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:45Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.415 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Successfully updated port: 7e1e0a2e-111a-44a6-8191-45964dbea604 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.433 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.434 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.434 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.583 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.607 2 DEBUG nova.compute.manager [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-changed-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.608 2 DEBUG nova.compute.manager [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Refreshing instance network info cache due to event network-changed-7e1e0a2e-111a-44a6-8191-45964dbea604. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:55:45 np0005481065 nova_compute[260935]: 2025-10-11 08:55:45.609 2 DEBUG oslo_concurrency.lockutils [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 135 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 32 KiB/s wr, 212 op/s
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.451 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.452 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.479 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.563 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updating instance_info_cache with network_info: [{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.567 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.568 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.578 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.578 2 INFO nova.compute.claims [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Claim successful on node compute-0.ctlplane.example.com
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.604 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.604 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance network_info: |[{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.605 2 DEBUG oslo_concurrency.lockutils [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.605 2 DEBUG nova.network.neutron [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Refreshing network info cache for port 7e1e0a2e-111a-44a6-8191-45964dbea604 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.608 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start _get_guest_xml network_info=[{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.613 2 WARNING nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.618 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.619 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.622 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.623 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.623 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.623 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.624 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.624 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.624 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.625 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.625 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.625 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.630 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:46 np0005481065 nova_compute[260935]: 2025-10-11 08:55:46.814 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:55:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913167330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.173 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.208 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.213 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90154206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.364 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.379 2 DEBUG nova.compute.provider_tree [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.398 2 DEBUG nova.scheduler.client.report [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.424 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.425 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.475 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.476 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.497 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.518 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.629 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.631 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.631 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Creating image(s)#033[00m
Oct 11 04:55:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754153588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.664 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.698 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.726 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.731 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.777 2 DEBUG nova.policy [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.784 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.787 2 DEBUG nova.virt.libvirt.vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-268621522',display_name='tempest-₡-268621522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--268621522',id=61,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-w76lxkw4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:43Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=b297454f-91af-4716-b4f7-6af9f0d7e62d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.787 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.788 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.790 2 DEBUG nova.objects.instance [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid b297454f-91af-4716-b4f7-6af9f0d7e62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.808 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <uuid>b297454f-91af-4716-b4f7-6af9f0d7e62d</uuid>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <name>instance-0000003d</name>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:name>tempest-₡-268621522</nova:name>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:46</nova:creationTime>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <nova:port uuid="7e1e0a2e-111a-44a6-8191-45964dbea604">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <entry name="serial">b297454f-91af-4716-b4f7-6af9f0d7e62d</entry>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <entry name="uuid">b297454f-91af-4716-b4f7-6af9f0d7e62d</entry>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b297454f-91af-4716-b4f7-6af9f0d7e62d_disk">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:45:a4:99"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <target dev="tap7e1e0a2e-11"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/console.log" append="off"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:55:47 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:55:47 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:55:47 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:55:47 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.809 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Preparing to wait for external event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.810 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.810 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.810 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.811 2 DEBUG nova.virt.libvirt.vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-268621522',display_name='tempest-₡-268621522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--268621522',id=61,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-w76lxkw4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:43Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=b297454f-91af-4716-b4f7-6af9f0d7e62d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.812 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.812 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.813 2 DEBUG os_vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e1e0a2e-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e1e0a2e-11, col_values=(('external_ids', {'iface-id': '7e1e0a2e-111a-44a6-8191-45964dbea604', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:a4:99', 'vm-uuid': 'b297454f-91af-4716-b4f7-6af9f0d7e62d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:47 np0005481065 NetworkManager[44960]: <info>  [1760172947.8227] manager: (tap7e1e0a2e-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.830 2 INFO os_vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11')#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.832 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.832 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.833 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.833 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.857 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.861 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 bb58fb30-73c8-457a-9293-2d07d0015a46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.975 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.976 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.976 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:45:a4:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:55:47 np0005481065 nova_compute[260935]: 2025-10-11 08:55:47.977 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Using config drive#033[00m
Oct 11 04:55:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:47Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:c3:49 10.100.0.13
Oct 11 04:55:47 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:47Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:c3:49 10.100.0.13
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.005 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.1 MiB/s wr, 363 op/s
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.186 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 bb58fb30-73c8-457a-9293-2d07d0015a46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.283 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.345 2 DEBUG nova.network.neutron [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updated VIF entry in instance network info cache for port 7e1e0a2e-111a-44a6-8191-45964dbea604. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.346 2 DEBUG nova.network.neutron [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updating instance_info_cache with network_info: [{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.401 2 DEBUG oslo_concurrency.lockutils [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.412 2 DEBUG nova.objects.instance [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.426 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.426 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Ensure instance console log exists: /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.427 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.427 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.427 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.543 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Creating config drive at /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.549 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_h_qpuv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.660 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Successfully created port: fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.710 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_h_qpuv" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.753 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.759 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.957 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:48 np0005481065 nova_compute[260935]: 2025-10-11 08:55:48.959 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Deleting local config drive /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config because it was imported into RBD.#033[00m
Oct 11 04:55:49 np0005481065 kernel: tap7e1e0a2e-11: entered promiscuous mode
Oct 11 04:55:49 np0005481065 NetworkManager[44960]: <info>  [1760172949.0258] manager: (tap7e1e0a2e-11): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct 11 04:55:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:49Z|00527|binding|INFO|Claiming lport 7e1e0a2e-111a-44a6-8191-45964dbea604 for this chassis.
Oct 11 04:55:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:49Z|00528|binding|INFO|7e1e0a2e-111a-44a6-8191-45964dbea604: Claiming fa:16:3e:45:a4:99 10.100.0.5
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.035 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:a4:99 10.100.0.5'], port_security=['fa:16:3e:45:a4:99 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b297454f-91af-4716-b4f7-6af9f0d7e62d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7e1e0a2e-111a-44a6-8191-45964dbea604) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.039 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7e1e0a2e-111a-44a6-8191-45964dbea604 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.049 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.065 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5bf80e-a8d7-4a60-9d90-62b8392158f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.066 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape075bdab-71 in ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.068 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape075bdab-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.069 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df8e9ba5-518b-4907-96e8-485a26ca7b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:49Z|00529|binding|INFO|Setting lport 7e1e0a2e-111a-44a6-8191-45964dbea604 ovn-installed in OVS
Oct 11 04:55:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:49Z|00530|binding|INFO|Setting lport 7e1e0a2e-111a-44a6-8191-45964dbea604 up in Southbound
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.071 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fcc81e-e9cb-4743-a5b2-732b072a2f93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:49 np0005481065 systemd-machined[215705]: New machine qemu-68-instance-0000003d.
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:49 np0005481065 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.088 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc9bf16-8b20-4755-8bf5-a25af93ad88e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 systemd-udevd[324286]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6aa683-7044-4950-bfdc-fd2c1aca18ac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 NetworkManager[44960]: <info>  [1760172949.1229] device (tap7e1e0a2e-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:55:49 np0005481065 NetworkManager[44960]: <info>  [1760172949.1244] device (tap7e1e0a2e-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.144 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9006185d-90af-41ff-a1b6-64616b6099d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.152 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3fbaaf-74ae-4d26-9dbe-9b8830d377a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 NetworkManager[44960]: <info>  [1760172949.1536] manager: (tape075bdab-70): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Oct 11 04:55:49 np0005481065 systemd-udevd[324290]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.201 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3cdfd693-647b-4aec-bc62-911e1e4b2a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.204 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e4943a1d-270f-4cdc-8a37-7e451e9901f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 NetworkManager[44960]: <info>  [1760172949.2404] device (tape075bdab-70): carrier: link connected
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.250 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1897ac-7a1a-4006-a2fa-1f883304db27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.278 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75f9a9ff-c70e-4af3-9970-5d10175305b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324317, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.300 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[463c42de-23fc-42e1-ad23-c96bf188ae21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:b979'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479394, 'tstamp': 479394}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324318, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.332 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[068b4d21-c1cb-436f-b248-5a2169d7de7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324326, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.352 2 DEBUG nova.compute.manager [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.352 2 DEBUG oslo_concurrency.lockutils [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.353 2 DEBUG oslo_concurrency.lockutils [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.354 2 DEBUG oslo_concurrency.lockutils [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.354 2 DEBUG nova.compute.manager [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Processing event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.391 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee2e9c1-a4a1-462c-9995-cb55641966c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.433 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Successfully updated port: fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.455 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.456 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.456 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27fe92a1-1e5c-4529-94cd-bda65d4d9259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.508 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:49 np0005481065 kernel: tape075bdab-70: entered promiscuous mode
Oct 11 04:55:49 np0005481065 NetworkManager[44960]: <info>  [1760172949.5127] manager: (tape075bdab-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.515 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:49 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:49Z|00531|binding|INFO|Releasing lport b9cf681c-9f4c-4c56-987a-55fa7aa89e1a from this chassis (sb_readonly=0)
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.545 2 DEBUG nova.compute.manager [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-changed-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.547 2 DEBUG nova.compute.manager [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Refreshing instance network info cache due to event network-changed-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.548 2 DEBUG oslo_concurrency.lockutils [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.550 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e075bdab-78c4-414f-b270-c41d1c82f498.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e075bdab-78c4-414f-b270-c41d1c82f498.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.551 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb74b5a9-5604-4f8d-b835-b50ed28a95f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.552 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e075bdab-78c4-414f-b270-c41d1c82f498.pid.haproxy
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 04:55:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.553 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'env', 'PROCESS_TAG=haproxy-e075bdab-78c4-414f-b270-c41d1c82f498', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e075bdab-78c4-414f-b270-c41d1c82f498.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.574 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.600 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.631 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.662 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.663 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.670 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.671 2 INFO nova.compute.claims [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.864 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.980 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.982 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172949.9816473, b297454f-91af-4716-b4f7-6af9f0d7e62d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.984 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Started (Lifecycle Event)#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.990 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.996 2 INFO nova.virt.libvirt.driver [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance spawned successfully.#033[00m
Oct 11 04:55:49 np0005481065 nova_compute[260935]: 2025-10-11 08:55:49.997 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.008 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 6.0 MiB/s wr, 186 op/s
Oct 11 04:55:50 np0005481065 podman[324392]: 2025-10-11 08:55:50.015478115 +0000 UTC m=+0.076959216 container create 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.028 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.036 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.037 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.038 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.039 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.040 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.041 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:55:50 np0005481065 systemd[1]: Started libpod-conmon-7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b.scope.
Oct 11 04:55:50 np0005481065 podman[324392]: 2025-10-11 08:55:49.967070444 +0000 UTC m=+0.028551635 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.064 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172949.9818032, b297454f-91af-4716-b4f7-6af9f0d7e62d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.064 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:55:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.088 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf198163e725340e667f14571b9dc05439b8f7c35370125da99319b5667780ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.098 2 INFO nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Took 6.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.098 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.104 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172949.9852617, b297454f-91af-4716-b4f7-6af9f0d7e62d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:55:50 np0005481065 podman[324392]: 2025-10-11 08:55:50.112175882 +0000 UTC m=+0.173657033 container init 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:55:50 np0005481065 podman[324392]: 2025-10-11 08:55:50.119209323 +0000 UTC m=+0.180690434 container start 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.132 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.135 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:55:50 np0005481065 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [NOTICE]   (324430) : New worker (324432) forked
Oct 11 04:55:50 np0005481065 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [NOTICE]   (324430) : Loading success.
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.169 2 INFO nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Took 7.18 seconds to build instance.#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.188 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:55:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815335945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.378 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.385 2 DEBUG nova.compute.provider_tree [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.403 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updating instance_info_cache with network_info: [{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.423 2 DEBUG nova.scheduler.client.report [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.431 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.432 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance network_info: |[{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.433 2 DEBUG oslo_concurrency.lockutils [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.434 2 DEBUG nova.network.neutron [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Refreshing network info cache for port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.438 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start _get_guest_xml network_info=[{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.445 2 WARNING nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.452 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.454 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.458 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.459 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.473 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.475 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.475 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.476 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.477 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.478 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.478 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.479 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.479 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.480 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.481 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.481 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.482 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.482 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.488 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.544 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.546 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.570 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.595 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.728 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.730 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.731 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Creating image(s)#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.767 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.809 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.851 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.857 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.919 2 DEBUG nova.policy [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a04e2908f5a54c8f98bee8d0faf3e658', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:55:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688212283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.977 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.979 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.980 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:50 np0005481065 nova_compute[260935]: 2025-10-11 08:55:50.981 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.026 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.031 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d80189d8-28e4-440b-8aed-b43c62f59dd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.072 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.105 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.110 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.336 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d80189d8-28e4-440b-8aed-b43c62f59dd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.414 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] resizing rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.490 2 DEBUG nova.compute.manager [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.491 2 DEBUG oslo_concurrency.lockutils [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.491 2 DEBUG oslo_concurrency.lockutils [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.492 2 DEBUG oslo_concurrency.lockutils [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.492 2 DEBUG nova.compute.manager [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] No waiting events found dispatching network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.493 2 WARNING nova.compute.manager [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received unexpected event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 for instance with vm_state active and task_state None.#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.532 2 DEBUG nova.objects.instance [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'migration_context' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.546 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.547 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Ensure instance console log exists: /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.548 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.548 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.549 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540591875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.625 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.627 2 DEBUG nova.virt.libvirt.vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-254064893',display_name='tempest-DeleteServersTestJSON-server-254064893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-254064893',id=62,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-k16b35q4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:47Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=bb58fb30-73c8-457a-9293-2d07d0015a46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.628 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.629 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.630 2 DEBUG nova.objects.instance [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.655 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <uuid>bb58fb30-73c8-457a-9293-2d07d0015a46</uuid>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <name>instance-0000003e</name>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:name>tempest-DeleteServersTestJSON-server-254064893</nova:name>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:50</nova:creationTime>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <nova:port uuid="fca0187a-40e2-4312-bb5b-f3a6c59f3c9b">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <entry name="serial">bb58fb30-73c8-457a-9293-2d07d0015a46</entry>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <entry name="uuid">bb58fb30-73c8-457a-9293-2d07d0015a46</entry>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/bb58fb30-73c8-457a-9293-2d07d0015a46_disk">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:9f:3c:ad"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <target dev="tapfca0187a-40"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/console.log" append="off"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:55:51 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:55:51 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:55:51 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:55:51 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.663 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Preparing to wait for external event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.663 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.664 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.664 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.665 2 DEBUG nova.virt.libvirt.vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-254064893',display_name='tempest-DeleteServersTestJSON-server-254064893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-254064893',id=62,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-k16b35q4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:47Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=bb58fb30-73c8-457a-9293-2d07d0015a46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.666 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.667 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.667 2 DEBUG os_vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfca0187a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfca0187a-40, col_values=(('external_ids', {'iface-id': 'fca0187a-40e2-4312-bb5b-f3a6c59f3c9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:3c:ad', 'vm-uuid': 'bb58fb30-73c8-457a-9293-2d07d0015a46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:51 np0005481065 NetworkManager[44960]: <info>  [1760172951.7142] manager: (tapfca0187a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.723 2 INFO os_vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40')#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.780 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.781 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.781 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:9f:3c:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.782 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Using config drive#033[00m
Oct 11 04:55:51 np0005481065 nova_compute[260935]: 2025-10-11 08:55:51.815 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 6.0 MiB/s wr, 186 op/s
Oct 11 04:55:52 np0005481065 nova_compute[260935]: 2025-10-11 08:55:52.908 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172937.905254, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:52 np0005481065 nova_compute[260935]: 2025-10-11 08:55:52.908 2 INFO nova.compute.manager [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Stopped (Lifecycle Event)#033[00m
Oct 11 04:55:52 np0005481065 nova_compute[260935]: 2025-10-11 08:55:52.944 2 DEBUG nova.compute.manager [None req-23a8afcb-0934-4002-a9da-a0babaa5bbd1 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:52 np0005481065 nova_compute[260935]: 2025-10-11 08:55:52.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 339 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.6 MiB/s wr, 315 op/s
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.138 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Creating config drive at /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.148 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwrb3r71t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.209 2 DEBUG nova.network.neutron [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updated VIF entry in instance network info cache for port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.210 2 DEBUG nova.network.neutron [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updating instance_info_cache with network_info: [{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.231 2 DEBUG oslo_concurrency.lockutils [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.315 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwrb3r71t" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.342 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.347 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.641 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.642 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deleting local config drive /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config because it was imported into RBD.#033[00m
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.739 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Successfully created port: 7bc371fa-443a-4188-ace2-2837e3709136 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:55:54 np0005481065 kernel: tapfca0187a-40: entered promiscuous mode
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:54 np0005481065 NetworkManager[44960]: <info>  [1760172954.7487] manager: (tapfca0187a-40): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Oct 11 04:55:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:54Z|00532|binding|INFO|Claiming lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for this chassis.
Oct 11 04:55:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:54Z|00533|binding|INFO|fca0187a-40e2-4312-bb5b-f3a6c59f3c9b: Claiming fa:16:3e:9f:3c:ad 10.100.0.9
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.756 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:ad 10.100.0.9'], port_security=['fa:16:3e:9f:3c:ad 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bb58fb30-73c8-457a-9293-2d07d0015a46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.759 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.762 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.780 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[225eb295-30c4-4766-b127-9baa5c900d30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.781 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.784 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.784 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[89603c7a-9409-4b6e-98e4-b531384f9eb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6ff1f0-9a93-42b5-89d5-7d6a20714a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.798 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6be61996-8414-4c51-8dc8-0ee4806d32ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:54Z|00534|binding|INFO|Setting lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b ovn-installed in OVS
Oct 11 04:55:54 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:54Z|00535|binding|INFO|Setting lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b up in Southbound
Oct 11 04:55:54 np0005481065 nova_compute[260935]: 2025-10-11 08:55:54.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.818 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[59dcbd26-60e3-4a46-849d-4ee8d47e1b77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 systemd-machined[215705]: New machine qemu-69-instance-0000003e.
Oct 11 04:55:54 np0005481065 systemd-udevd[324748]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:55:54
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 11 04:55:54 np0005481065 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Oct 11 04:55:54 np0005481065 NetworkManager[44960]: <info>  [1760172954.8426] device (tapfca0187a-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:55:54 np0005481065 NetworkManager[44960]: <info>  [1760172954.8446] device (tapfca0187a-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:55:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.861 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2cc58f-322c-406d-ba58-80c3d8d93864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.867 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2989965a-54c5-432c-b42a-b89b41c36a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 NetworkManager[44960]: <info>  [1760172954.8690] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[24bcd841-6866-406a-8966-acca9dca3ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.931 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[581f770f-4217-4da6-be8d-106897641cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:54 np0005481065 NetworkManager[44960]: <info>  [1760172954.9756] device (tap2cb96d57-a0): carrier: link connected
Oct 11 04:55:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.987 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d1ebbe-894a-41e3-b77d-e72cbc5c436e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cad05475-62c8-43b8-885a-43b720e90458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479967, 'reachable_time': 39345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324778, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.035 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[507fe403-f5e3-4f7d-a7e3-e5bc4c957fd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479967, 'tstamp': 479967}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324779, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.064 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b345842f-6491-48b3-805c-d56258a72a1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479967, 'reachable_time': 39345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324780, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5efe06a6-a6f1-4609-83cf-8c321c91ae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.210 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e741887e-80dc-4bcb-acc4-7c36373cf036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.212 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.212 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.213 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:55 np0005481065 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 04:55:55 np0005481065 NetworkManager[44960]: <info>  [1760172955.2659] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.268 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:55 np0005481065 ovn_controller[152945]: 2025-10-11T08:55:55Z|00536|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.271 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.272 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8d29b5-5683-4ffc-9937-5ac9cf3600f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.273 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 04:55:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.274 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.501 2 DEBUG nova.compute.manager [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.502 2 DEBUG oslo_concurrency.lockutils [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.502 2 DEBUG oslo_concurrency.lockutils [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.503 2 DEBUG oslo_concurrency.lockutils [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.503 2 DEBUG nova.compute.manager [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Processing event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:55:55 np0005481065 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.792 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Successfully updated port: 7bc371fa-443a-4188-ace2-2837e3709136 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:55:55 np0005481065 podman[324856]: 2025-10-11 08:55:55.716469715 +0000 UTC m=+0.037455349 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.807 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.807 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.808 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:55:55 np0005481065 podman[324856]: 2025-10-11 08:55:55.928340496 +0000 UTC m=+0.249326100 container create dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 04:55:55 np0005481065 nova_compute[260935]: 2025-10-11 08:55:55.986 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:55:56 np0005481065 systemd[1]: Started libpod-conmon-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d.scope.
Oct 11 04:55:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 339 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 279 op/s
Oct 11 04:55:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 04:55:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb611fd86e7d9e583b61223ad1390858d6dcde7efb0b54c1de6f4a299db85d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.071 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.072 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172956.0704134, bb58fb30-73c8-457a-9293-2d07d0015a46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.072 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Started (Lifecycle Event)
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.097 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.103 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.104 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.108 2 INFO nova.virt.libvirt.driver [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance spawned successfully.
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.108 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.129 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.130 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172956.070837, bb58fb30-73c8-457a-9293-2d07d0015a46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.130 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Paused (Lifecycle Event)
Oct 11 04:55:56 np0005481065 podman[324856]: 2025-10-11 08:55:56.137377917 +0000 UTC m=+0.458363531 container init dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.142 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.143 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:56 np0005481065 podman[324856]: 2025-10-11 08:55:56.144218462 +0000 UTC m=+0.465204056 container start dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.144 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.144 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.144 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.145 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:55:56 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : New worker (324877) forked
Oct 11 04:55:56 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : Loading success.
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.168 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.180 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172956.075883, bb58fb30-73c8-457a-9293-2d07d0015a46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.181 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Resumed (Lifecycle Event)
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.206 2 INFO nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 8.58 seconds to spawn the instance on the hypervisor.
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.207 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.211 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.261 2 INFO nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 9.72 seconds to build instance.
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.280 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:56 np0005481065 nova_compute[260935]: 2025-10-11 08:55:56.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:57 np0005481065 nova_compute[260935]: 2025-10-11 08:55:57.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:55:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 353 op/s
Oct 11 04:55:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.430 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.431 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.431 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.432 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.432 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] No waiting events found dispatching network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.433 2 WARNING nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received unexpected event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for instance with vm_state active and task_state None.
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.433 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-changed-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.434 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing instance network info cache due to event network-changed-7bc371fa-443a-4188-ace2-2837e3709136. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:55:58 np0005481065 nova_compute[260935]: 2025-10-11 08:55:58.434 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.124 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.161 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.161 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance network_info: |[{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.162 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.162 2 DEBUG nova.network.neutron [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.167 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start _get_guest_xml network_info=[{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.175 2 WARNING nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.182 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.183 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.188 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.189 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.189 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.189 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.192 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.194 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.384 2 INFO nova.compute.manager [None req-894bc8a0-d0e1-4167-b303-3bb53f41cf69 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Pausing#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.385 2 DEBUG nova.objects.instance [None req-894bc8a0-d0e1-4167-b303-3bb53f41cf69 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'flavor' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.423 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172959.4231842, bb58fb30-73c8-457a-9293-2d07d0015a46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.424 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.426 2 DEBUG nova.compute.manager [None req-894bc8a0-d0e1-4167-b303-3bb53f41cf69 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.456 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.468 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:55:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:55:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603621127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.669 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.706 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:55:59 np0005481065 nova_compute[260935]: 2025-10-11 08:55:59.710 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 04:56:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:56:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039512440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.181 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.184 2 DEBUG nova.virt.libvirt.vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:50Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.185 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.186 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.189 2 DEBUG nova.objects.instance [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'pci_devices' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.212 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <uuid>d80189d8-28e4-440b-8aed-b43c62f59dd7</uuid>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <name>instance-0000003f</name>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:name>tempest-SecurityGroupsTestJSON-server-136419371</nova:name>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:55:59</nova:creationTime>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:user uuid="a04e2908f5a54c8f98bee8d0faf3e658">tempest-SecurityGroupsTestJSON-2086563292-project-member</nova:user>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:project uuid="3a6a3cc2a54f4a9bafcdc1304f07944b">tempest-SecurityGroupsTestJSON-2086563292</nova:project>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <nova:port uuid="7bc371fa-443a-4188-ace2-2837e3709136">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <entry name="serial">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <entry name="uuid">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:76:54:8b"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <target dev="tap7bc371fa-44"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/console.log" append="off"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:56:00 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:56:00 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:56:00 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:56:00 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.214 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Preparing to wait for external event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.215 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.215 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.215 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.216 2 DEBUG nova.virt.libvirt.vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:50Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.217 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.217 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.218 2 DEBUG os_vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bc371fa-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7bc371fa-44, col_values=(('external_ids', {'iface-id': '7bc371fa-443a-4188-ace2-2837e3709136', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:54:8b', 'vm-uuid': 'd80189d8-28e4-440b-8aed-b43c62f59dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 NetworkManager[44960]: <info>  [1760172960.2295] manager: (tap7bc371fa-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.240 2 INFO os_vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44')#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.314 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.315 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.315 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No VIF found with MAC fa:16:3e:76:54:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.316 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Using config drive#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.369 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:00 np0005481065 podman[324950]: 2025-10-11 08:56:00.404411659 +0000 UTC m=+0.108302289 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.407 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.408 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.427 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.467 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.468 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.468 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.468 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.469 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.470 2 INFO nova.compute.manager [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Terminating instance#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.472 2 DEBUG nova.compute.manager [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 04:56:00 np0005481065 kernel: tapfca0187a-40 (unregistering): left promiscuous mode
Oct 11 04:56:00 np0005481065 NetworkManager[44960]: <info>  [1760172960.5246] device (tapfca0187a-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.525 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.526 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.537 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.537 2 INFO nova.compute.claims [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:56:00 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:00Z|00537|binding|INFO|Releasing lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b from this chassis (sb_readonly=0)
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:00Z|00538|binding|INFO|Setting lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b down in Southbound
Oct 11 04:56:00 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:00Z|00539|binding|INFO|Removing iface tapfca0187a-40 ovn-installed in OVS
Oct 11 04:56:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.591 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:ad 10.100.0.9'], port_security=['fa:16:3e:9f:3c:ad 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bb58fb30-73c8-457a-9293-2d07d0015a46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:56:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.593 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.594 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 04:56:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50f32637-85d8-4843-a990-cc824bf4e3a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.597 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore#033[00m
Oct 11 04:56:00 np0005481065 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 4.264s CPU time.
Oct 11 04:56:00 np0005481065 systemd-machined[215705]: Machine qemu-69-instance-0000003e terminated.
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.734 2 INFO nova.virt.libvirt.driver [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance destroyed successfully.#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.734 2 DEBUG nova.objects.instance [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.754 2 DEBUG nova.virt.libvirt.vif [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-254064893',display_name='tempest-DeleteServersTestJSON-server-254064893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-254064893',id=62,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-k16b35q4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:59Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=bb58fb30-73c8-457a-9293-2d07d0015a46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.755 2 DEBUG nova.network.os_vif_util [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.756 2 DEBUG nova.network.os_vif_util [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.756 2 DEBUG os_vif [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfca0187a-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.778 2 INFO os_vif [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40')#033[00m
Oct 11 04:56:00 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : haproxy version is 2.8.14-c23fe91
Oct 11 04:56:00 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : path to executable is /usr/sbin/haproxy
Oct 11 04:56:00 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [WARNING]  (324875) : Exiting Master process...
Oct 11 04:56:00 np0005481065 nova_compute[260935]: 2025-10-11 08:56:00.840 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:00 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [ALERT]    (324875) : Current worker (324877) exited with code 143 (Terminated)
Oct 11 04:56:00 np0005481065 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [WARNING]  (324875) : All workers exited. Exiting... (0)
Oct 11 04:56:00 np0005481065 systemd[1]: libpod-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d.scope: Deactivated successfully.
Oct 11 04:56:00 np0005481065 podman[325022]: 2025-10-11 08:56:00.851847348 +0000 UTC m=+0.087823376 container died dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 04:56:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d-userdata-shm.mount: Deactivated successfully.
Oct 11 04:56:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9cb611fd86e7d9e583b61223ad1390858d6dcde7efb0b54c1de6f4a299db85d1-merged.mount: Deactivated successfully.
Oct 11 04:56:00 np0005481065 podman[325022]: 2025-10-11 08:56:00.923223053 +0000 UTC m=+0.159199081 container cleanup dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 04:56:00 np0005481065 systemd[1]: libpod-conmon-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d.scope: Deactivated successfully.
Oct 11 04:56:01 np0005481065 podman[325071]: 2025-10-11 08:56:01.003613205 +0000 UTC m=+0.050294045 container remove dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e495a16a-7cec-47b5-b61e-fe608cf7bd47]: (4, ('Sat Oct 11 08:56:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d)\ndd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d\nSat Oct 11 08:56:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d)\ndd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f77bcc-08a3-457d-a392-c016df913edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.016 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:01 np0005481065 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.044 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee483bc-2cf9-4b6e-a032-d61fcadef8e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[590a3de4-ddf7-432a-b91e-6137beac5011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.069 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76562b0c-2af0-4e5c-8fec-9cf77911e9cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.091 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8002bc5a-7982-4606-ad38-a1ef48ab2018]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479955, 'reachable_time': 37254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325107, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.093 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.093 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[de80a685-e6cb-443d-a2c3-e11692e900d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.248 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Creating config drive at /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.254 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5npneld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.311 2 DEBUG nova.compute.manager [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-unplugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.312 2 DEBUG oslo_concurrency.lockutils [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.312 2 DEBUG oslo_concurrency.lockutils [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.312 2 DEBUG oslo_concurrency.lockutils [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.313 2 DEBUG nova.compute.manager [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] No waiting events found dispatching network-vif-unplugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.313 2 DEBUG nova.compute.manager [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-unplugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.357 2 INFO nova.virt.libvirt.driver [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deleting instance files /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46_del#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.357 2 INFO nova.virt.libvirt.driver [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deletion of /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46_del complete#033[00m
Oct 11 04:56:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:56:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288032189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.389 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.395 2 DEBUG nova.compute.provider_tree [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.419 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5npneld" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.444 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.448 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.497 2 DEBUG nova.scheduler.client.report [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:56:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:01Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:a4:99 10.100.0.5
Oct 11 04:56:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:01Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:a4:99 10.100.0.5
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.595 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.596 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.603 2 DEBUG nova.network.neutron [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updated VIF entry in instance network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.604 2 DEBUG nova.network.neutron [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.614 2 INFO nova.compute.manager [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.615 2 DEBUG oslo.service.loopingcall [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.615 2 DEBUG nova.compute.manager [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.616 2 DEBUG nova.network.neutron [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.632 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.633 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.634 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Deleting local config drive /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config because it was imported into RBD.#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.663 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.663 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.680 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.709 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:56:01 np0005481065 kernel: tap7bc371fa-44: entered promiscuous mode
Oct 11 04:56:01 np0005481065 NetworkManager[44960]: <info>  [1760172961.7150] manager: (tap7bc371fa-44): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Oct 11 04:56:01 np0005481065 systemd-udevd[324994]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:56:01 np0005481065 NetworkManager[44960]: <info>  [1760172961.7418] device (tap7bc371fa-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:56:01 np0005481065 NetworkManager[44960]: <info>  [1760172961.7426] device (tap7bc371fa-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:56:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:01Z|00540|binding|INFO|Claiming lport 7bc371fa-443a-4188-ace2-2837e3709136 for this chassis.
Oct 11 04:56:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:01Z|00541|binding|INFO|7bc371fa-443a-4188-ace2-2837e3709136: Claiming fa:16:3e:76:54:8b 10.100.0.14
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:01Z|00542|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 ovn-installed in OVS
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:01 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:01Z|00543|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 up in Southbound
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.778 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:54:8b 10.100.0.14'], port_security=['fa:16:3e:76:54:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd80189d8-28e4-440b-8aed-b43c62f59dd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7bc371fa-443a-4188-ace2-2837e3709136) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.780 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7bc371fa-443a-4188-ace2-2837e3709136 in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b bound to our chassis#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.783 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b#033[00m
Oct 11 04:56:01 np0005481065 systemd-machined[215705]: New machine qemu-70-instance-0000003f.
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.806 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74d177bc-cd78-454a-b493-502560520107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.855 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d982f1e6-a125-46b9-8417-5e5ca9d5b4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.860 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6941660d-9abb-4cac-89ba-ab712aadacf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.863 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.864 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.865 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Creating image(s)#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.894 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.908 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1769ca-f784-422b-b1e0-29efe52e6407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.932 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.937 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90b0e028-0e04-4dfa-8349-0c04661eab26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325209, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a027dc9-f656-4f71-87fc-a88d017157b0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477793, 'tstamp': 477793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325213, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477797, 'tstamp': 477797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325213, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.967 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.973 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.973 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.972 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:01 np0005481065 nova_compute[260935]: 2025-10-11 08:56:01.978 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.075 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.076 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.077 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.077 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.106 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.111 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.401 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.439 2 DEBUG nova.policy [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.480 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.575 2 DEBUG nova.objects.instance [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.595 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.596 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Ensure instance console log exists: /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.596 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.597 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.597 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.796 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172962.796125, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.797 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Started (Lifecycle Event)#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.819 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.826 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172962.796411, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.827 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.846 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.852 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:56:02 np0005481065 nova_compute[260935]: 2025-10-11 08:56:02.874 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.242 2 DEBUG nova.compute.manager [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG oslo_concurrency.lockutils [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG oslo_concurrency.lockutils [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG oslo_concurrency.lockutils [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG nova.compute.manager [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Processing event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.244 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.250 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172963.2499874, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.251 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Resumed (Lifecycle Event)
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.253 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.261 2 INFO nova.virt.libvirt.driver [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance spawned successfully.
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.261 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.289 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.302 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.330 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.332 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.332 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.333 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.334 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.335 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.343 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 04:56:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.367 2 DEBUG nova.compute.manager [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.368 2 DEBUG oslo_concurrency.lockutils [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.368 2 DEBUG oslo_concurrency.lockutils [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.369 2 DEBUG oslo_concurrency.lockutils [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.369 2 DEBUG nova.compute.manager [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] No waiting events found dispatching network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.370 2 WARNING nova.compute.manager [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received unexpected event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for instance with vm_state paused and task_state deleting.
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.395 2 INFO nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Took 12.67 seconds to spawn the instance on the hypervisor.
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.396 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.463 2 INFO nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Took 13.82 seconds to build instance.
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.480 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.548 2 DEBUG nova.network.neutron [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.565 2 INFO nova.compute.manager [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 1.95 seconds to deallocate network for instance.
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.605 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.606 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 04:56:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:03.630 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 04:56:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:03.632 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.696 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Successfully created port: 92faeec9-cc08-45c5-84b3-7191f39c6339 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 04:56:03 np0005481065 nova_compute[260935]: 2025-10-11 08:56:03.741 2 DEBUG oslo_concurrency.processutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 372 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.5 MiB/s wr, 322 op/s
Oct 11 04:56:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:56:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608763001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.226 2 DEBUG oslo_concurrency.processutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.238 2 DEBUG nova.compute.provider_tree [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.256 2 DEBUG nova.scheduler.client.report [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.274 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.304 2 INFO nova.scheduler.client.report [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance bb58fb30-73c8-457a-9293-2d07d0015a46
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.366 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.560 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Successfully updated port: 92faeec9-cc08-45c5-84b3-7191f39c6339 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.576 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.577 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.577 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029680962875236047 of space, bias 1.0, pg target 0.8904288862570814 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 04:56:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 04:56:04 np0005481065 nova_compute[260935]: 2025-10-11 08:56:04.791 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.332 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.333 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.333 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.334 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.334 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.334 2 WARNING nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state None.
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.335 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-deleted-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.335 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-changed-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.336 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Refreshing instance network info cache due to event network-changed-92faeec9-cc08-45c5-84b3-7191f39c6339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.336 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.608 2 DEBUG nova.compute.manager [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-changed-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.609 2 DEBUG nova.compute.manager [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing instance network info cache due to event network-changed-7bc371fa-443a-4188-ace2-2837e3709136. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.609 2 DEBUG oslo_concurrency.lockutils [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.610 2 DEBUG oslo_concurrency.lockutils [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.610 2 DEBUG nova.network.neutron [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.724 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updating instance_info_cache with network_info: [{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.743 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.744 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance network_info: |[{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.744 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.745 2 DEBUG nova.network.neutron [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Refreshing network info cache for port 92faeec9-cc08-45c5-84b3-7191f39c6339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.751 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start _get_guest_xml network_info=[{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.759 2 WARNING nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.767 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.768 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.774 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.775 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.775 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.775 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.777 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.777 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.778 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.778 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.778 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.779 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.779 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.779 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.780 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.780 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:56:05 np0005481065 nova_compute[260935]: 2025-10-11 08:56:05.785 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:05 np0005481065 podman[325409]: 2025-10-11 08:56:05.81889378 +0000 UTC m=+0.113018754 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 04:56:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 372 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Oct 11 04:56:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:56:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208195544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.357 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.389 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.396 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:56:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796411927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.879 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.881 2 DEBUG nova.virt.libvirt.vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-865090442',display_name='tempest-ServersTestJSON-server-865090442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-865090442',id=64,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-9tcw9lxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:01Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.882 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.884 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.886 2 DEBUG nova.objects.instance [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.905 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <uuid>ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc</uuid>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <name>instance-00000040</name>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersTestJSON-server-865090442</nova:name>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:56:05</nova:creationTime>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <nova:port uuid="92faeec9-cc08-45c5-84b3-7191f39c6339">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <entry name="serial">ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc</entry>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <entry name="uuid">ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc</entry>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </source>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      </auth>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </disk>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:22:da:61"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <target dev="tap92faeec9-cc"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </interface>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/console.log" append="off"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </serial>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <video>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </video>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </rng>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 04:56:06 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 04:56:06 np0005481065 nova_compute[260935]:  </devices>
Oct 11 04:56:06 np0005481065 nova_compute[260935]: </domain>
Oct 11 04:56:06 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.908 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Preparing to wait for external event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.909 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.910 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.910 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.912 2 DEBUG nova.virt.libvirt.vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-865090442',display_name='tempest-ServersTestJSON-server-865090442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-865090442',id=64,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-9tcw9lxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:01Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.912 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.913 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.914 2 DEBUG os_vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92faeec9-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92faeec9-cc, col_values=(('external_ids', {'iface-id': '92faeec9-cc08-45c5-84b3-7191f39c6339', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:da:61', 'vm-uuid': 'ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:06 np0005481065 NetworkManager[44960]: <info>  [1760172966.9286] manager: (tap92faeec9-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:06 np0005481065 nova_compute[260935]: 2025-10-11 08:56:06.934 2 INFO os_vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc')#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.009 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.010 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.011 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:22:da:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.012 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Using config drive#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.045 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.054 2 DEBUG nova.network.neutron [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updated VIF entry in instance network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.055 2 DEBUG nova.network.neutron [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.085 2 DEBUG oslo_concurrency.lockutils [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.092 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.093 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.093 2 INFO nova.compute.manager [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Rebooting instance#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.109 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.109 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.110 2 DEBUG nova.network.neutron [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.200 2 DEBUG nova.network.neutron [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updated VIF entry in instance network info cache for port 92faeec9-cc08-45c5-84b3-7191f39c6339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.201 2 DEBUG nova.network.neutron [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updating instance_info_cache with network_info: [{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.214 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.359 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Creating config drive at /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.368 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb5dx60x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.520 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb5dx60x" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.580 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.586 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.745 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.746 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.763 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.768 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.770 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Deleting local config drive /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config because it was imported into RBD.#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.837 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.837 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.848 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.849 2 INFO nova.compute.claims [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 04:56:07 np0005481065 kernel: tap92faeec9-cc: entered promiscuous mode
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:07Z|00544|binding|INFO|Claiming lport 92faeec9-cc08-45c5-84b3-7191f39c6339 for this chassis.
Oct 11 04:56:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:07Z|00545|binding|INFO|92faeec9-cc08-45c5-84b3-7191f39c6339: Claiming fa:16:3e:22:da:61 10.100.0.10
Oct 11 04:56:07 np0005481065 NetworkManager[44960]: <info>  [1760172967.8572] manager: (tap92faeec9-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.864 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:da:61 10.100.0.10'], port_security=['fa:16:3e:22:da:61 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=92faeec9-cc08-45c5-84b3-7191f39c6339) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.866 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 92faeec9-cc08-45c5-84b3-7191f39c6339 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis#033[00m
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.870 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498#033[00m
Oct 11 04:56:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:07Z|00546|binding|INFO|Setting lport 92faeec9-cc08-45c5-84b3-7191f39c6339 ovn-installed in OVS
Oct 11 04:56:07 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:07Z|00547|binding|INFO|Setting lport 92faeec9-cc08-45c5-84b3-7191f39c6339 up in Southbound
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:07 np0005481065 nova_compute[260935]: 2025-10-11 08:56:07.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.905 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67c46097-e63e-47bf-86f0-746a4b43a0f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:07 np0005481065 systemd-machined[215705]: New machine qemu-71-instance-00000040.
Oct 11 04:56:07 np0005481065 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Oct 11 04:56:07 np0005481065 systemd-udevd[325565]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.951 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3d2261-8ef1-4f5a-b9c4-719eef8d4202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.955 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[813431b6-052f-4592-88b6-394b569fcadf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:07 np0005481065 NetworkManager[44960]: <info>  [1760172967.9748] device (tap92faeec9-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 04:56:07 np0005481065 NetworkManager[44960]: <info>  [1760172967.9760] device (tap92faeec9-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 04:56:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.998 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7f5207-e4a4-41c5-9ddb-9130761783bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 259 op/s
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd6bff6-2063-4f71-9237-afd9cfa950bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325575, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.043 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb85dd3-8815-4bf1-9e64-63d233627953]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325576, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325576, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.045 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.049 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.050 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.075 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.323 2 DEBUG nova.network.neutron [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.343 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.346 2 DEBUG nova.compute.manager [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.351 2 DEBUG nova.compute.manager [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.404 2 INFO nova.compute.manager [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] instance snapshotting#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.406 2 DEBUG nova.objects.instance [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'flavor' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.415 2 DEBUG nova.compute.manager [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.415 2 DEBUG oslo_concurrency.lockutils [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.415 2 DEBUG oslo_concurrency.lockutils [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.416 2 DEBUG oslo_concurrency.lockutils [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.416 2 DEBUG nova.compute.manager [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Processing event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 04:56:08 np0005481065 kernel: tap7bc371fa-44 (unregistering): left promiscuous mode
Oct 11 04:56:08 np0005481065 NetworkManager[44960]: <info>  [1760172968.5072] device (tap7bc371fa-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:08Z|00548|binding|INFO|Releasing lport 7bc371fa-443a-4188-ace2-2837e3709136 from this chassis (sb_readonly=0)
Oct 11 04:56:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:08Z|00549|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 down in Southbound
Oct 11 04:56:08 np0005481065 ovn_controller[152945]: 2025-10-11T08:56:08Z|00550|binding|INFO|Removing iface tap7bc371fa-44 ovn-installed in OVS
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.526 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:54:8b 10.100.0.14'], port_security=['fa:16:3e:76:54:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd80189d8-28e4-440b-8aed-b43c62f59dd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e0385938-baf4-4634-ae20-542a58f27831 e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7bc371fa-443a-4188-ace2-2837e3709136) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.527 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7bc371fa-443a-4188-ace2-2837e3709136 in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b unbound from our chassis#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.531 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b#033[00m
Oct 11 04:56:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 04:56:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488982851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f8c950-5ac2-4625-9bb2-b859184bf84f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.558 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.565 2 DEBUG nova.compute.provider_tree [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 04:56:08 np0005481065 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 11 04:56:08 np0005481065 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 6.217s CPU time.
Oct 11 04:56:08 np0005481065 systemd-machined[215705]: Machine qemu-70-instance-0000003f terminated.
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.592 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bb333247-b690-4bd0-98af-dd6aec411cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.595 2 DEBUG nova.scheduler.client.report [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.598 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa54d482-825a-4247-b5be-4db72bd739ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.621 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.622 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.634 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4b5e09-d30a-4e93-bf57-0fda9a7156cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd259d4-1020-497d-a9b5-4b82122c4910]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325650, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.674 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.674 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.679 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caab2cef-4f0e-4912-b225-b6dd2f505c39]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477793, 'tstamp': 477793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325651, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477797, 'tstamp': 477797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325651, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.681 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.690 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.691 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.691 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.692 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.697 2 INFO nova.virt.libvirt.driver [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Beginning live snapshot process#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.701 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.705 2 INFO nova.virt.libvirt.driver [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance destroyed successfully.#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.705 2 DEBUG nova.objects.instance [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'resources' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.720 2 DEBUG nova.virt.libvirt.vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.720 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.723 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.723 2 DEBUG os_vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.726 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bc371fa-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.728 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.734 2 DEBUG nova.compute.manager [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.734 2 DEBUG oslo_concurrency.lockutils [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.734 2 DEBUG oslo_concurrency.lockutils [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.735 2 DEBUG oslo_concurrency.lockutils [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.735 2 DEBUG nova.compute.manager [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.735 2 WARNING nova.compute.manager [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.736 2 INFO os_vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44')#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.742 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start _get_guest_xml network_info=[{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.746 2 WARNING nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.752 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.753 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.759 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.760 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.760 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.760 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.761 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.761 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.761 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.763 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.763 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.763 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.764 2 DEBUG nova.objects.instance [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'vcpu_model' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.794 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.838 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.841 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.841 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Creating image(s)#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.865 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.888 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.911 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:08 np0005481065 nova_compute[260935]: 2025-10-11 08:56:08.916 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.008 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172968.908331, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.009 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Started (Lifecycle Event)#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.015 2 DEBUG nova.policy [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.019 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.021 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.030 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.031 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.031 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.051 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.054 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.093 2 DEBUG nova.virt.libvirt.imagebackend [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.098 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.100 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.107 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.110 2 INFO nova.virt.libvirt.driver [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance spawned successfully.#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.110 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.137 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.138 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172968.9086003, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.138 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Paused (Lifecycle Event)#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.159 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.160 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.160 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.161 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.162 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.163 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.179 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.185 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172969.0239458, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.186 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Resumed (Lifecycle Event)#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.221 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.226 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.273 2 INFO nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Took 7.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.274 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.280 2 DEBUG nova.storage.rbd_utils [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(afee575312834d4e959990ab1d1ecca1) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.327 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.338 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:56:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913637587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.406 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.445 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.501 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.544 2 INFO nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Took 9.05 seconds to build instance.#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.566 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.611 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Successfully created port: 37653b55-c083-4108-a42f-4bbe7778f058 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.621 2 DEBUG nova.objects.instance [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.636 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.636 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Ensure instance console log exists: /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.636 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.637 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.637 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 04:56:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 04:56:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/394974751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.981 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.983 2 DEBUG nova.virt.libvirt.vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.984 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.985 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 04:56:09 np0005481065 nova_compute[260935]: 2025-10-11 08:56:09.987 2 DEBUG nova.objects.instance [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'pci_devices' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 04:56:10 np0005481065 nova_compute[260935]: 2025-10-11 08:56:10.005 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <uuid>d80189d8-28e4-440b-8aed-b43c62f59dd7</uuid>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <name>instance-0000003f</name>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-SecurityGroupsTestJSON-server-136419371</nova:name>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 08:56:08</nova:creationTime>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:user uuid="a04e2908f5a54c8f98bee8d0faf3e658">tempest-SecurityGroupsTestJSON-2086563292-project-member</nova:user>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:project uuid="3a6a3cc2a54f4a9bafcdc1304f07944b">tempest-SecurityGroupsTestJSON-2086563292</nova:project>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <nova:port uuid="7bc371fa-443a-4188-ace2-2837e3709136">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <entry name="serial">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <entry name="uuid">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 04:56:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk">
Oct 11 04:56:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:00:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.970 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:00:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.971 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc bound to our chassis#033[00m
Oct 11 05:00:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.973 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc#033[00m
Oct 11 05:00:44 np0005481065 systemd-machined[215705]: New machine qemu-97-instance-00000054.
Oct 11 05:00:44 np0005481065 systemd[1]: Started Virtual Machine qemu-97-instance-00000054.
Oct 11 05:00:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:44Z|00778|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd ovn-installed in OVS
Oct 11 05:00:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:44Z|00779|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd up in Southbound
Oct 11 05:00:44 np0005481065 nova_compute[260935]: 2025-10-11 09:00:44.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:44 np0005481065 nova_compute[260935]: 2025-10-11 09:00:44.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.994 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[887c9808-4298-41bd-b562-064278badeca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.026 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[39b4ef0a-8e50-44ee-bc56-d130da2bca5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.029 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c2e310-382f-4722-a514-2911be86046b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.068 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[47928288-143e-4028-a065-effe0f1dcc6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06f6008b-5962-4be3-9ad2-813ed6e1945e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343688, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[65a5c6ee-7b42-4979-a148-0e8ccba449f3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343689, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343689, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.107 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.111 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:00:45 np0005481065 rsyslogd[1003]: imjournal: 13554 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.556 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for b75d8ded-515b-48ff-a6b6-28df88878996 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.556 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173245.555472, b75d8ded-515b-48ff-a6b6-28df88878996 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.557 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.569 2 DEBUG nova.compute.manager [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.585 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.590 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:00:45 np0005481065 podman[343692]: 2025-10-11 09:00:45.770088983 +0000 UTC m=+0.076198324 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:00:45 np0005481065 podman[343693]: 2025-10-11 09:00:45.808584941 +0000 UTC m=+0.114702752 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.884 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.886 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173245.5641122, b75d8ded-515b-48ff-a6b6-28df88878996 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.886 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Started (Lifecycle Event)#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.912 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:45 np0005481065 nova_compute[260935]: 2025-10-11 09:00:45.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:00:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 315 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.4 MiB/s wr, 81 op/s
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.427 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173246.4262223, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.427 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Started (Lifecycle Event)#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.461 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.468 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173246.4264927, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.468 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.493 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.499 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.524 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.820 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.821 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing instance network info cache due to event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:00:46 np0005481065 nova_compute[260935]: 2025-10-11 09:00:46.822 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.223 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.254 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.255 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance network_info: |[{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.255 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.256 2 DEBUG nova.network.neutron [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.261 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Start _get_guest_xml network_info=[{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.267 2 WARNING nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.273 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.274 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.282 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.283 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.284 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.285 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.286 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.286 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.287 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.287 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.288 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.288 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.289 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.289 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.290 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.290 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.296 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:00:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540332784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.830 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.865 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:47 np0005481065 nova_compute[260935]: 2025-10-11 09:00:47.870 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 183 op/s
Oct 11 05:00:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:00:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669296274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.298 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.300 2 DEBUG nova.virt.libvirt.vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.300 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.301 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.302 2 DEBUG nova.objects.instance [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.320 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <uuid>52be16b4-343a-4fd4-9041-39069a1fde2a</uuid>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <name>instance-00000056</name>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerActionsTestJSON-server-1975294956</nova:name>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:00:47</nova:creationTime>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:user uuid="2e604a7f01ba42f8a2f2a90bf14cafba">tempest-ServerActionsTestJSON-332398676-project-member</nova:user>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:project uuid="0ba95f2514ce4fe4b00f245335eaeb01">tempest-ServerActionsTestJSON-332398676</nova:project>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:00:48 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:00:48 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <nova:port uuid="c992d6e3-ef59-42a0-80c5-109fe0c056cd">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <entry name="serial">52be16b4-343a-4fd4-9041-39069a1fde2a</entry>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <entry name="uuid">52be16b4-343a-4fd4-9041-39069a1fde2a</entry>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/52be16b4-343a-4fd4-9041-39069a1fde2a_disk">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:d3:b5:ce"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <target dev="tapc992d6e3-ef"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/console.log" append="off"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:00:48 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:00:48 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:00:48 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:00:48 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Preparing to wait for external event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.331 2 DEBUG nova.virt.libvirt.vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.331 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.332 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.332 2 DEBUG os_vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc992d6e3-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc992d6e3-ef, col_values=(('external_ids', {'iface-id': 'c992d6e3-ef59-42a0-80c5-109fe0c056cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:b5:ce', 'vm-uuid': '52be16b4-343a-4fd4-9041-39069a1fde2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:48 np0005481065 NetworkManager[44960]: <info>  [1760173248.3409] manager: (tapc992d6e3-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.352 2 INFO os_vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef')#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.406 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.407 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.407 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] No VIF found with MAC fa:16:3e:d3:b5:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.408 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Using config drive#033[00m
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.438 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:48 np0005481065 nova_compute[260935]: 2025-10-11 09:00:48.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.156 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Creating config drive at /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.165 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcischal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.331 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcischal" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.369 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.375 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.438 2 DEBUG nova.network.neutron [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated VIF entry in instance network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.439 2 DEBUG nova.network.neutron [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.457 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.458 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.458 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.459 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.459 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.460 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] No waiting events found dispatching network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.460 2 WARNING nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received unexpected event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b for instance with vm_state rescued and task_state None.#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.461 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.461 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.462 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.462 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.462 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Processing event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.463 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.463 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.464 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.464 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.465 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.465 2 WARNING nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuild_spawning.#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.466 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.470 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173249.4698424, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.470 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.473 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.477 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance spawned successfully.#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.477 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.496 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.508 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.513 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.514 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.514 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.515 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.515 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.516 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.543 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.582 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.603 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.604 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Deleting local config drive /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config because it was imported into RBD.#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.646 2 INFO nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] bringing vm to original state: 'stopped'#033[00m
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.6757] manager: (tapc992d6e3-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct 11 05:00:49 np0005481065 kernel: tapc992d6e3-ef: entered promiscuous mode
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00780|binding|INFO|Claiming lport c992d6e3-ef59-42a0-80c5-109fe0c056cd for this chassis.
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00781|binding|INFO|c992d6e3-ef59-42a0-80c5-109fe0c056cd: Claiming fa:16:3e:d3:b5:ce 10.100.0.14
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.689 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:b5:ce 10.100.0.14'], port_security=['fa:16:3e:d3:b5:ce 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '52be16b4-343a-4fd4-9041-39069a1fde2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ba95f2514ce4fe4b00f245335eaeb01', 'neutron:revision_number': '2', 'neutron:security_group_ids': '462a25ad-d94b-4af1-ba28-eaa1a993c459', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22e9d786-1ab0-4026-8b17-f42f91b9280f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c992d6e3-ef59-42a0-80c5-109fe0c056cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.691 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c992d6e3-ef59-42a0-80c5-109fe0c056cd in datapath 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 bound to our chassis#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.695 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745#033[00m
Oct 11 05:00:49 np0005481065 systemd-udevd[343915]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f08efd3-b75f-4c50-9fb1-64770a948c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.714 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c40ad6c-61 in ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.716 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c40ad6c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.716 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73479237-8be1-41fc-87bc-bdedcfaafb87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.717 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50a5d27e-b93e-4bf9-b82f-50335d15d82e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.7278] device (tapc992d6e3-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.7288] device (tapc992d6e3-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.730 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.731 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.731 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00782|binding|INFO|Setting lport c992d6e3-ef59-42a0-80c5-109fe0c056cd ovn-installed in OVS
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00783|binding|INFO|Setting lport c992d6e3-ef59-42a0-80c5-109fe0c056cd up in Southbound
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 systemd-machined[215705]: New machine qemu-98-instance-00000056.
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.739 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[83206be6-1440-465a-838a-d134e0673d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.748 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 11 05:00:49 np0005481065 systemd[1]: Started Virtual Machine qemu-98-instance-00000056.
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.772 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bac42852-7c34-46fb-8526-ef9aba0e8fc9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 kernel: tapbcfaf217-87 (unregistering): left promiscuous mode
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.8056] device (tapbcfaf217-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00784|binding|INFO|Releasing lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd from this chassis (sb_readonly=0)
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00785|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd down in Southbound
Oct 11 05:00:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:49Z|00786|binding|INFO|Removing iface tapbcfaf217-87 ovn-installed in OVS
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.822 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb89c06-d6d0-4905-b82c-8e70ae47c809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.833 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:00:49 np0005481065 systemd-udevd[343918]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.8369] manager: (tap7c40ad6c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.834 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a453d9-6e04-4b47-ad61-78371dd71fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:49 np0005481065 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 11 05:00:49 np0005481065 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Consumed 1.503s CPU time.
Oct 11 05:00:49 np0005481065 systemd-machined[215705]: Machine qemu-97-instance-00000054 terminated.
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.874 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5e6e4b-d86f-4060-89b1-fefe53a6e573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.877 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d239be38-9b5b-4f5e-a657-a85581e8c84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.9064] device (tap7c40ad6c-60): carrier: link connected
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.916 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0e6b71-bb93-4191-b417-8bee1db24493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.935 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f42b8b33-f4db-4d8d-b053-368d2d68d0aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c40ad6c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:9b:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509460, 'reachable_time': 18721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343954, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.953 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fecc542e-a656-4903-8bbd-57b449b63fee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:9b0f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509460, 'tstamp': 509460}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343955, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.970 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f2e2ff-c249-43e5-81d6-3f0ff3544754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c40ad6c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:9b:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509460, 'reachable_time': 18721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343956, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:49 np0005481065 NetworkManager[44960]: <info>  [1760173249.9770] manager: (tapbcfaf217-87): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.995 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.#033[00m
Oct 11 05:00:49 np0005481065 nova_compute[260935]: 2025-10-11 09:00:49.995 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.006 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d70239ba-5c62-451c-981d-2faff33919f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abc87846-175a-4f3f-a4f6-b36d0ad58192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.086 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c40ad6c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c40ad6c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:50 np0005481065 NetworkManager[44960]: <info>  [1760173250.0906] manager: (tap7c40ad6c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:50 np0005481065 kernel: tap7c40ad6c-60: entered promiscuous mode
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.099 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c40ad6c-60, col_values=(('external_ids', {'iface-id': 'f45dd889-4db0-488b-b6d3-356bc191844e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:00:50Z|00787|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.135 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3beb447-0ef8-461c-b428-f1ca47ec9835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.137 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.pid.haproxy
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.139 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'env', 'PROCESS_TAG=haproxy-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:00:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 183 op/s
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.265 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.318 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.318 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.319 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.415 2 DEBUG nova.compute.manager [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.416 2 DEBUG oslo_concurrency.lockutils [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.416 2 DEBUG oslo_concurrency.lockutils [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.416 2 DEBUG oslo_concurrency.lockutils [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.417 2 DEBUG nova.compute.manager [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Processing event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.419 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:50 np0005481065 podman[344002]: 2025-10-11 09:00:50.570795853 +0000 UTC m=+0.071359976 container create b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:00:50 np0005481065 systemd[1]: Started libpod-conmon-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b.scope.
Oct 11 05:00:50 np0005481065 podman[344002]: 2025-10-11 09:00:50.534018754 +0000 UTC m=+0.034582947 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:00:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:00:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d388700494a60a977f53213e9adaae75b3b656a3610a624cd88ae77b82267d55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:00:50 np0005481065 podman[344002]: 2025-10-11 09:00:50.684712341 +0000 UTC m=+0.185276554 container init b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:00:50 np0005481065 podman[344002]: 2025-10-11 09:00:50.698108803 +0000 UTC m=+0.198672956 container start b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 05:00:50 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : New worker (344056) forked
Oct 11 05:00:50 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : Loading success.
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.787 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.791 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.807 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caa3dafd-d9aa-4f46-bb2c-ad9b63eebbc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.845 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f14a08-b056-438d-b0e1-0f76505368ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.853 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcc5a5c-8522-47e2-9851-adece4e015ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.898 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2be529bd-0c8f-4499-80a3-576c8f281241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.925 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0097c04b-93b7-45dd-ae8b-a5c2fcf22046]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344081, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.956 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4027ec-f001-4601-a5ca-aa9a79b309d4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344082, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344082, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:50 np0005481065 nova_compute[260935]: 2025-10-11 09:00:50.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.969 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.970 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.970 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.201 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.202 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.224 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.312 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.313 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.324 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.325 2 INFO nova.compute.claims [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.399 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173251.399392, 52be16b4-343a-4fd4-9041-39069a1fde2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.401 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] VM Started (Lifecycle Event)#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.405 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.422 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.429 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.448 2 INFO nova.virt.libvirt.driver [-] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance spawned successfully.#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.450 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.454 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.491 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.492 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173251.3994942, 52be16b4-343a-4fd4-9041-39069a1fde2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.492 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.511 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.512 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.512 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.513 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.514 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.514 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.524 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.529 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173251.4118798, 52be16b4-343a-4fd4-9041-39069a1fde2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.529 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.559 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.564 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.589 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.645 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.649 2 INFO nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Took 10.16 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.650 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.734 2 INFO nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Took 11.59 seconds to build instance.#033[00m
Oct 11 05:00:51 np0005481065 nova_compute[260935]: 2025-10-11 09:00:51.758 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:00:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365592419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.112 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.119 2 DEBUG nova.compute.provider_tree [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:00:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 189 op/s
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.163 2 DEBUG nova.scheduler.client.report [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.220 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.221 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.306 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.306 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.333 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.367 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.488 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.490 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.491 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Creating image(s)#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.524 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.565 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.607 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.614 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.674 2 DEBUG nova.policy [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df5a3c3a5d68473aa2e2950de45ebce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '11b44ad9193e4e43838d52056ccf413e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.722 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.723 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.724 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.725 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.758 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:00:52 np0005481065 nova_compute[260935]: 2025-10-11 09:00:52.763 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 15633aee-234a-4417-b5ea-f35f13820404_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:00:52 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.093 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 15633aee-234a-4417-b5ea-f35f13820404_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.177 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] resizing rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.299 2 DEBUG nova.objects.instance [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'migration_context' on Instance uuid 15633aee-234a-4417-b5ea-f35f13820404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.325 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.326 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Ensure instance console log exists: /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.327 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.327 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.328 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.392 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.393 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.393 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.394 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.394 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] No waiting events found dispatching network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.395 2 WARNING nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received unexpected event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd for instance with vm_state active and task_state None.#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.396 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.396 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.397 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.397 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.398 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.398 2 WARNING nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state None.#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.399 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.399 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.400 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.400 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.401 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.402 2 WARNING nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state None.#033[00m
Oct 11 05:00:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:53 np0005481065 nova_compute[260935]: 2025-10-11 09:00:53.925 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Successfully created port: 074db183-8679-40f2-b39d-06759a8dfceb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 205 op/s
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:00:54
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data']
Oct 11 05:00:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.932 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.934 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.934 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.934 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.935 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.937 2 INFO nova.compute.manager [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Terminating instance#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.938 2 DEBUG nova.compute.manager [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.945 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.946 2 DEBUG nova.objects.instance [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'resources' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.977 2 DEBUG nova.virt.libvirt.vif [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:50Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.978 2 DEBUG nova.network.os_vif_util [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.979 2 DEBUG nova.network.os_vif_util [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.980 2 DEBUG os_vif [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcfaf217-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:00:54 np0005481065 nova_compute[260935]: 2025-10-11 09:00:54.991 2 INFO os_vif [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87')#033[00m
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.461 2 DEBUG nova.compute.manager [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.462 2 DEBUG nova.compute.manager [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing instance network info cache due to event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.463 2 DEBUG oslo_concurrency.lockutils [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.464 2 DEBUG oslo_concurrency.lockutils [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.464 2 DEBUG nova.network.neutron [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.514 2 INFO nova.virt.libvirt.driver [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deleting instance files /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_del#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.516 2 INFO nova.virt.libvirt.driver [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deletion of /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_del complete#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.595 2 INFO nova.compute.manager [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.596 2 DEBUG oslo.service.loopingcall [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.597 2 DEBUG nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.598 2 DEBUG nova.network.neutron [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.721 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Successfully updated port: 074db183-8679-40f2-b39d-06759a8dfceb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.743 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.744 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquired lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:00:55 np0005481065 nova_compute[260935]: 2025-10-11 09:00:55.745 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:00:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 935 KiB/s wr, 142 op/s
Oct 11 05:00:56 np0005481065 nova_compute[260935]: 2025-10-11 09:00:56.270 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:00:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 339 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 285 op/s
Oct 11 05:00:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:00:58 np0005481065 nova_compute[260935]: 2025-10-11 09:00:58.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:00:59 np0005481065 nova_compute[260935]: 2025-10-11 09:00:59.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 339 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:01:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b2be673f-a1cd-49c1-af1a-e64def204cbf does not exist
Oct 11 05:01:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7d908ca6-53b7-41fa-a31d-ba7c3e8b20c6 does not exist
Oct 11 05:01:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 64454929-abb3-4677-bf5d-548ab40eaff8 does not exist
Oct 11 05:01:01 np0005481065 nova_compute[260935]: 2025-10-11 09:01:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:01:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 352 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 201 op/s
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.559288188 +0000 UTC m=+0.056133872 container create 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:01:02 np0005481065 systemd[1]: Started libpod-conmon-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope.
Oct 11 05:01:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.536179379 +0000 UTC m=+0.033025073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:01:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:01:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:01:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.643572571 +0000 UTC m=+0.140418255 container init 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.652210457 +0000 UTC m=+0.149056111 container start 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.656094268 +0000 UTC m=+0.152939932 container attach 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:01:02 np0005481065 inspiring_dijkstra[344592]: 167 167
Oct 11 05:01:02 np0005481065 systemd[1]: libpod-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope: Deactivated successfully.
Oct 11 05:01:02 np0005481065 conmon[344592]: conmon 5a17e4d32f870f23355c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope/container/memory.events
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.659960118 +0000 UTC m=+0.156805822 container died 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:01:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-693e0739eed15ba8fb17fdd2654d067539a8e18ad1f92eadf453db4037ad46d8-merged.mount: Deactivated successfully.
Oct 11 05:01:02 np0005481065 podman[344575]: 2025-10-11 09:01:02.706804614 +0000 UTC m=+0.203650268 container remove 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:01:02 np0005481065 systemd[1]: libpod-conmon-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope: Deactivated successfully.
Oct 11 05:01:02 np0005481065 podman[344614]: 2025-10-11 09:01:02.954377384 +0000 UTC m=+0.065062767 container create 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 11 05:01:03 np0005481065 systemd[1]: Started libpod-conmon-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope.
Oct 11 05:01:03 np0005481065 podman[344614]: 2025-10-11 09:01:02.928533387 +0000 UTC m=+0.039218810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:01:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:01:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:03 np0005481065 podman[344614]: 2025-10-11 09:01:03.058071929 +0000 UTC m=+0.168757272 container init 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:01:03 np0005481065 podman[344614]: 2025-10-11 09:01:03.069857615 +0000 UTC m=+0.180542958 container start 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 05:01:03 np0005481065 podman[344614]: 2025-10-11 09:01:03.074359404 +0000 UTC m=+0.185044777 container attach 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:01:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:01:03Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:b5:ce 10.100.0.14
Oct 11 05:01:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:01:03Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:b5:ce 10.100.0.14
Oct 11 05:01:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:03 np0005481065 nova_compute[260935]: 2025-10-11 09:01:03.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 362 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 214 op/s
Oct 11 05:01:04 np0005481065 mystifying_lamarr[344631]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:01:04 np0005481065 mystifying_lamarr[344631]: --> relative data size: 1.0
Oct 11 05:01:04 np0005481065 mystifying_lamarr[344631]: --> All data devices are unavailable
Oct 11 05:01:04 np0005481065 systemd[1]: libpod-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope: Deactivated successfully.
Oct 11 05:01:04 np0005481065 podman[344614]: 2025-10-11 09:01:04.397502254 +0000 UTC m=+1.508187607 container died 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:01:04 np0005481065 systemd[1]: libpod-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope: Consumed 1.278s CPU time.
Oct 11 05:01:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9-merged.mount: Deactivated successfully.
Oct 11 05:01:04 np0005481065 podman[344614]: 2025-10-11 09:01:04.466631655 +0000 UTC m=+1.577317008 container remove 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:01:04 np0005481065 systemd[1]: libpod-conmon-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope: Deactivated successfully.
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029572858671941134 of space, bias 1.0, pg target 0.887185760158234 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:01:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:01:04 np0005481065 nova_compute[260935]: 2025-10-11 09:01:04.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:04 np0005481065 nova_compute[260935]: 2025-10-11 09:01:04.991 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173249.9902055, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:01:04 np0005481065 nova_compute[260935]: 2025-10-11 09:01:04.991 2 INFO nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.323153369 +0000 UTC m=+0.035900645 container create 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:01:05 np0005481065 systemd[1]: Started libpod-conmon-6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e.scope.
Oct 11 05:01:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.309363796 +0000 UTC m=+0.022111092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.407575436 +0000 UTC m=+0.120322732 container init 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.417921231 +0000 UTC m=+0.130668527 container start 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.421135623 +0000 UTC m=+0.133882919 container attach 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:01:05 np0005481065 youthful_ishizaka[344829]: 167 167
Oct 11 05:01:05 np0005481065 systemd[1]: libpod-6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e.scope: Deactivated successfully.
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.424827438 +0000 UTC m=+0.137574714 container died 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:01:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1b5901b482c3972a01413ea0df29e518e76c9c99e0c2eb227c52dcb7b23e9bd6-merged.mount: Deactivated successfully.
Oct 11 05:01:05 np0005481065 podman[344812]: 2025-10-11 09:01:05.463997185 +0000 UTC m=+0.176744471 container remove 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:01:05 np0005481065 systemd[1]: libpod-conmon-6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e.scope: Deactivated successfully.
Oct 11 05:01:05 np0005481065 podman[344852]: 2025-10-11 09:01:05.736032772 +0000 UTC m=+0.082452142 container create b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:01:05 np0005481065 podman[344852]: 2025-10-11 09:01:05.697361049 +0000 UTC m=+0.043780479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:01:05 np0005481065 systemd[1]: Started libpod-conmon-b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5.scope.
Oct 11 05:01:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:01:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:05 np0005481065 podman[344852]: 2025-10-11 09:01:05.856706633 +0000 UTC m=+0.203126003 container init b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:01:05 np0005481065 podman[344852]: 2025-10-11 09:01:05.874090629 +0000 UTC m=+0.220509969 container start b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:01:05 np0005481065 podman[344852]: 2025-10-11 09:01:05.877744413 +0000 UTC m=+0.224163833 container attach b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:01:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 362 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 181 op/s
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]: {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:    "0": [
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:        {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "devices": [
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "/dev/loop3"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            ],
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_name": "ceph_lv0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_size": "21470642176",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "name": "ceph_lv0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "tags": {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cluster_name": "ceph",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.crush_device_class": "",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.encrypted": "0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osd_id": "0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.type": "block",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.vdo": "0"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            },
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "type": "block",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "vg_name": "ceph_vg0"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:        }
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:    ],
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:    "1": [
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:        {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "devices": [
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "/dev/loop4"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            ],
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_name": "ceph_lv1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_size": "21470642176",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "name": "ceph_lv1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "tags": {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cluster_name": "ceph",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.crush_device_class": "",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.encrypted": "0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osd_id": "1",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.type": "block",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.vdo": "0"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            },
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "type": "block",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "vg_name": "ceph_vg1"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:        }
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:    ],
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:    "2": [
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:        {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "devices": [
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "/dev/loop5"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            ],
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_name": "ceph_lv2",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_size": "21470642176",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "name": "ceph_lv2",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "tags": {
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.cluster_name": "ceph",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.crush_device_class": "",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.encrypted": "0",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osd_id": "2",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.type": "block",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:                "ceph.vdo": "0"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            },
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "type": "block",
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:            "vg_name": "ceph_vg2"
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:        }
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]:    ]
Oct 11 05:01:06 np0005481065 mystifying_lewin[344868]: }
Oct 11 05:01:06 np0005481065 systemd[1]: libpod-b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5.scope: Deactivated successfully.
Oct 11 05:01:06 np0005481065 podman[344877]: 2025-10-11 09:01:06.694378298 +0000 UTC m=+0.035742210 container died b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 05:01:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e-merged.mount: Deactivated successfully.
Oct 11 05:01:06 np0005481065 podman[344877]: 2025-10-11 09:01:06.773481823 +0000 UTC m=+0.114845715 container remove b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 05:01:06 np0005481065 systemd[1]: libpod-conmon-b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5.scope: Deactivated successfully.
Oct 11 05:01:07 np0005481065 podman[344917]: 2025-10-11 09:01:07.101026503 +0000 UTC m=+0.122468133 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.696626497 +0000 UTC m=+0.063921024 container create 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:01:07 np0005481065 systemd[1]: Started libpod-conmon-7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c.scope.
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.665646553 +0000 UTC m=+0.032941120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:01:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.799723186 +0000 UTC m=+0.167017753 container init 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.811176713 +0000 UTC m=+0.178471240 container start 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.815240279 +0000 UTC m=+0.182534806 container attach 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:01:07 np0005481065 frosty_lumiere[345072]: 167 167
Oct 11 05:01:07 np0005481065 systemd[1]: libpod-7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c.scope: Deactivated successfully.
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.819576932 +0000 UTC m=+0.186871459 container died 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:01:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f59be9c02687326753ec136097e93e23c5abe808e4d7240e7233d83a5f20bfa2-merged.mount: Deactivated successfully.
Oct 11 05:01:07 np0005481065 podman[345056]: 2025-10-11 09:01:07.863295469 +0000 UTC m=+0.230589986 container remove 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:01:07 np0005481065 systemd[1]: libpod-conmon-7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c.scope: Deactivated successfully.
Oct 11 05:01:08 np0005481065 podman[345095]: 2025-10-11 09:01:08.118153976 +0000 UTC m=+0.053973630 container create a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:01:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Oct 11 05:01:08 np0005481065 systemd[1]: Started libpod-conmon-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope.
Oct 11 05:01:08 np0005481065 podman[345095]: 2025-10-11 09:01:08.100126142 +0000 UTC m=+0.035945796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:01:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:01:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:01:08 np0005481065 podman[345095]: 2025-10-11 09:01:08.229617865 +0000 UTC m=+0.165437509 container init a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:01:08 np0005481065 podman[345095]: 2025-10-11 09:01:08.236142571 +0000 UTC m=+0.171962195 container start a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:01:08 np0005481065 podman[345095]: 2025-10-11 09:01:08.23926081 +0000 UTC m=+0.175080434 container attach a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:01:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:08 np0005481065 nova_compute[260935]: 2025-10-11 09:01:08.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:08 np0005481065 nova_compute[260935]: 2025-10-11 09:01:08.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]: {
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "osd_id": 2,
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "type": "bluestore"
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:    },
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "osd_id": 0,
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "type": "bluestore"
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:    },
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "osd_id": 1,
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:        "type": "bluestore"
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]:    }
Oct 11 05:01:09 np0005481065 exciting_khorana[345111]: }
Oct 11 05:01:09 np0005481065 systemd[1]: libpod-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope: Deactivated successfully.
Oct 11 05:01:09 np0005481065 podman[345095]: 2025-10-11 09:01:09.380691107 +0000 UTC m=+1.316510771 container died a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:01:09 np0005481065 systemd[1]: libpod-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope: Consumed 1.143s CPU time.
Oct 11 05:01:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206-merged.mount: Deactivated successfully.
Oct 11 05:01:09 np0005481065 podman[345095]: 2025-10-11 09:01:09.45478679 +0000 UTC m=+1.390606454 container remove a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:01:09 np0005481065 systemd[1]: libpod-conmon-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope: Deactivated successfully.
Oct 11 05:01:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:01:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:01:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:01:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:01:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d1e120c9-acd2-4969-be20-4c7b336d5443 does not exist
Oct 11 05:01:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7aec89cb-18e4-4dbd-95c9-0db948c7a7a1 does not exist
Oct 11 05:01:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:01:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:01:10 np0005481065 nova_compute[260935]: 2025-10-11 09:01:10.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 05:01:11 np0005481065 nova_compute[260935]: 2025-10-11 09:01:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:11 np0005481065 nova_compute[260935]: 2025-10-11 09:01:11.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:01:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 05:01:12 np0005481065 nova_compute[260935]: 2025-10-11 09:01:12.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:12 np0005481065 nova_compute[260935]: 2025-10-11 09:01:12.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:13 np0005481065 nova_compute[260935]: 2025-10-11 09:01:13.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:13 np0005481065 nova_compute[260935]: 2025-10-11 09:01:13.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:13 np0005481065 podman[345208]: 2025-10-11 09:01:13.811719425 +0000 UTC m=+0.107981609 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:01:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 919 KiB/s wr, 50 op/s
Oct 11 05:01:15 np0005481065 nova_compute[260935]: 2025-10-11 09:01:15.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:01:15.196 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:01:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:01:15.197 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:01:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:01:15.198 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:01:15 np0005481065 nova_compute[260935]: 2025-10-11 09:01:15.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:15 np0005481065 nova_compute[260935]: 2025-10-11 09:01:15.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:01:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 110 KiB/s wr, 31 op/s
Oct 11 05:01:16 np0005481065 podman[345229]: 2025-10-11 09:01:16.76827138 +0000 UTC m=+0.072382115 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:01:16 np0005481065 podman[345230]: 2025-10-11 09:01:16.85000166 +0000 UTC m=+0.140088245 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:01:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 111 KiB/s wr, 31 op/s
Oct 11 05:01:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:18 np0005481065 nova_compute[260935]: 2025-10-11 09:01:18.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:20 np0005481065 nova_compute[260935]: 2025-10-11 09:01:20.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 05:01:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 05:01:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:23 np0005481065 nova_compute[260935]: 2025-10-11 09:01:23.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:01:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:01:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:01:24Z|00788|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 05:01:25 np0005481065 nova_compute[260935]: 2025-10-11 09:01:25.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:01:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 11 05:01:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:28 np0005481065 nova_compute[260935]: 2025-10-11 09:01:28.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:30 np0005481065 nova_compute[260935]: 2025-10-11 09:01:30.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 05:01:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.706936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293706993, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1473, "num_deletes": 250, "total_data_size": 2208065, "memory_usage": 2247112, "flush_reason": "Manual Compaction"}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293723151, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1294245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35332, "largest_seqno": 36804, "table_properties": {"data_size": 1289201, "index_size": 2312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13773, "raw_average_key_size": 20, "raw_value_size": 1277954, "raw_average_value_size": 1933, "num_data_blocks": 105, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173147, "oldest_key_time": 1760173147, "file_creation_time": 1760173293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 16300 microseconds, and 7069 cpu microseconds.
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.723229) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1294245 bytes OK
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.723263) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.725141) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.725161) EVENT_LOG_v1 {"time_micros": 1760173293725154, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.725189) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2201560, prev total WAL file size 2201560, number of live WAL files 2.
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.726733) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1263KB)], [77(9444KB)]
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293726917, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 10965754, "oldest_snapshot_seqno": -1}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6193 keys, 8543085 bytes, temperature: kUnknown
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293791173, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8543085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8502285, "index_size": 24242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 155791, "raw_average_key_size": 25, "raw_value_size": 8391715, "raw_average_value_size": 1355, "num_data_blocks": 986, "num_entries": 6193, "num_filter_entries": 6193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.792037) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8543085 bytes
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.793546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.4 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.2 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(15.1) write-amplify(6.6) OK, records in: 6638, records dropped: 445 output_compression: NoCompression
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.793569) EVENT_LOG_v1 {"time_micros": 1760173293793560, "job": 44, "event": "compaction_finished", "compaction_time_micros": 64348, "compaction_time_cpu_micros": 49131, "output_level": 6, "num_output_files": 1, "total_output_size": 8543085, "num_input_records": 6638, "num_output_records": 6193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293795146, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293797464, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.726560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:01:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:01:33 np0005481065 nova_compute[260935]: 2025-10-11 09:01:33.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 05:01:35 np0005481065 nova_compute[260935]: 2025-10-11 09:01:35.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:01:37 np0005481065 podman[345277]: 2025-10-11 09:01:37.806438093 +0000 UTC m=+0.097746278 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:01:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 11 05:01:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:38 np0005481065 nova_compute[260935]: 2025-10-11 09:01:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 05:01:40 np0005481065 nova_compute[260935]: 2025-10-11 09:01:40.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 05:01:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:43 np0005481065 nova_compute[260935]: 2025-10-11 09:01:43.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 05:01:44 np0005481065 podman[345299]: 2025-10-11 09:01:44.450778638 +0000 UTC m=+0.088183196 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, tcib_managed=true)
Oct 11 05:01:45 np0005481065 nova_compute[260935]: 2025-10-11 09:01:45.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 05:01:47 np0005481065 podman[345319]: 2025-10-11 09:01:47.762893671 +0000 UTC m=+0.069055520 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:01:47 np0005481065 podman[345320]: 2025-10-11 09:01:47.861428691 +0000 UTC m=+0.157630796 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 05:01:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 05:01:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:48 np0005481065 nova_compute[260935]: 2025-10-11 09:01:48.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:01:50 np0005481065 nova_compute[260935]: 2025-10-11 09:01:50.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:01:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:53 np0005481065 nova_compute[260935]: 2025-10-11 09:01:53.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:01:54
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'images', 'volumes', 'default.rgw.control', 'backups', '.rgw.root']
Oct 11 05:01:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:01:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:01:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 27K writes, 106K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 27K writes, 9390 syncs, 2.88 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 43.34 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4289 syncs, 2.51 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:01:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:01:55 np0005481065 nova_compute[260935]: 2025-10-11 09:01:55.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:01:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 05:01:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 05:01:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:01:58 np0005481065 nova_compute[260935]: 2025-10-11 09:01:58.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 05:02:00 np0005481065 nova_compute[260935]: 2025-10-11 09:02:00.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:02:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 28K writes, 106K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 28K writes, 10K syncs, 2.82 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 38.99 MB, 0.06 MB/s#012Interval WAL: 11K writes, 4591 syncs, 2.43 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:02:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 9.4 KiB/s wr, 1 op/s
Oct 11 05:02:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:03 np0005481065 nova_compute[260935]: 2025-10-11 09:02:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 14 KiB/s wr, 2 op/s
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029729927720257864 of space, bias 1.0, pg target 0.891897831607736 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:02:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:02:05 np0005481065 nova_compute[260935]: 2025-10-11 09:02:05.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 05:02:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:02:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.3 total, 600.0 interval#012Cumulative writes: 22K writes, 88K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 22K writes, 7557 syncs, 2.94 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8223 writes, 32K keys, 8223 commit groups, 1.0 writes per commit group, ingest: 34.46 MB, 0.06 MB/s#012Interval WAL: 8223 writes, 3263 syncs, 2.52 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:02:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 05:02:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 05:02:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:08 np0005481065 podman[345362]: 2025-10-11 09:02:08.759091146 +0000 UTC m=+0.065283442 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 05:02:08 np0005481065 nova_compute[260935]: 2025-10-11 09:02:08.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 05:02:10 np0005481065 nova_compute[260935]: 2025-10-11 09:02:10.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:02:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a53fb34f-5f92-4742-bc08-44b3facb55f0 does not exist
Oct 11 05:02:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 56048202-f037-4d84-9a64-f49698cdf53e does not exist
Oct 11 05:02:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 92cebac6-7a17-4899-8b95-5dbd2bcfdb4d does not exist
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:02:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:02:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:02:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:02:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.694618087 +0000 UTC m=+0.041268489 container create d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 05:02:11 np0005481065 systemd[1]: Started libpod-conmon-d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e.scope.
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.675910033 +0000 UTC m=+0.022560455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:02:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.7980467 +0000 UTC m=+0.144697112 container init d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.811220926 +0000 UTC m=+0.157871338 container start d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.816522247 +0000 UTC m=+0.163172679 container attach d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:02:11 np0005481065 nervous_hofstadter[345668]: 167 167
Oct 11 05:02:11 np0005481065 systemd[1]: libpod-d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e.scope: Deactivated successfully.
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.822747655 +0000 UTC m=+0.169398057 container died d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:02:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ebc73cb1761e18435a8efcc62018132e92e58b49af15d5e916dbc11d852864d2-merged.mount: Deactivated successfully.
Oct 11 05:02:11 np0005481065 nova_compute[260935]: 2025-10-11 09:02:11.862 2 DEBUG nova.compute.manager [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Received event network-changed-074db183-8679-40f2-b39d-06759a8dfceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:02:11 np0005481065 podman[345652]: 2025-10-11 09:02:11.866623977 +0000 UTC m=+0.213274369 container remove d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:02:11 np0005481065 nova_compute[260935]: 2025-10-11 09:02:11.873 2 DEBUG nova.compute.manager [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Refreshing instance network info cache due to event network-changed-074db183-8679-40f2-b39d-06759a8dfceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:02:11 np0005481065 nova_compute[260935]: 2025-10-11 09:02:11.874 2 DEBUG oslo_concurrency.lockutils [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:02:11 np0005481065 systemd[1]: libpod-conmon-d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e.scope: Deactivated successfully.
Oct 11 05:02:12 np0005481065 podman[345692]: 2025-10-11 09:02:12.100469003 +0000 UTC m=+0.069881896 container create 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:02:12 np0005481065 systemd[1]: Started libpod-conmon-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope.
Oct 11 05:02:12 np0005481065 podman[345692]: 2025-10-11 09:02:12.07267738 +0000 UTC m=+0.042090383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:02:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:02:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 05:02:12 np0005481065 podman[345692]: 2025-10-11 09:02:12.214450267 +0000 UTC m=+0.183863170 container init 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:02:12 np0005481065 podman[345692]: 2025-10-11 09:02:12.228277902 +0000 UTC m=+0.197690835 container start 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:02:12 np0005481065 podman[345692]: 2025-10-11 09:02:12.232656937 +0000 UTC m=+0.202069850 container attach 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 05:02:13 np0005481065 gallant_boyd[345708]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:02:13 np0005481065 gallant_boyd[345708]: --> relative data size: 1.0
Oct 11 05:02:13 np0005481065 gallant_boyd[345708]: --> All data devices are unavailable
Oct 11 05:02:13 np0005481065 systemd[1]: libpod-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope: Deactivated successfully.
Oct 11 05:02:13 np0005481065 systemd[1]: libpod-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope: Consumed 1.099s CPU time.
Oct 11 05:02:13 np0005481065 podman[345737]: 2025-10-11 09:02:13.512417301 +0000 UTC m=+0.035509375 container died 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:02:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95-merged.mount: Deactivated successfully.
Oct 11 05:02:13 np0005481065 podman[345737]: 2025-10-11 09:02:13.580980708 +0000 UTC m=+0.104072772 container remove 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:02:13 np0005481065 systemd[1]: libpod-conmon-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope: Deactivated successfully.
Oct 11 05:02:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:13 np0005481065 nova_compute[260935]: 2025-10-11 09:02:13.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.339566974 +0000 UTC m=+0.059555181 container create 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:02:14 np0005481065 systemd[1]: Started libpod-conmon-80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014.scope.
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.310097992 +0000 UTC m=+0.030086249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:02:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.452123437 +0000 UTC m=+0.172111624 container init 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.461915176 +0000 UTC m=+0.181903383 container start 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.465744516 +0000 UTC m=+0.185732783 container attach 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:02:14 np0005481065 crazy_ramanujan[345908]: 167 167
Oct 11 05:02:14 np0005481065 systemd[1]: libpod-80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014.scope: Deactivated successfully.
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.47150675 +0000 UTC m=+0.191495007 container died 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 05:02:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e8f4c7cb55e9bdcecc10416263b1a5d796e4789d839099c8b227e156fa7c0a09-merged.mount: Deactivated successfully.
Oct 11 05:02:14 np0005481065 podman[345891]: 2025-10-11 09:02:14.534361234 +0000 UTC m=+0.254349441 container remove 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:02:14 np0005481065 systemd[1]: libpod-conmon-80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014.scope: Deactivated successfully.
Oct 11 05:02:14 np0005481065 podman[345914]: 2025-10-11 09:02:14.643795127 +0000 UTC m=+0.122313451 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:02:14 np0005481065 podman[345951]: 2025-10-11 09:02:14.807223503 +0000 UTC m=+0.074206420 container create 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:02:14 np0005481065 podman[345951]: 2025-10-11 09:02:14.768660982 +0000 UTC m=+0.035643949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:02:14 np0005481065 systemd[1]: Started libpod-conmon-7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586.scope.
Oct 11 05:02:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:02:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:14 np0005481065 podman[345951]: 2025-10-11 09:02:14.941871057 +0000 UTC m=+0.208854014 container init 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:02:14 np0005481065 podman[345951]: 2025-10-11 09:02:14.956481394 +0000 UTC m=+0.223464271 container start 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:02:14 np0005481065 podman[345951]: 2025-10-11 09:02:14.960341154 +0000 UTC m=+0.227324111 container attach 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:02:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:15.198 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:02:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:02:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:15.201 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:02:15 np0005481065 nova_compute[260935]: 2025-10-11 09:02:15.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:15 np0005481065 loving_carson[345968]: {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:    "0": [
Oct 11 05:02:15 np0005481065 loving_carson[345968]:        {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "devices": [
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "/dev/loop3"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            ],
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_name": "ceph_lv0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_size": "21470642176",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "name": "ceph_lv0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "tags": {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cluster_name": "ceph",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.crush_device_class": "",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.encrypted": "0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osd_id": "0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.type": "block",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.vdo": "0"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            },
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "type": "block",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "vg_name": "ceph_vg0"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:        }
Oct 11 05:02:15 np0005481065 loving_carson[345968]:    ],
Oct 11 05:02:15 np0005481065 loving_carson[345968]:    "1": [
Oct 11 05:02:15 np0005481065 loving_carson[345968]:        {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "devices": [
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "/dev/loop4"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            ],
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_name": "ceph_lv1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_size": "21470642176",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "name": "ceph_lv1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "tags": {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cluster_name": "ceph",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.crush_device_class": "",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.encrypted": "0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osd_id": "1",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.type": "block",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.vdo": "0"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            },
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "type": "block",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "vg_name": "ceph_vg1"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:        }
Oct 11 05:02:15 np0005481065 loving_carson[345968]:    ],
Oct 11 05:02:15 np0005481065 loving_carson[345968]:    "2": [
Oct 11 05:02:15 np0005481065 loving_carson[345968]:        {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "devices": [
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "/dev/loop5"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            ],
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_name": "ceph_lv2",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_size": "21470642176",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "name": "ceph_lv2",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "tags": {
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.cluster_name": "ceph",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.crush_device_class": "",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.encrypted": "0",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osd_id": "2",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.type": "block",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:                "ceph.vdo": "0"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            },
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "type": "block",
Oct 11 05:02:15 np0005481065 loving_carson[345968]:            "vg_name": "ceph_vg2"
Oct 11 05:02:15 np0005481065 loving_carson[345968]:        }
Oct 11 05:02:15 np0005481065 loving_carson[345968]:    ]
Oct 11 05:02:15 np0005481065 loving_carson[345968]: }
Oct 11 05:02:15 np0005481065 systemd[1]: libpod-7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586.scope: Deactivated successfully.
Oct 11 05:02:15 np0005481065 podman[345951]: 2025-10-11 09:02:15.845127132 +0000 UTC m=+1.112110029 container died 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:02:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97-merged.mount: Deactivated successfully.
Oct 11 05:02:16 np0005481065 podman[345951]: 2025-10-11 09:02:16.040718956 +0000 UTC m=+1.307701863 container remove 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:02:16 np0005481065 systemd[1]: libpod-conmon-7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586.scope: Deactivated successfully.
Oct 11 05:02:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:16 np0005481065 podman[346134]: 2025-10-11 09:02:16.918521805 +0000 UTC m=+0.097234557 container create 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:02:16 np0005481065 podman[346134]: 2025-10-11 09:02:16.849927896 +0000 UTC m=+0.028640668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:02:17 np0005481065 systemd[1]: Started libpod-conmon-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope.
Oct 11 05:02:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:02:17 np0005481065 podman[346134]: 2025-10-11 09:02:17.15061047 +0000 UTC m=+0.329323202 container init 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:02:17 np0005481065 podman[346134]: 2025-10-11 09:02:17.163581041 +0000 UTC m=+0.342293763 container start 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:02:17 np0005481065 eloquent_ptolemy[346150]: 167 167
Oct 11 05:02:17 np0005481065 systemd[1]: libpod-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope: Deactivated successfully.
Oct 11 05:02:17 np0005481065 conmon[346150]: conmon 84ffff01ce1ab4826c0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope/container/memory.events
Oct 11 05:02:17 np0005481065 podman[346134]: 2025-10-11 09:02:17.216176982 +0000 UTC m=+0.394889714 container attach 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:02:17 np0005481065 podman[346134]: 2025-10-11 09:02:17.216728918 +0000 UTC m=+0.395441630 container died 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:02:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c8afd1d32b759a12e6f3d0387ddc9ea80dc9150ab81255c2224edfd5903dd732-merged.mount: Deactivated successfully.
Oct 11 05:02:17 np0005481065 podman[346134]: 2025-10-11 09:02:17.501535378 +0000 UTC m=+0.680248110 container remove 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:02:17 np0005481065 systemd[1]: libpod-conmon-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope: Deactivated successfully.
Oct 11 05:02:17 np0005481065 podman[346174]: 2025-10-11 09:02:17.708196838 +0000 UTC m=+0.027778404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:02:17 np0005481065 podman[346174]: 2025-10-11 09:02:17.817349193 +0000 UTC m=+0.136930739 container create 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:02:17 np0005481065 systemd[1]: Started libpod-conmon-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope.
Oct 11 05:02:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:02:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:02:18 np0005481065 podman[346174]: 2025-10-11 09:02:18.107791675 +0000 UTC m=+0.427373321 container init 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:02:18 np0005481065 podman[346174]: 2025-10-11 09:02:18.1205748 +0000 UTC m=+0.440156346 container start 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:02:18 np0005481065 podman[346188]: 2025-10-11 09:02:18.121583898 +0000 UTC m=+0.246806916 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:02:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:18 np0005481065 podman[346174]: 2025-10-11 09:02:18.225960617 +0000 UTC m=+0.545542213 container attach 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:02:18 np0005481065 podman[346213]: 2025-10-11 09:02:18.375107455 +0000 UTC m=+0.371539317 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:02:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:18 np0005481065 nova_compute[260935]: 2025-10-11 09:02:18.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]: {
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "osd_id": 2,
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "type": "bluestore"
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:    },
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "osd_id": 0,
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "type": "bluestore"
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:    },
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "osd_id": 1,
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:        "type": "bluestore"
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]:    }
Oct 11 05:02:19 np0005481065 nifty_ardinghelli[346211]: }
Oct 11 05:02:19 np0005481065 systemd[1]: libpod-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope: Deactivated successfully.
Oct 11 05:02:19 np0005481065 systemd[1]: libpod-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope: Consumed 1.193s CPU time.
Oct 11 05:02:19 np0005481065 podman[346273]: 2025-10-11 09:02:19.3775027 +0000 UTC m=+0.045212001 container died 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:02:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858-merged.mount: Deactivated successfully.
Oct 11 05:02:19 np0005481065 podman[346273]: 2025-10-11 09:02:19.449459714 +0000 UTC m=+0.117168925 container remove 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:02:19 np0005481065 systemd[1]: libpod-conmon-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope: Deactivated successfully.
Oct 11 05:02:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:02:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:02:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:02:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:02:19 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a410bbd8-fb0d-497c-8ae0-4041827a06a2 does not exist
Oct 11 05:02:19 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5abf3b44-2651-4b50-860f-0e63cf0e6f93 does not exist
Oct 11 05:02:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:02:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:02:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:20 np0005481065 nova_compute[260935]: 2025-10-11 09:02:20.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:23 np0005481065 nova_compute[260935]: 2025-10-11 09:02:23.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:02:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:02:25 np0005481065 nova_compute[260935]: 2025-10-11 09:02:25.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:28 np0005481065 nova_compute[260935]: 2025-10-11 09:02:28.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:30 np0005481065 nova_compute[260935]: 2025-10-11 09:02:30.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:33 np0005481065 nova_compute[260935]: 2025-10-11 09:02:33.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:35 np0005481065 nova_compute[260935]: 2025-10-11 09:02:35.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:37Z|00789|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:02:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:37Z|00790|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:02:37 np0005481065 nova_compute[260935]: 2025-10-11 09:02:37.682 2 DEBUG nova.compute.manager [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-deleted-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:02:37 np0005481065 nova_compute[260935]: 2025-10-11 09:02:37.683 2 INFO nova.compute.manager [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Neutron deleted interface bcfaf217-8703-4c1e-bf80-d24ab0e642bd; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:02:37 np0005481065 nova_compute[260935]: 2025-10-11 09:02:37.683 2 DEBUG nova.network.neutron [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:02:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:38 np0005481065 nova_compute[260935]: 2025-10-11 09:02:38.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:02:38 np0005481065 nova_compute[260935]: 2025-10-11 09:02:38.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:02:38 np0005481065 nova_compute[260935]: 2025-10-11 09:02:38.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:02:38 np0005481065 nova_compute[260935]: 2025-10-11 09:02:38.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:02:38 np0005481065 nova_compute[260935]: 2025-10-11 09:02:38.744 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:02:38 np0005481065 nova_compute[260935]: 2025-10-11 09:02:38.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:02:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862062399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:02:39 np0005481065 nova_compute[260935]: 2025-10-11 09:02:39.232 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:02:39 np0005481065 podman[346361]: 2025-10-11 09:02:39.436241954 +0000 UTC m=+0.123637020 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:02:40 np0005481065 nova_compute[260935]: 2025-10-11 09:02:40.186 2 DEBUG nova.compute.manager [None req-5832882f-6ec9-43d0-8673-3453baa1ea4d - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:02:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:02:40 np0005481065 nova_compute[260935]: 2025-10-11 09:02:40.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.203 2 DEBUG nova.network.neutron [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.209 2 DEBUG nova.compute.manager [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Detach interface failed, port_id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd, reason: Instance 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.598 2 INFO nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Took 106.00 seconds to deallocate network for instance.#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.623 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 91.94 sec#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.814 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.815 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.815 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.821 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.821 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.827 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:41 np0005481065 nova_compute[260935]: 2025-10-11 09:02:41.827 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:02:42 np0005481065 nova_compute[260935]: 2025-10-11 09:02:42.022 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:02:42 np0005481065 nova_compute[260935]: 2025-10-11 09:02:42.023 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:02:42 np0005481065 nova_compute[260935]: 2025-10-11 09:02:42.182 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:02:42 np0005481065 nova_compute[260935]: 2025-10-11 09:02:42.184 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3269MB free_disk=59.8099365234375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:02:42 np0005481065 nova_compute[260935]: 2025-10-11 09:02:42.184 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:02:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.227 2 DEBUG nova.network.neutron [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated VIF entry in instance network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.228 2 DEBUG nova.network.neutron [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.230 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.524 2 DEBUG oslo_concurrency.lockutils [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:02:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.723 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Releasing lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.724 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance network_info: |[{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.725 2 DEBUG oslo_concurrency.lockutils [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.726 2 DEBUG nova.network.neutron [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Refreshing network info cache for port 074db183-8679-40f2-b39d-06759a8dfceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.732 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start _get_guest_xml network_info=[{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.738 2 WARNING nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.747 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.749 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.751 2 DEBUG oslo_concurrency.processutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.825 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.827 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.828 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.828 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.830 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.830 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.831 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.831 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.832 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.832 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.833 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.833 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.834 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.834 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.842 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:02:43 np0005481065 nova_compute[260935]: 2025-10-11 09:02:43.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:02:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 11 05:02:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:02:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003496757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.251 2 DEBUG oslo_concurrency.processutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.259 2 DEBUG nova.compute.provider_tree [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:02:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:02:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427179780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.351 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.389 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.394 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.435 2 DEBUG nova.scheduler.client.report [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.701 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.706 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:02:44 np0005481065 podman[346463]: 2025-10-11 09:02:44.795150915 +0000 UTC m=+0.087939852 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 11 05:02:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:02:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146808339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.843 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.845 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208
638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.846 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.847 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.848 2 DEBUG nova.objects.instance [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'pci_devices' on Instance uuid 15633aee-234a-4417-b5ea-f35f13820404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:02:44 np0005481065 nova_compute[260935]: 2025-10-11 09:02:44.988 2 INFO nova.scheduler.client.report [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Deleted allocations for instance 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.066 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <uuid>15633aee-234a-4417-b5ea-f35f13820404</uuid>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <name>instance-00000057</name>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerRescueTestJSON-server-2012807637</nova:name>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:02:43</nova:creationTime>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:user uuid="df5a3c3a5d68473aa2e2950de45ebce1">tempest-ServerRescueTestJSON-1667208638-project-member</nova:user>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:project uuid="11b44ad9193e4e43838d52056ccf413e">tempest-ServerRescueTestJSON-1667208638</nova:project>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <nova:port uuid="074db183-8679-40f2-b39d-06759a8dfceb">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <entry name="serial">15633aee-234a-4417-b5ea-f35f13820404</entry>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <entry name="uuid">15633aee-234a-4417-b5ea-f35f13820404</entry>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/15633aee-234a-4417-b5ea-f35f13820404_disk">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/15633aee-234a-4417-b5ea-f35f13820404_disk.config">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:33:2b:94"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <target dev="tap074db183-86"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/console.log" append="off"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:02:45 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:02:45 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:02:45 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:02:45 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.068 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Preparing to wait for external event network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.068 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.068 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.069 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.070 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJS
ON-1667208638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.070 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.071 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.072 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap074db183-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap074db183-86, col_values=(('external_ids', {'iface-id': '074db183-8679-40f2-b39d-06759a8dfceb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:2b:94', 'vm-uuid': '15633aee-234a-4417-b5ea-f35f13820404'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:45 np0005481065 NetworkManager[44960]: <info>  [1760173365.0967] manager: (tap074db183-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.107 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.165 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.165 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.501 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.570 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.571 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.571 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No VIF found with MAC fa:16:3e:33:2b:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.572 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Using config drive#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.608 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.634 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 110.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:02:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:02:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292923213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.980 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:02:45 np0005481065 nova_compute[260935]: 2025-10-11 09:02:45.987 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:02:46 np0005481065 nova_compute[260935]: 2025-10-11 09:02:46.114 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:02:46 np0005481065 nova_compute[260935]: 2025-10-11 09:02:46.120 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:02:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 05:02:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550488350' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 05:02:46 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.19097 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 05:02:46 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 05:02:46 np0005481065 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 05:02:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 11 05:02:46 np0005481065 nova_compute[260935]: 2025-10-11 09:02:46.646 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:02:46 np0005481065 nova_compute[260935]: 2025-10-11 09:02:46.646 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.256 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Creating config drive at /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.264 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxsfcnce execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.424 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxsfcnce" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.456 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.459 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config 15633aee-234a-4417-b5ea-f35f13820404_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:02:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:47.524 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:02:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:47.525 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.604 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config 15633aee-234a-4417-b5ea-f35f13820404_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.605 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deleting local config drive /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config because it was imported into RBD.#033[00m
Oct 11 05:02:47 np0005481065 kernel: tap074db183-86: entered promiscuous mode
Oct 11 05:02:47 np0005481065 NetworkManager[44960]: <info>  [1760173367.6797] manager: (tap074db183-86): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Oct 11 05:02:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:47Z|00791|if_status|INFO|Not updating pb chassis for 074db183-8679-40f2-b39d-06759a8dfceb now as sb is readonly
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:47 np0005481065 systemd-udevd[346580]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:02:47 np0005481065 nova_compute[260935]: 2025-10-11 09:02:47.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:47 np0005481065 systemd-machined[215705]: New machine qemu-99-instance-00000057.
Oct 11 05:02:47 np0005481065 NetworkManager[44960]: <info>  [1760173367.7467] device (tap074db183-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:02:47 np0005481065 NetworkManager[44960]: <info>  [1760173367.7475] device (tap074db183-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:02:47 np0005481065 systemd[1]: Started Virtual Machine qemu-99-instance-00000057.
Oct 11 05:02:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:48Z|00792|binding|INFO|Claiming lport 074db183-8679-40f2-b39d-06759a8dfceb for this chassis.
Oct 11 05:02:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:48Z|00793|binding|INFO|074db183-8679-40f2-b39d-06759a8dfceb: Claiming fa:16:3e:33:2b:94 10.100.0.8
Oct 11 05:02:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:48Z|00794|binding|INFO|Setting lport 074db183-8679-40f2-b39d-06759a8dfceb ovn-installed in OVS
Oct 11 05:02:48 np0005481065 nova_compute[260935]: 2025-10-11 09:02:48.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1.7 KiB/s wr, 0 op/s
Oct 11 05:02:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:02:48Z|00795|binding|INFO|Setting lport 074db183-8679-40f2-b39d-06759a8dfceb up in Southbound
Oct 11 05:02:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.619 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:2b:94 10.100.0.8'], port_security=['fa:16:3e:33:2b:94 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '15633aee-234a-4417-b5ea-f35f13820404', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=074db183-8679-40f2-b39d-06759a8dfceb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:02:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.620 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 074db183-8679-40f2-b39d-06759a8dfceb in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f bound to our chassis#033[00m
Oct 11 05:02:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.622 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:02:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.623 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82e9519b-9c42-4cda-857a-7560bbecc39b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.729029) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368729070, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 837, "num_deletes": 251, "total_data_size": 1129490, "memory_usage": 1144160, "flush_reason": "Manual Compaction"}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368747154, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1119042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36805, "largest_seqno": 37641, "table_properties": {"data_size": 1114767, "index_size": 1991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9392, "raw_average_key_size": 19, "raw_value_size": 1106203, "raw_average_value_size": 2304, "num_data_blocks": 89, "num_entries": 480, "num_filter_entries": 480, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173294, "oldest_key_time": 1760173294, "file_creation_time": 1760173368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 18189 microseconds, and 8071 cpu microseconds.
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.747213) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1119042 bytes OK
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.747241) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.750077) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.750128) EVENT_LOG_v1 {"time_micros": 1760173368750120, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.750155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1125348, prev total WAL file size 1125348, number of live WAL files 2.
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.751521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1092KB)], [80(8342KB)]
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368751671, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9662127, "oldest_snapshot_seqno": -1}
Oct 11 05:02:48 np0005481065 podman[346633]: 2025-10-11 09:02:48.789453899 +0000 UTC m=+0.095000223 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6157 keys, 8067508 bytes, temperature: kUnknown
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368813160, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8067508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8027354, "index_size": 23660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 155707, "raw_average_key_size": 25, "raw_value_size": 7917848, "raw_average_value_size": 1285, "num_data_blocks": 954, "num_entries": 6157, "num_filter_entries": 6157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.813503) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8067508 bytes
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.815214) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.8 rd, 130.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(15.8) write-amplify(7.2) OK, records in: 6673, records dropped: 516 output_compression: NoCompression
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.815245) EVENT_LOG_v1 {"time_micros": 1760173368815231, "job": 46, "event": "compaction_finished", "compaction_time_micros": 61615, "compaction_time_cpu_micros": 37315, "output_level": 6, "num_output_files": 1, "total_output_size": 8067508, "num_input_records": 6673, "num_output_records": 6157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368816008, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368819619, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.751359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:02:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:02:48 np0005481065 podman[346634]: 2025-10-11 09:02:48.841423833 +0000 UTC m=+0.139407341 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:02:48 np0005481065 nova_compute[260935]: 2025-10-11 09:02:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.104 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173369.10403, 15633aee-234a-4417-b5ea-f35f13820404 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] VM Started (Lifecycle Event)#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.171 2 DEBUG nova.network.neutron [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updated VIF entry in instance network info cache for port 074db183-8679-40f2-b39d-06759a8dfceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.172 2 DEBUG nova.network.neutron [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.217 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.224 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173369.1061523, 15633aee-234a-4417-b5ea-f35f13820404 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.224 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.573 2 DEBUG oslo_concurrency.lockutils [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:02:49 np0005481065 nova_compute[260935]: 2025-10-11 09:02:49.653 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: deleting, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:02:50 np0005481065 nova_compute[260935]: 2025-10-11 09:02:50.079 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct 11 05:02:50 np0005481065 nova_compute[260935]: 2025-10-11 09:02:50.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1.7 KiB/s wr, 0 op/s
Oct 11 05:02:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:02:50.528 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:02:50 np0005481065 nova_compute[260935]: 2025-10-11 09:02:50.645 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:02:50 np0005481065 nova_compute[260935]: 2025-10-11 09:02:50.646 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:02:51 np0005481065 nova_compute[260935]: 2025-10-11 09:02:51.638 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:02:51 np0005481065 nova_compute[260935]: 2025-10-11 09:02:51.639 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:02:51 np0005481065 nova_compute[260935]: 2025-10-11 09:02:51.639 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:02:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.1 KiB/s wr, 2 op/s
Oct 11 05:02:52 np0005481065 nova_compute[260935]: 2025-10-11 09:02:52.834 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:02:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:53 np0005481065 nova_compute[260935]: 2025-10-11 09:02:53.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:02:54
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'images', '.mgr', 'default.rgw.log', 'volumes']
Oct 11 05:02:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:02:55 np0005481065 nova_compute[260935]: 2025-10-11 09:02:55.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:02:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:02:55 np0005481065 nova_compute[260935]: 2025-10-11 09:02:55.768 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:02:55 np0005481065 nova_compute[260935]: 2025-10-11 09:02:55.769 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:02:55 np0005481065 nova_compute[260935]: 2025-10-11 09:02:55.770 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:02:55 np0005481065 nova_compute[260935]: 2025-10-11 09:02:55.770 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:02:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 05:02:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:02:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345437772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:02:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:02:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345437772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:02:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:02:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2455969037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:02:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:02:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2455969037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:02:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 05:02:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:02:58 np0005481065 nova_compute[260935]: 2025-10-11 09:02:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:00 np0005481065 nova_compute[260935]: 2025-10-11 09:03:00.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 05:03:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 05:03:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:03 np0005481065 nova_compute[260935]: 2025-10-11 09:03:03.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 11 05:03:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:03:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957849191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:03:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:03:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957849191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:03:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:03:05 np0005481065 nova_compute[260935]: 2025-10-11 09:03:05.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:08 np0005481065 nova_compute[260935]: 2025-10-11 09:03:08.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:09 np0005481065 nova_compute[260935]: 2025-10-11 09:03:09.625 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:03:09 np0005481065 podman[346681]: 2025-10-11 09:03:09.840269997 +0000 UTC m=+0.122278922 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.969 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.969 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.970 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.970 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:03:10 np0005481065 nova_compute[260935]: 2025-10-11 09:03:10.972 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:11 np0005481065 nova_compute[260935]: 2025-10-11 09:03:11.194 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:03:11 np0005481065 nova_compute[260935]: 2025-10-11 09:03:11.195 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:03:11 np0005481065 nova_compute[260935]: 2025-10-11 09:03:11.195 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:03:11 np0005481065 nova_compute[260935]: 2025-10-11 09:03:11.196 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:03:11 np0005481065 nova_compute[260935]: 2025-10-11 09:03:11.196 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:03:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:03:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374859541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:03:11 np0005481065 nova_compute[260935]: 2025-10-11 09:03:11.705 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:03:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 05:03:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:13 np0005481065 nova_compute[260935]: 2025-10-11 09:03:13.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.522 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.523 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.523 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.525 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.525 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.528 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.528 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.531 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.531 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.720 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.721 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3225MB free_disk=59.80977249145508GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:03:14 np0005481065 nova_compute[260935]: 2025-10-11 09:03:14.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:03:15 np0005481065 nova_compute[260935]: 2025-10-11 09:03:15.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:03:15.198 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:03:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:03:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:03:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:03:15.201 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:03:15 np0005481065 podman[346723]: 2025-10-11 09:03:15.796541578 +0000 UTC m=+0.088996312 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:03:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.032 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.033 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.033 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.034 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.034 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.035 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:03:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.254 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:03:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:03:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190328056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.754 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.763 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:03:18 np0005481065 nova_compute[260935]: 2025-10-11 09:03:18.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:03:19 np0005481065 podman[346765]: 2025-10-11 09:03:19.787610112 +0000 UTC m=+0.085742829 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:03:19 np0005481065 podman[346766]: 2025-10-11 09:03:19.865247648 +0000 UTC m=+0.157021673 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:03:20 np0005481065 nova_compute[260935]: 2025-10-11 09:03:20.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:03:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 05:03:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:03:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:03:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:21 np0005481065 nova_compute[260935]: 2025-10-11 09:03:21.014 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 147dcab4-25d7-459e-a68f-397366782ce6 does not exist
Oct 11 05:03:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2a22582f-8416-4602-8c94-2c3c7a855fbf does not exist
Oct 11 05:03:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f9fd6c57-e21e-4d3c-8004-f04b18751943 does not exist
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:03:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:03:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 05:03:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:03:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:03:22 np0005481065 podman[347205]: 2025-10-11 09:03:22.578003529 +0000 UTC m=+0.055412252 container create 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:03:22 np0005481065 systemd[1]: Started libpod-conmon-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope.
Oct 11 05:03:22 np0005481065 podman[347205]: 2025-10-11 09:03:22.550260547 +0000 UTC m=+0.027669290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:03:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:03:22 np0005481065 podman[347205]: 2025-10-11 09:03:22.712169529 +0000 UTC m=+0.189578262 container init 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:03:22 np0005481065 podman[347205]: 2025-10-11 09:03:22.725353075 +0000 UTC m=+0.202761808 container start 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:03:22 np0005481065 podman[347205]: 2025-10-11 09:03:22.72973309 +0000 UTC m=+0.207141813 container attach 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:03:22 np0005481065 vibrant_gould[347222]: 167 167
Oct 11 05:03:22 np0005481065 systemd[1]: libpod-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope: Deactivated successfully.
Oct 11 05:03:22 np0005481065 conmon[347222]: conmon 197d4b73bba0c7bc540f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope/container/memory.events
Oct 11 05:03:22 np0005481065 podman[347227]: 2025-10-11 09:03:22.808691344 +0000 UTC m=+0.050306147 container died 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:03:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6633c0162577a91651029c5517da94a83b54b326b5b8108a125f3ffdd8bcccf6-merged.mount: Deactivated successfully.
Oct 11 05:03:22 np0005481065 podman[347227]: 2025-10-11 09:03:22.863113257 +0000 UTC m=+0.104728000 container remove 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:03:22 np0005481065 systemd[1]: libpod-conmon-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope: Deactivated successfully.
Oct 11 05:03:23 np0005481065 podman[347250]: 2025-10-11 09:03:23.126678662 +0000 UTC m=+0.066058687 container create a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:03:23 np0005481065 systemd[1]: Started libpod-conmon-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope.
Oct 11 05:03:23 np0005481065 podman[347250]: 2025-10-11 09:03:23.093565286 +0000 UTC m=+0.032945371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:03:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:03:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:23 np0005481065 podman[347250]: 2025-10-11 09:03:23.244883306 +0000 UTC m=+0.184263371 container init a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:03:23 np0005481065 podman[347250]: 2025-10-11 09:03:23.256474487 +0000 UTC m=+0.195854482 container start a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:03:23 np0005481065 podman[347250]: 2025-10-11 09:03:23.260210684 +0000 UTC m=+0.199590699 container attach a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:03:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 11 05:03:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 11 05:03:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 11 05:03:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:23 np0005481065 nova_compute[260935]: 2025-10-11 09:03:23.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 11 05:03:24 np0005481065 nostalgic_hodgkin[347266]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:03:24 np0005481065 nostalgic_hodgkin[347266]: --> relative data size: 1.0
Oct 11 05:03:24 np0005481065 nostalgic_hodgkin[347266]: --> All data devices are unavailable
Oct 11 05:03:24 np0005481065 systemd[1]: libpod-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope: Deactivated successfully.
Oct 11 05:03:24 np0005481065 systemd[1]: libpod-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope: Consumed 1.101s CPU time.
Oct 11 05:03:24 np0005481065 podman[347295]: 2025-10-11 09:03:24.479617374 +0000 UTC m=+0.052146209 container died a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:03:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285-merged.mount: Deactivated successfully.
Oct 11 05:03:24 np0005481065 podman[347295]: 2025-10-11 09:03:24.55163848 +0000 UTC m=+0.124167225 container remove a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:03:24 np0005481065 systemd[1]: libpod-conmon-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope: Deactivated successfully.
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:03:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:03:25 np0005481065 nova_compute[260935]: 2025-10-11 09:03:25.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.549329292 +0000 UTC m=+0.072624404 container create 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:03:25 np0005481065 systemd[1]: Started libpod-conmon-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope.
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.520208891 +0000 UTC m=+0.043504013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:03:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.652521148 +0000 UTC m=+0.175816280 container init 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.664042627 +0000 UTC m=+0.187337739 container start 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.668333609 +0000 UTC m=+0.191628761 container attach 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:03:25 np0005481065 lucid_perlman[347469]: 167 167
Oct 11 05:03:25 np0005481065 systemd[1]: libpod-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope: Deactivated successfully.
Oct 11 05:03:25 np0005481065 conmon[347469]: conmon 1e28c258e13ae81f1136 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope/container/memory.events
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.677743518 +0000 UTC m=+0.201038620 container died 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:03:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a840339e6770782ed1a56847eb1f053c3e2ce365aefe5f5d207baa991bd9bdba-merged.mount: Deactivated successfully.
Oct 11 05:03:25 np0005481065 podman[347452]: 2025-10-11 09:03:25.728287021 +0000 UTC m=+0.251582133 container remove 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:03:25 np0005481065 systemd[1]: libpod-conmon-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope: Deactivated successfully.
Oct 11 05:03:25 np0005481065 podman[347492]: 2025-10-11 09:03:25.99554249 +0000 UTC m=+0.069292809 container create 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:03:26 np0005481065 systemd[1]: Started libpod-conmon-7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d.scope.
Oct 11 05:03:26 np0005481065 podman[347492]: 2025-10-11 09:03:25.963544127 +0000 UTC m=+0.037294536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:03:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:03:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:26 np0005481065 podman[347492]: 2025-10-11 09:03:26.097205822 +0000 UTC m=+0.170956181 container init 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:03:26 np0005481065 podman[347492]: 2025-10-11 09:03:26.111297655 +0000 UTC m=+0.185048004 container start 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:03:26 np0005481065 podman[347492]: 2025-10-11 09:03:26.115068062 +0000 UTC m=+0.188818401 container attach 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:03:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]: {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:    "0": [
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:        {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "devices": [
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "/dev/loop3"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            ],
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_name": "ceph_lv0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_size": "21470642176",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "name": "ceph_lv0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "tags": {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cluster_name": "ceph",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.crush_device_class": "",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.encrypted": "0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osd_id": "0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.type": "block",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.vdo": "0"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            },
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "type": "block",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "vg_name": "ceph_vg0"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:        }
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:    ],
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:    "1": [
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:        {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "devices": [
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "/dev/loop4"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            ],
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_name": "ceph_lv1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_size": "21470642176",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "name": "ceph_lv1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "tags": {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cluster_name": "ceph",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.crush_device_class": "",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.encrypted": "0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osd_id": "1",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.type": "block",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.vdo": "0"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            },
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "type": "block",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "vg_name": "ceph_vg1"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:        }
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:    ],
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:    "2": [
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:        {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "devices": [
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "/dev/loop5"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            ],
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_name": "ceph_lv2",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_size": "21470642176",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "name": "ceph_lv2",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "tags": {
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.cluster_name": "ceph",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.crush_device_class": "",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.encrypted": "0",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osd_id": "2",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.type": "block",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:                "ceph.vdo": "0"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            },
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "type": "block",
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:            "vg_name": "ceph_vg2"
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:        }
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]:    ]
Oct 11 05:03:26 np0005481065 agitated_hawking[347509]: }
Oct 11 05:03:26 np0005481065 systemd[1]: libpod-7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d.scope: Deactivated successfully.
Oct 11 05:03:26 np0005481065 podman[347518]: 2025-10-11 09:03:26.965482938 +0000 UTC m=+0.042050961 container died 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:03:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b-merged.mount: Deactivated successfully.
Oct 11 05:03:27 np0005481065 podman[347518]: 2025-10-11 09:03:27.042759044 +0000 UTC m=+0.119326967 container remove 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 05:03:27 np0005481065 systemd[1]: libpod-conmon-7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d.scope: Deactivated successfully.
Oct 11 05:03:27 np0005481065 nova_compute[260935]: 2025-10-11 09:03:27.156 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:03:27 np0005481065 nova_compute[260935]: 2025-10-11 09:03:27.157 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 12.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:03:27 np0005481065 nova_compute[260935]: 2025-10-11 09:03:27.158 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:03:27 np0005481065 nova_compute[260935]: 2025-10-11 09:03:27.158 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 05:03:27 np0005481065 podman[347672]: 2025-10-11 09:03:27.934542012 +0000 UTC m=+0.072590303 container create 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:03:27 np0005481065 systemd[1]: Started libpod-conmon-156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba.scope.
Oct 11 05:03:27 np0005481065 podman[347672]: 2025-10-11 09:03:27.902202489 +0000 UTC m=+0.040250830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:03:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:03:28 np0005481065 podman[347672]: 2025-10-11 09:03:28.024583192 +0000 UTC m=+0.162631543 container init 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:03:28 np0005481065 podman[347672]: 2025-10-11 09:03:28.031445338 +0000 UTC m=+0.169493599 container start 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:03:28 np0005481065 podman[347672]: 2025-10-11 09:03:28.034769653 +0000 UTC m=+0.172817934 container attach 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:03:28 np0005481065 compassionate_franklin[347688]: 167 167
Oct 11 05:03:28 np0005481065 podman[347672]: 2025-10-11 09:03:28.036187974 +0000 UTC m=+0.174236225 container died 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:03:28 np0005481065 systemd[1]: libpod-156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba.scope: Deactivated successfully.
Oct 11 05:03:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f4566db413e532aedcb0082282ba9bea73505e413f4e3051aa1713e91ecb1465-merged.mount: Deactivated successfully.
Oct 11 05:03:28 np0005481065 podman[347672]: 2025-10-11 09:03:28.069570336 +0000 UTC m=+0.207618577 container remove 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:03:28 np0005481065 systemd[1]: libpod-conmon-156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba.scope: Deactivated successfully.
Oct 11 05:03:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 05:03:28 np0005481065 podman[347712]: 2025-10-11 09:03:28.298221984 +0000 UTC m=+0.064949345 container create 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:03:28 np0005481065 podman[347712]: 2025-10-11 09:03:28.268900007 +0000 UTC m=+0.035627408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:03:28 np0005481065 systemd[1]: Started libpod-conmon-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope.
Oct 11 05:03:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:03:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:03:28 np0005481065 podman[347712]: 2025-10-11 09:03:28.423687925 +0000 UTC m=+0.190415376 container init 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:03:28 np0005481065 podman[347712]: 2025-10-11 09:03:28.444662844 +0000 UTC m=+0.211390225 container start 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:03:28 np0005481065 podman[347712]: 2025-10-11 09:03:28.449607075 +0000 UTC m=+0.216334506 container attach 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:03:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:28 np0005481065 nova_compute[260935]: 2025-10-11 09:03:28.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]: {
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "osd_id": 2,
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "type": "bluestore"
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:    },
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "osd_id": 0,
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "type": "bluestore"
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:    },
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "osd_id": 1,
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:        "type": "bluestore"
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]:    }
Oct 11 05:03:29 np0005481065 admiring_hypatia[347728]: }
Oct 11 05:03:29 np0005481065 systemd[1]: libpod-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope: Deactivated successfully.
Oct 11 05:03:29 np0005481065 systemd[1]: libpod-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope: Consumed 1.145s CPU time.
Oct 11 05:03:29 np0005481065 podman[347712]: 2025-10-11 09:03:29.589122835 +0000 UTC m=+1.355850196 container died 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:03:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e-merged.mount: Deactivated successfully.
Oct 11 05:03:29 np0005481065 podman[347712]: 2025-10-11 09:03:29.657504217 +0000 UTC m=+1.424231568 container remove 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:03:29 np0005481065 systemd[1]: libpod-conmon-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope: Deactivated successfully.
Oct 11 05:03:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:03:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:03:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e21c66a3-b944-402e-a2b3-f8a72986f86a does not exist
Oct 11 05:03:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 692a29df-cf03-4623-9ce3-5d98d1aab078 does not exist
Oct 11 05:03:30 np0005481065 nova_compute[260935]: 2025-10-11 09:03:30.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 05:03:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:03:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:03:31Z|00796|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:03:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:03:31Z|00797|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:03:31 np0005481065 nova_compute[260935]: 2025-10-11 09:03:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:03:32Z|00798|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:03:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:03:32Z|00799|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:03:32 np0005481065 nova_compute[260935]: 2025-10-11 09:03:32.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 63 op/s
Oct 11 05:03:32 np0005481065 nova_compute[260935]: 2025-10-11 09:03:32.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:32 np0005481065 nova_compute[260935]: 2025-10-11 09:03:32.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:32 np0005481065 nova_compute[260935]: 2025-10-11 09:03:32.785 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:32 np0005481065 nova_compute[260935]: 2025-10-11 09:03:32.786 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:03:33 np0005481065 nova_compute[260935]: 2025-10-11 09:03:33.161 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:03:33 np0005481065 nova_compute[260935]: 2025-10-11 09:03:33.162 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:03:33 np0005481065 nova_compute[260935]: 2025-10-11 09:03:33.162 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.737885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413738000, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 665, "num_deletes": 255, "total_data_size": 755202, "memory_usage": 767520, "flush_reason": "Manual Compaction"}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413751699, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 737418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37642, "largest_seqno": 38306, "table_properties": {"data_size": 733905, "index_size": 1357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8080, "raw_average_key_size": 18, "raw_value_size": 726701, "raw_average_value_size": 1705, "num_data_blocks": 60, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173369, "oldest_key_time": 1760173369, "file_creation_time": 1760173413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13901 microseconds, and 6334 cpu microseconds.
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.751792) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 737418 bytes OK
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.751872) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.753927) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.753952) EVENT_LOG_v1 {"time_micros": 1760173413753942, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.753983) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 751648, prev total WAL file size 751648, number of live WAL files 2.
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.754789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(720KB)], [83(7878KB)]
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413754877, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8804926, "oldest_snapshot_seqno": -1}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6057 keys, 8673095 bytes, temperature: kUnknown
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413811740, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8673095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8632449, "index_size": 24408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 154569, "raw_average_key_size": 25, "raw_value_size": 8523509, "raw_average_value_size": 1407, "num_data_blocks": 983, "num_entries": 6057, "num_filter_entries": 6057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.812206) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8673095 bytes
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.813537) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.3 rd, 152.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(23.7) write-amplify(11.8) OK, records in: 6583, records dropped: 526 output_compression: NoCompression
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.813568) EVENT_LOG_v1 {"time_micros": 1760173413813555, "job": 48, "event": "compaction_finished", "compaction_time_micros": 57063, "compaction_time_cpu_micros": 38154, "output_level": 6, "num_output_files": 1, "total_output_size": 8673095, "num_input_records": 6583, "num_output_records": 6057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413814065, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413816683, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.754681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:03:33 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:03:33 np0005481065 nova_compute[260935]: 2025-10-11 09:03:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.5 KiB/s wr, 82 op/s
Oct 11 05:03:35 np0005481065 nova_compute[260935]: 2025-10-11 09:03:35.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 11 05:03:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 11 05:03:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 11 05:03:35 np0005481065 nova_compute[260935]: 2025-10-11 09:03:35.796 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.124 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.124 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.125 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.125 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.126 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.126 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.126 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.127 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:03:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.6 KiB/s wr, 86 op/s
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.253 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.253 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.254 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.254 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.254 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:03:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:03:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826216385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:03:36 np0005481065 nova_compute[260935]: 2025-10-11 09:03:36.720 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:03:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.6 KiB/s wr, 102 op/s
Oct 11 05:03:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:38 np0005481065 nova_compute[260935]: 2025-10-11 09:03:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:40 np0005481065 nova_compute[260935]: 2025-10-11 09:03:40.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.6 KiB/s wr, 102 op/s
Oct 11 05:03:40 np0005481065 podman[347849]: 2025-10-11 09:03:40.79901462 +0000 UTC m=+0.090406712 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:03:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 53 op/s
Oct 11 05:03:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:43 np0005481065 nova_compute[260935]: 2025-10-11 09:03:43.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Oct 11 05:03:45 np0005481065 nova_compute[260935]: 2025-10-11 09:03:45.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Oct 11 05:03:46 np0005481065 podman[347868]: 2025-10-11 09:03:46.81580467 +0000 UTC m=+0.096455134 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 05:03:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 25 op/s
Oct 11 05:03:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:48 np0005481065 nova_compute[260935]: 2025-10-11 09:03:48.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:50 np0005481065 nova_compute[260935]: 2025-10-11 09:03:50.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:50 np0005481065 podman[347891]: 2025-10-11 09:03:50.824473314 +0000 UTC m=+0.112035129 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:03:50 np0005481065 podman[347890]: 2025-10-11 09:03:50.83061836 +0000 UTC m=+0.112876744 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 05:03:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:53 np0005481065 nova_compute[260935]: 2025-10-11 09:03:53.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:03:54
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Oct 11 05:03:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:03:55 np0005481065 nova_compute[260935]: 2025-10-11 09:03:55.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:03:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:03:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:03:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:03:58 np0005481065 nova_compute[260935]: 2025-10-11 09:03:58.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:00 np0005481065 nova_compute[260935]: 2025-10-11 09:04:00.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:03 np0005481065 nova_compute[260935]: 2025-10-11 09:04:03.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666213900826984 of space, bias 1.0, pg target 0.19998641702480952 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:04:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:04:05 np0005481065 nova_compute[260935]: 2025-10-11 09:04:05.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:06Z|00800|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 11 05:04:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:08 np0005481065 nova_compute[260935]: 2025-10-11 09:04:08.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:10 np0005481065 nova_compute[260935]: 2025-10-11 09:04:10.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:11 np0005481065 podman[347934]: 2025-10-11 09:04:11.737697189 +0000 UTC m=+0.045348216 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:04:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:13 np0005481065 nova_compute[260935]: 2025-10-11 09:04:13.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:15 np0005481065 nova_compute[260935]: 2025-10-11 09:04:15.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:15.199 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.699 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 26.07 sec#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.749 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.750 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.750 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.764 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.765 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.771 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 nova_compute[260935]: 2025-10-11 09:04:17.771 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:17 np0005481065 podman[347953]: 2025-10-11 09:04:17.794877722 +0000 UTC m=+0.088420355 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:04:18 np0005481065 nova_compute[260935]: 2025-10-11 09:04:18.029 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:04:18 np0005481065 nova_compute[260935]: 2025-10-11 09:04:18.031 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3208MB free_disk=59.80977249145508GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:04:18 np0005481065 nova_compute[260935]: 2025-10-11 09:04:18.032 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:18 np0005481065 nova_compute[260935]: 2025-10-11 09:04:18.032 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:18 np0005481065 nova_compute[260935]: 2025-10-11 09:04:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.255 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.256 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.256 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.257 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.257 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.258 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.304 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.360 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.361 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.422 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.468 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:04:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:19.594 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:04:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:19.595 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:19 np0005481065 nova_compute[260935]: 2025-10-11 09:04:19.805 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:04:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905249276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.325 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.332 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.389 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.392 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.393 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.394 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.444 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.445 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.512 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:04:20 np0005481065 nova_compute[260935]: 2025-10-11 09:04:20.513 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:04:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 11 05:04:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 11 05:04:21 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.542 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.542 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.626 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.792 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.793 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:21 np0005481065 podman[347996]: 2025-10-11 09:04:21.79841049 +0000 UTC m=+0.087212940 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.801 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:04:21 np0005481065 nova_compute[260935]: 2025-10-11 09:04:21.801 2 INFO nova.compute.claims [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:04:21 np0005481065 podman[347997]: 2025-10-11 09:04:21.856320503 +0000 UTC m=+0.136965871 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.186 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.455 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "14b020dc-ae35-4a87-87c8-ed8504968319" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.456 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.504 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.621 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69219880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.665 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.672 2 DEBUG nova.compute.provider_tree [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.710 2 DEBUG nova.scheduler.client.report [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.808 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.809 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.814 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.824 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.825 2 INFO nova.compute.claims [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:04:22 np0005481065 nova_compute[260935]: 2025-10-11 09:04:22.913 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.039 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.133 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.291 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.428 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.431 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.432 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating image(s)#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.478 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 11 05:04:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 11 05:04:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.535 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.572 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.579 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.655 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.656 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.657 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.657 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.685 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.690 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875438717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.776 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.785 2 DEBUG nova.compute.provider_tree [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.825 2 DEBUG nova.scheduler.client.report [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.881 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.882 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:04:23 np0005481065 nova_compute[260935]: 2025-10-11 09:04:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.058 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.081 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/464112611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/464112611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.172 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] resizing rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742869319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742869319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.309 2 DEBUG nova.objects.instance [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'migration_context' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.324 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.349 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.349 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Ensure instance console log exists: /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.350 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.350 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.351 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.353 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.359 2 WARNING nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.366 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.367 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.372 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.373 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.374 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.374 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.375 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.375 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.376 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.376 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.376 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.377 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.377 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.378 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.378 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.378 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.383 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.429 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468149435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468149435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.637 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.639 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.639 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Creating image(s)#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.676 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.712 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.745 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.749 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:04:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840872353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.850 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.854 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.855 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.856 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.889 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.898 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14b020dc-ae35-4a87-87c8-ed8504968319_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.942 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.977 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:24 np0005481065 nova_compute[260935]: 2025-10-11 09:04:24.983 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.264 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14b020dc-ae35-4a87-87c8-ed8504968319_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.364 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] resizing rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.498 2 DEBUG nova.objects.instance [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'migration_context' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531030210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.535 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.538 2 DEBUG nova.objects.instance [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.556 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.556 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Ensure instance console log exists: /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.557 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.558 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.558 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.561 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.568 2 WARNING nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.574 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <uuid>cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</uuid>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <name>instance-00000058</name>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV257Test-server-919774712</nova:name>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:04:24</nova:creationTime>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:user uuid="a90fe9900cc64109bfeb61e3bc71fb95">tempest-ServerShowV257Test-315301913-project-member</nova:user>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <nova:project uuid="f83a57424c0643a1b6a9b84fe208cb0e">tempest-ServerShowV257Test-315301913</nova:project>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <entry name="serial">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <entry name="uuid">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log" append="off"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:04:25 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:04:25 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:04:25 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:04:25 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.578 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.579 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.589 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.590 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.590 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.591 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.592 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.592 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.592 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.593 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.593 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.594 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.594 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.595 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.595 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.595 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:04:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:25.597 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.602 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.784 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.785 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.787 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Using config drive#033[00m
Oct 11 05:04:25 np0005481065 nova_compute[260935]: 2025-10-11 09:04:25.832 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.076 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating config drive at /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.085 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9y54wy51 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2169207255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.134 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.172 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.177 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.243 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9y54wy51" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.272 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.278 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.490 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.492 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting local config drive /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config because it was imported into RBD.#033[00m
Oct 11 05:04:26 np0005481065 systemd-machined[215705]: New machine qemu-100-instance-00000058.
Oct 11 05:04:26 np0005481065 systemd[1]: Started Virtual Machine qemu-100-instance-00000058.
Oct 11 05:04:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923217079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.750 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.752 2 DEBUG nova.objects.instance [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.792 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <uuid>14b020dc-ae35-4a87-87c8-ed8504968319</uuid>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <name>instance-00000059</name>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersAaction247Test-server-101545139</nova:name>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:04:25</nova:creationTime>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:user uuid="2930ce5b9093473e9d0f4e49fb1e934b">tempest-ServersAaction247Test-1311798005-project-member</nova:user>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <nova:project uuid="5ae7015798d64fdeb69d4d5d5fda3326">tempest-ServersAaction247Test-1311798005</nova:project>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <entry name="serial">14b020dc-ae35-4a87-87c8-ed8504968319</entry>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <entry name="uuid">14b020dc-ae35-4a87-87c8-ed8504968319</entry>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/14b020dc-ae35-4a87-87c8-ed8504968319_disk">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/14b020dc-ae35-4a87-87c8-ed8504968319_disk.config">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/console.log" append="off"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:04:26 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:04:26 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:04:26 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:04:26 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.916 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.916 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.917 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Using config drive
Oct 11 05:04:26 np0005481065 nova_compute[260935]: 2025-10-11 09:04:26.951 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.350 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Creating config drive at /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.359 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyogygpqx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.531 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyogygpqx" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.573 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.578 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.631 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173467.6297598, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.632 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Resumed (Lifecycle Event)
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.638 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.639 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.644 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance spawned successfully.
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.645 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.733 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.740 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.741 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.741 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.742 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.743 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.743 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.751 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.787 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.788 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deleting local config drive /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config because it was imported into RBD.
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.816 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.818 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173467.631032, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.818 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Started (Lifecycle Event)
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.869 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.875 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.884 2 INFO nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 4.45 seconds to spawn the instance on the hypervisor.
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.885 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.907 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:04:27 np0005481065 systemd-machined[215705]: New machine qemu-101-instance-00000059.
Oct 11 05:04:27 np0005481065 systemd[1]: Started Virtual Machine qemu-101-instance-00000059.
Oct 11 05:04:27 np0005481065 nova_compute[260935]: 2025-10-11 09:04:27.979 2 INFO nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 6.21 seconds to build instance.
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.032 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 467 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 MiB/s wr, 158 op/s
Oct 11 05:04:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 11 05:04:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 11 05:04:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.854 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.856 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.856 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173468.852929, 14b020dc-ae35-4a87-87c8-ed8504968319 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.857 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] VM Resumed (Lifecycle Event)
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.869 2 INFO nova.virt.libvirt.driver [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance spawned successfully.
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.869 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.901 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.905 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.934 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.934 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.935 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.935 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.936 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.936 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.975 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.976 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173468.8553307, 14b020dc-ae35-4a87-87c8-ed8504968319 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:04:28 np0005481065 nova_compute[260935]: 2025-10-11 09:04:28.976 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] VM Started (Lifecycle Event)
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.032 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.046 2 INFO nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 4.41 seconds to spawn the instance on the hypervisor.
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.047 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.081 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.199 2 INFO nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 6.60 seconds to build instance.
Oct 11 05:04:29 np0005481065 nova_compute[260935]: 2025-10-11 09:04:29.238 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 467 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.3 MiB/s wr, 133 op/s
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.388 2 INFO nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Rebuilding instance#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.692 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'trusted_certs' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.735 2 DEBUG nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.816 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'pci_requests' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.843 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.870 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'resources' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.891 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'migration_context' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.922 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 05:04:30 np0005481065 nova_compute[260935]: 2025-10-11 09:04:30.932 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:04:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e27992bb-8e9b-4eba-91dc-57828e7c9820 does not exist
Oct 11 05:04:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ae0f2143-4069-48af-bfe6-80fe0052a1f5 does not exist
Oct 11 05:04:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6aa71a5c-42a0-4e5d-8a35-27e92455345c does not exist
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.290 2 DEBUG nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.350 2 INFO nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] instance snapshotting#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.351 2 DEBUG nova.objects.instance [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'flavor' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.540 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "14b020dc-ae35-4a87-87c8-ed8504968319" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.540 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.541 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "14b020dc-ae35-4a87-87c8-ed8504968319-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.541 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.542 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.544 2 INFO nova.compute.manager [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Terminating instance#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.545 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "refresh_cache-14b020dc-ae35-4a87-87c8-ed8504968319" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.546 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquired lock "refresh_cache-14b020dc-ae35-4a87-87c8-ed8504968319" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.546 2 DEBUG nova.network.neutron [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 11 05:04:31 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.743 2 INFO nova.virt.libvirt.driver [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Beginning live snapshot process#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.762 2 DEBUG nova.network.neutron [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:04:31 np0005481065 nova_compute[260935]: 2025-10-11 09:04:31.803 2 DEBUG nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.061051336 +0000 UTC m=+0.067008924 container create 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:04:32 np0005481065 systemd[1]: Started libpod-conmon-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope.
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.037283607 +0000 UTC m=+0.043241285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:04:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.167969848 +0000 UTC m=+0.173927476 container init 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.177079298 +0000 UTC m=+0.183036896 container start 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.180897277 +0000 UTC m=+0.186854935 container attach 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:04:32 np0005481065 competent_dewdney[349061]: 167 167
Oct 11 05:04:32 np0005481065 systemd[1]: libpod-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope: Deactivated successfully.
Oct 11 05:04:32 np0005481065 conmon[349061]: conmon 65fcf7aad827aa6574b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope/container/memory.events
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.185673953 +0000 UTC m=+0.191631561 container died 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:04:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-256db1653e8460fe33fd2801446bb8001323d73d87d758e5bab8208900c57072-merged.mount: Deactivated successfully.
Oct 11 05:04:32 np0005481065 podman[349045]: 2025-10-11 09:04:32.235896707 +0000 UTC m=+0.241854305 container remove 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:04:32 np0005481065 systemd[1]: libpod-conmon-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope: Deactivated successfully.
Oct 11 05:04:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Oct 11 05:04:32 np0005481065 podman[349088]: 2025-10-11 09:04:32.59471579 +0000 UTC m=+0.080519799 container create aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:04:32 np0005481065 podman[349088]: 2025-10-11 09:04:32.54775872 +0000 UTC m=+0.033562789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:04:32 np0005481065 systemd[1]: Started libpod-conmon-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope.
Oct 11 05:04:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:32 np0005481065 podman[349088]: 2025-10-11 09:04:32.71556951 +0000 UTC m=+0.201373589 container init aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:04:32 np0005481065 podman[349088]: 2025-10-11 09:04:32.725686669 +0000 UTC m=+0.211490698 container start aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:04:32 np0005481065 podman[349088]: 2025-10-11 09:04:32.730198358 +0000 UTC m=+0.216002427 container attach aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.039 2 DEBUG nova.network.neutron [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.107 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Releasing lock "refresh_cache-14b020dc-ae35-4a87-87c8-ed8504968319" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.108 2 DEBUG nova.compute.manager [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.188 2 DEBUG nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct 11 05:04:33 np0005481065 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 11 05:04:33 np0005481065 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Consumed 5.113s CPU time.
Oct 11 05:04:33 np0005481065 systemd-machined[215705]: Machine qemu-101-instance-00000059 terminated.
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.331 2 INFO nova.virt.libvirt.driver [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance destroyed successfully.#033[00m
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.332 2 DEBUG nova.objects.instance [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'resources' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 11 05:04:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 11 05:04:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 11 05:04:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.764 2 INFO nova.virt.libvirt.driver [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deleting instance files /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319_del
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.765 2 INFO nova.virt.libvirt.driver [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deletion of /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319_del complete
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.844 2 INFO nova.compute.manager [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.844 2 DEBUG oslo.service.loopingcall [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.844 2 DEBUG nova.compute.manager [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.844 2 DEBUG nova.network.neutron [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.922 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.923 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:33 np0005481065 ecstatic_solomon[349106]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:04:33 np0005481065 ecstatic_solomon[349106]: --> relative data size: 1.0
Oct 11 05:04:33 np0005481065 ecstatic_solomon[349106]: --> All data devices are unavailable
Oct 11 05:04:33 np0005481065 systemd[1]: libpod-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope: Deactivated successfully.
Oct 11 05:04:33 np0005481065 systemd[1]: libpod-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope: Consumed 1.115s CPU time.
Oct 11 05:04:33 np0005481065 podman[349088]: 2025-10-11 09:04:33.958625407 +0000 UTC m=+1.444429396 container died aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:04:33 np0005481065 nova_compute[260935]: 2025-10-11 09:04:33.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:04:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df-merged.mount: Deactivated successfully.
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.019 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:04:34 np0005481065 podman[349088]: 2025-10-11 09:04:34.023684034 +0000 UTC m=+1.509488033 container remove aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 11 05:04:34 np0005481065 systemd[1]: libpod-conmon-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope: Deactivated successfully.
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.141 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.142 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.152 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.153 2 INFO nova.compute.claims [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.170 2 DEBUG nova.network.neutron [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.233 2 DEBUG nova.network.neutron [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:04:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 28 KiB/s wr, 300 op/s
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.305 2 INFO nova.compute.manager [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 0.46 seconds to deallocate network for instance.
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.382 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:34 np0005481065 nova_compute[260935]: 2025-10-11 09:04:34.519 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:04:34 np0005481065 podman[349329]: 2025-10-11 09:04:34.908404479 +0000 UTC m=+0.059820529 container create 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:04:34 np0005481065 systemd[1]: Started libpod-conmon-4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff.scope.
Oct 11 05:04:34 np0005481065 podman[349329]: 2025-10-11 09:04:34.882707166 +0000 UTC m=+0.034123236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:04:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043618827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:35 np0005481065 podman[349329]: 2025-10-11 09:04:35.038273777 +0000 UTC m=+0.189689887 container init 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.041 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:04:35 np0005481065 podman[349329]: 2025-10-11 09:04:35.047542151 +0000 UTC m=+0.198958201 container start 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.053 2 DEBUG nova.compute.provider_tree [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:04:35 np0005481065 mystifying_chatelet[349346]: 167 167
Oct 11 05:04:35 np0005481065 systemd[1]: libpod-4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff.scope: Deactivated successfully.
Oct 11 05:04:35 np0005481065 podman[349329]: 2025-10-11 09:04:35.070666681 +0000 UTC m=+0.222082741 container attach 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:04:35 np0005481065 podman[349329]: 2025-10-11 09:04:35.071803754 +0000 UTC m=+0.223219794 container died 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.105 2 DEBUG nova.scheduler.client.report [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:04:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d6e910854422a861075278aec4597b51bbdd397969e914f669dcc1171876d234-merged.mount: Deactivated successfully.
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.175 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.177 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:04:35 np0005481065 podman[349329]: 2025-10-11 09:04:35.222555567 +0000 UTC m=+0.373971597 container remove 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:04:35 np0005481065 systemd[1]: libpod-conmon-4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff.scope: Deactivated successfully.
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.309 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "7bf7bbee-16b0-4eaa-b3f1-a25e58d9b292" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.310 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "7bf7bbee-16b0-4eaa-b3f1-a25e58d9b292" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.349 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "7bf7bbee-16b0-4eaa-b3f1-a25e58d9b292" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.351 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.412 2 DEBUG oslo_concurrency.processutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.508 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.510 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:04:35 np0005481065 podman[349375]: 2025-10-11 09:04:35.552109465 +0000 UTC m=+0.086981124 container create 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.575 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:04:35 np0005481065 podman[349375]: 2025-10-11 09:04:35.519558966 +0000 UTC m=+0.054430665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:04:35 np0005481065 systemd[1]: Started libpod-conmon-78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d.scope.
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.618 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:04:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:35 np0005481065 podman[349375]: 2025-10-11 09:04:35.681849269 +0000 UTC m=+0.216720928 container init 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:04:35 np0005481065 podman[349375]: 2025-10-11 09:04:35.697694481 +0000 UTC m=+0.232566140 container start 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:04:35 np0005481065 podman[349375]: 2025-10-11 09:04:35.702972602 +0000 UTC m=+0.237844231 container attach 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.733 2 DEBUG nova.policy [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7e5365d09e140aaa4289b21435cbd70', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '884a8b5cd18948009939a3ab6cb1d42a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.793 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.795 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.795 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Creating image(s)
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.822 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.850 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.877 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.882 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:04:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752793777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.969 2 DEBUG oslo_concurrency.processutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.972 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.973 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.974 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:35 np0005481065 nova_compute[260935]: 2025-10-11 09:04:35.974 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.006 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.009 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.071 2 DEBUG nova.compute.provider_tree [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.106 2 DEBUG nova.scheduler.client.report [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.170 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.243 2 INFO nova.scheduler.client.report [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Deleted allocations for instance 14b020dc-ae35-4a87-87c8-ed8504968319#033[00m
Oct 11 05:04:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 22 KiB/s wr, 239 op/s
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.414 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.461 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]: {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:    "0": [
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:        {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "devices": [
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "/dev/loop3"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            ],
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_name": "ceph_lv0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_size": "21470642176",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "name": "ceph_lv0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "tags": {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cluster_name": "ceph",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.crush_device_class": "",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.encrypted": "0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osd_id": "0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.type": "block",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.vdo": "0"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            },
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "type": "block",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "vg_name": "ceph_vg0"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:        }
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:    ],
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:    "1": [
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:        {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "devices": [
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "/dev/loop4"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            ],
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_name": "ceph_lv1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_size": "21470642176",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "name": "ceph_lv1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "tags": {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cluster_name": "ceph",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.crush_device_class": "",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.encrypted": "0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osd_id": "1",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.type": "block",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.vdo": "0"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            },
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "type": "block",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "vg_name": "ceph_vg1"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:        }
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:    ],
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:    "2": [
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:        {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "devices": [
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "/dev/loop5"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            ],
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_name": "ceph_lv2",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_size": "21470642176",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "name": "ceph_lv2",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "tags": {
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.cluster_name": "ceph",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.crush_device_class": "",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.encrypted": "0",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osd_id": "2",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.type": "block",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:                "ceph.vdo": "0"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            },
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "type": "block",
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:            "vg_name": "ceph_vg2"
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:        }
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]:    ]
Oct 11 05:04:36 np0005481065 affectionate_sinoussi[349410]: }
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.514 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] resizing rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:04:36 np0005481065 systemd[1]: libpod-78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d.scope: Deactivated successfully.
Oct 11 05:04:36 np0005481065 podman[349375]: 2025-10-11 09:04:36.524583666 +0000 UTC m=+1.059455345 container died 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:04:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6-merged.mount: Deactivated successfully.
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.569 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Successfully created port: 63a1eeb0-05f1-4f8a-b115-a1323d4d119d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:04:36 np0005481065 podman[349375]: 2025-10-11 09:04:36.597019524 +0000 UTC m=+1.131891153 container remove 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:04:36 np0005481065 systemd[1]: libpod-conmon-78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d.scope: Deactivated successfully.
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.712 2 DEBUG nova.objects.instance [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lazy-loading 'migration_context' on Instance uuid dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.759 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.760 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Ensure instance console log exists: /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.760 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.761 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:36 np0005481065 nova_compute[260935]: 2025-10-11 09:04:36.761 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:37 np0005481065 podman[349737]: 2025-10-11 09:04:37.499341153 +0000 UTC m=+0.053977532 container create c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:04:37 np0005481065 systemd[1]: Started libpod-conmon-c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99.scope.
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.545 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Successfully updated port: 63a1eeb0-05f1-4f8a-b115-a1323d4d119d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:04:37 np0005481065 podman[349737]: 2025-10-11 09:04:37.474254077 +0000 UTC m=+0.028890496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:04:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.583 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.584 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquired lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.584 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:04:37 np0005481065 podman[349737]: 2025-10-11 09:04:37.605250796 +0000 UTC m=+0.159887185 container init c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:04:37 np0005481065 podman[349737]: 2025-10-11 09:04:37.616412485 +0000 UTC m=+0.171048864 container start c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 05:04:37 np0005481065 podman[349737]: 2025-10-11 09:04:37.621534491 +0000 UTC m=+0.176170880 container attach c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:04:37 np0005481065 wizardly_mayer[349753]: 167 167
Oct 11 05:04:37 np0005481065 systemd[1]: libpod-c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99.scope: Deactivated successfully.
Oct 11 05:04:37 np0005481065 podman[349758]: 2025-10-11 09:04:37.67824958 +0000 UTC m=+0.034091994 container died c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:04:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8bc9546443f31a8baf04d62be30abcaddb7e75e7d2a33936aa87cf5f9e2c1cac-merged.mount: Deactivated successfully.
Oct 11 05:04:37 np0005481065 podman[349758]: 2025-10-11 09:04:37.7293956 +0000 UTC m=+0.085238004 container remove c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:04:37 np0005481065 systemd[1]: libpod-conmon-c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99.scope: Deactivated successfully.
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.753 2 DEBUG nova.compute.manager [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-changed-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.759 2 DEBUG nova.compute.manager [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Refreshing instance network info cache due to event network-changed-63a1eeb0-05f1-4f8a-b115-a1323d4d119d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.761 2 DEBUG oslo_concurrency.lockutils [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:04:37 np0005481065 nova_compute[260935]: 2025-10-11 09:04:37.844 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:04:38 np0005481065 podman[349780]: 2025-10-11 09:04:38.023013681 +0000 UTC m=+0.071437010 container create 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:04:38 np0005481065 systemd[1]: Started libpod-conmon-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope.
Oct 11 05:04:38 np0005481065 podman[349780]: 2025-10-11 09:04:37.997942386 +0000 UTC m=+0.046365705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:04:38 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:38 np0005481065 podman[349780]: 2025-10-11 09:04:38.140827985 +0000 UTC m=+0.189251294 container init 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:04:38 np0005481065 podman[349780]: 2025-10-11 09:04:38.154897666 +0000 UTC m=+0.203320995 container start 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:04:38 np0005481065 podman[349780]: 2025-10-11 09:04:38.15994436 +0000 UTC m=+0.208367679 container attach 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:04:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 336 op/s
Oct 11 05:04:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 11 05:04:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 11 05:04:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.892 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updating instance_info_cache with network_info: [{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.966 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Releasing lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.967 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance network_info: |[{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.972 2 DEBUG oslo_concurrency.lockutils [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.972 2 DEBUG nova.network.neutron [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Refreshing network info cache for port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.978 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start _get_guest_xml network_info=[{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.986 2 WARNING nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.993 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.994 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:04:38 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.999 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:38.999 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.000 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.001 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.002 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.003 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.003 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.004 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.004 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.005 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.005 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.006 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.006 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.007 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.012 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]: {
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "osd_id": 2,
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "type": "bluestore"
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:    },
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "osd_id": 0,
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "type": "bluestore"
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:    },
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "osd_id": 1,
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:        "type": "bluestore"
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]:    }
Oct 11 05:04:39 np0005481065 peaceful_panini[349797]: }
Oct 11 05:04:39 np0005481065 systemd[1]: libpod-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope: Deactivated successfully.
Oct 11 05:04:39 np0005481065 podman[349780]: 2025-10-11 09:04:39.244626065 +0000 UTC m=+1.293049394 container died 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:04:39 np0005481065 systemd[1]: libpod-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope: Consumed 1.061s CPU time.
Oct 11 05:04:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14-merged.mount: Deactivated successfully.
Oct 11 05:04:39 np0005481065 podman[349780]: 2025-10-11 09:04:39.315241271 +0000 UTC m=+1.363664570 container remove 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:04:39 np0005481065 systemd[1]: libpod-conmon-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope: Deactivated successfully.
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:04:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2acd76f1-6426-434e-a8af-d8b99e4d5cc6 does not exist
Oct 11 05:04:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 36be4cfb-bb9c-4728-b032-ede1d93103f5 does not exist
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292809805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.500 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.548 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:39 np0005481065 nova_compute[260935]: 2025-10-11 09:04:39.553 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:04:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:04:40 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 11 05:04:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839639460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.075 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.078 2 DEBUG nova.virt.libvirt.vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:04:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-236424263',display_name='tempest-ServerGroupTestJSON-server-236424263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-236424263',id=90,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='884a8b5cd18948009939a3ab6cb1d42a',ramdisk_id='',reservation_id='r-4ccwpn6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1907510446',owner_user_name='tempest-ServerGroupTestJSON-1907510446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:04:35Z,user_data=None,user_id='f7e5365d09e140aaa4289b21435cbd70',uuid=dffb6f2b-b5a8-4d28-ae9b-aca7728ce617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.078 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converting VIF {"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.079 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.081 2 DEBUG nova.objects.instance [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lazy-loading 'pci_devices' on Instance uuid dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.119 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <uuid>dffb6f2b-b5a8-4d28-ae9b-aca7728ce617</uuid>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <name>instance-0000005a</name>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerGroupTestJSON-server-236424263</nova:name>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:04:38</nova:creationTime>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:user uuid="f7e5365d09e140aaa4289b21435cbd70">tempest-ServerGroupTestJSON-1907510446-project-member</nova:user>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:project uuid="884a8b5cd18948009939a3ab6cb1d42a">tempest-ServerGroupTestJSON-1907510446</nova:project>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <nova:port uuid="63a1eeb0-05f1-4f8a-b115-a1323d4d119d">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <entry name="serial">dffb6f2b-b5a8-4d28-ae9b-aca7728ce617</entry>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <entry name="uuid">dffb6f2b-b5a8-4d28-ae9b-aca7728ce617</entry>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:a1:89:76"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <target dev="tap63a1eeb0-05"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/console.log" append="off"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:04:40 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:04:40 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:04:40 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:04:40 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.120 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Preparing to wait for external event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.121 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.124 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.124 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.125 2 DEBUG nova.virt.libvirt.vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:04:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-236424263',display_name='tempest-ServerGroupTestJSON-server-236424263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-236424263',id=90,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='884a8b5cd18948009939a3ab6cb1d42a',ramdisk_id='',reservation_id='r-4ccwpn6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1907510446',owner_user_name='tempest-ServerGroupTestJSON-1907510446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:04:35Z,user_data=None,user_id='f7e5365d09e140aaa4289b21435cbd70',uuid=dffb6f2b-b5a8-4d28-ae9b-aca7728ce617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.126 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converting VIF {"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.128 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.128 2 DEBUG os_vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63a1eeb0-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63a1eeb0-05, col_values=(('external_ids', {'iface-id': '63a1eeb0-05f1-4f8a-b115-a1323d4d119d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:89:76', 'vm-uuid': 'dffb6f2b-b5a8-4d28-ae9b-aca7728ce617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:40 np0005481065 NetworkManager[44960]: <info>  [1760173480.1442] manager: (tap63a1eeb0-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.156 2 INFO os_vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05')#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.268 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.269 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.269 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] No VIF found with MAC fa:16:3e:a1:89:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.270 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Using config drive#033[00m
Oct 11 05:04:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 199 op/s
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.298 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 11 05:04:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 11 05:04:40 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.730 2 DEBUG nova.network.neutron [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updated VIF entry in instance network info cache for port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.730 2 DEBUG nova.network.neutron [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updating instance_info_cache with network_info: [{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:40 np0005481065 nova_compute[260935]: 2025-10-11 09:04:40.760 2 DEBUG oslo_concurrency.lockutils [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.012 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Creating config drive at /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.020 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6_t6w6s3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.107 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.194 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6_t6w6s3" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.237 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.242 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.504 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.505 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deleting local config drive /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config because it was imported into RBD.#033[00m
Oct 11 05:04:41 np0005481065 kernel: tap63a1eeb0-05: entered promiscuous mode
Oct 11 05:04:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:41Z|00801|binding|INFO|Claiming lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d for this chassis.
Oct 11 05:04:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:41Z|00802|binding|INFO|63a1eeb0-05f1-4f8a-b115-a1323d4d119d: Claiming fa:16:3e:a1:89:76 10.100.0.3
Oct 11 05:04:41 np0005481065 NetworkManager[44960]: <info>  [1760173481.5832] manager: (tap63a1eeb0-05): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.609 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:89:76 10.100.0.3'], port_security=['fa:16:3e:a1:89:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dffb6f2b-b5a8-4d28-ae9b-aca7728ce617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '884a8b5cd18948009939a3ab6cb1d42a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1f8aaee-5285-4faa-914e-87133500808c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57af006d-3f68-4651-b79e-9b2e72269c0e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=63a1eeb0-05f1-4f8a-b115-a1323d4d119d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.610 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d in datapath 80aba2f5-1646-44f0-b2f3-d9c000867c6d bound to our chassis#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.613 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80aba2f5-1646-44f0-b2f3-d9c000867c6d#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.665 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19e83f7f-7751-46b6-bb77-9a12eb0c1951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.667 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80aba2f5-11 in ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:04:41 np0005481065 systemd-udevd[350030]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.671 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80aba2f5-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.671 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[450179d4-98bc-4345-8874-699ccff75981]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.672 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bffad023-107e-448f-883a-6eb599f5d9c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 systemd-machined[215705]: New machine qemu-102-instance-0000005a.
Oct 11 05:04:41 np0005481065 NetworkManager[44960]: <info>  [1760173481.6826] device (tap63a1eeb0-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:04:41 np0005481065 NetworkManager[44960]: <info>  [1760173481.6832] device (tap63a1eeb0-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:04:41 np0005481065 systemd[1]: Started Virtual Machine qemu-102-instance-0000005a.
Oct 11 05:04:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:41Z|00803|binding|INFO|Setting lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d ovn-installed in OVS
Oct 11 05:04:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:41Z|00804|binding|INFO|Setting lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d up in Southbound
Oct 11 05:04:41 np0005481065 nova_compute[260935]: 2025-10-11 09:04:41.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.696 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5cdbc1-5bec-4059-89cf-00a8fcf315cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.715 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d969edba-102e-41b7-b65a-02d6fa719a22]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.771 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfc7d82-71b6-4204-a740-6acfb66251e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eedfb6e4-3522-4026-9d38-39d31be4b864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 NetworkManager[44960]: <info>  [1760173481.7803] manager: (tap80aba2f5-10): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Oct 11 05:04:41 np0005481065 systemd-udevd[350034]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.837 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0d13f969-e0de-4f47-8878-9c293dddc8d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.841 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[87ba1c7f-2604-4fc3-8e4c-f773a99a29c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 NetworkManager[44960]: <info>  [1760173481.8714] device (tap80aba2f5-10): carrier: link connected
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.878 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[79f3f930-73c8-4e88-b565-4edcbc9c2a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.904 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[038eadf4-952b-4aef-9938-c2d03255b4fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80aba2f5-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:e7:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532657, 'reachable_time': 30369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350073, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.925 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[626e9b6c-a34b-4f20-acf4-8629f5f2fb38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:e739'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532657, 'tstamp': 532657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350079, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 podman[350047]: 2025-10-11 09:04:41.928776258 +0000 UTC m=+0.093461139 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03a8519f-61f4-4973-9c61-ced46c670fea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80aba2f5-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:e7:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532657, 'reachable_time': 30369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350082, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cee455-20ce-4012-8580-b6050c4fa90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.071 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7993875-1481-492b-b970-a7deb04e3bda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80aba2f5-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80aba2f5-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:42 np0005481065 NetworkManager[44960]: <info>  [1760173482.1088] manager: (tap80aba2f5-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Oct 11 05:04:42 np0005481065 kernel: tap80aba2f5-10: entered promiscuous mode
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.113 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80aba2f5-10, col_values=(('external_ids', {'iface-id': '0ca342cd-acfa-447a-940e-c8b97b9bebfe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:42Z|00805|binding|INFO|Releasing lport 0ca342cd-acfa-447a-940e-c8b97b9bebfe from this chassis (sb_readonly=0)
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.146 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80aba2f5-1646-44f0-b2f3-d9c000867c6d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80aba2f5-1646-44f0-b2f3-d9c000867c6d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.147 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[511d9182-b198-49fd-a99c-b91d35fd45fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.148 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-80aba2f5-1646-44f0-b2f3-d9c000867c6d
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/80aba2f5-1646-44f0-b2f3-d9c000867c6d.pid.haproxy
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 80aba2f5-1646-44f0-b2f3-d9c000867c6d
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:04:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.149 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'env', 'PROCESS_TAG=haproxy-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80aba2f5-1646-44f0-b2f3-d9c000867c6d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:04:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 491 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 5.2 MiB/s wr, 189 op/s
Oct 11 05:04:42 np0005481065 podman[350156]: 2025-10-11 09:04:42.576437247 +0000 UTC m=+0.078631675 container create 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:04:42 np0005481065 systemd[1]: Started libpod-conmon-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e.scope.
Oct 11 05:04:42 np0005481065 podman[350156]: 2025-10-11 09:04:42.537228208 +0000 UTC m=+0.039422726 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:04:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:04:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f771fc789c94264e2ade21343c20941b7339208f940dc9d09191577d5f9af6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:04:42 np0005481065 podman[350156]: 2025-10-11 09:04:42.688356282 +0000 UTC m=+0.190550730 container init 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:04:42 np0005481065 podman[350156]: 2025-10-11 09:04:42.698055699 +0000 UTC m=+0.200250117 container start 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:04:42 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : New worker (350177) forked
Oct 11 05:04:42 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : Loading success.
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.803 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173482.8031628, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.804 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Started (Lifecycle Event)#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.844 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.850 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173482.8074305, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.850 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.947 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:42 np0005481065 nova_compute[260935]: 2025-10-11 09:04:42.952 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.014 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.223 2 DEBUG nova.compute.manager [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.223 2 DEBUG oslo_concurrency.lockutils [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.224 2 DEBUG oslo_concurrency.lockutils [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.224 2 DEBUG oslo_concurrency.lockutils [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.224 2 DEBUG nova.compute.manager [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Processing event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.225 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.228 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173483.2284052, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.228 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.233 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.237 2 INFO nova.virt.libvirt.driver [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance spawned successfully.#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.237 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.312 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.319 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.319 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.320 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.320 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.321 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.321 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.326 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.382 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.429 2 INFO nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 7.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.429 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.706 2 INFO nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 9.59 seconds to build instance.#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.746 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.840 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.840 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.902 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:04:43 np0005481065 nova_compute[260935]: 2025-10-11 09:04:43.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.131 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 05:04:44 np0005481065 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 11 05:04:44 np0005481065 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Consumed 14.406s CPU time.
Oct 11 05:04:44 np0005481065 systemd-machined[215705]: Machine qemu-100-instance-00000058 terminated.
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.277 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.279 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.287 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.288 2 INFO nova.compute.claims [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:04:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 500 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 5.9 MiB/s wr, 241 op/s
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.354 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance destroyed successfully.#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.359 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance destroyed successfully.#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.782 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.875 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting instance files /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del#033[00m
Oct 11 05:04:44 np0005481065 nova_compute[260935]: 2025-10-11 09:04:44.877 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deletion of /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del complete#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.152 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.153 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating image(s)#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.190 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.229 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136822517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.269 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.275 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.328 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.338 2 DEBUG nova.compute.provider_tree [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.368 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.369 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.370 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.371 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.412 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.419 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.471 2 DEBUG nova.scheduler.client.report [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.482 2 DEBUG nova.compute.manager [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.482 2 DEBUG oslo_concurrency.lockutils [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.483 2 DEBUG oslo_concurrency.lockutils [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.483 2 DEBUG oslo_concurrency.lockutils [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.483 2 DEBUG nova.compute.manager [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] No waiting events found dispatching network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.484 2 WARNING nova.compute.manager [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received unexpected event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d for instance with vm_state active and task_state None.#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.525 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.526 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.745 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.746 2 DEBUG nova.network.neutron [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.757 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.852 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] resizing rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.900 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.977 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.978 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Ensure instance console log exists: /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.979 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.980 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.981 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.983 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.989 2 WARNING nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.994 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:04:45 np0005481065 nova_compute[260935]: 2025-10-11 09:04:45.995 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.000 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.001 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.002 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.003 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.004 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.004 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.005 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.005 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.006 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.007 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.007 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.008 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.009 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.009 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.010 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'vcpu_model' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.047 2 DEBUG nova.network.neutron [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.048 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:04:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 500 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.398 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.528 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.575 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.576 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.577 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.577 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.577 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.579 2 INFO nova.compute.manager [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Terminating instance#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.580 2 DEBUG nova.compute.manager [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:04:46 np0005481065 kernel: tap63a1eeb0-05 (unregistering): left promiscuous mode
Oct 11 05:04:46 np0005481065 NetworkManager[44960]: <info>  [1760173486.6274] device (tap63a1eeb0-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.641 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:04:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:46Z|00806|binding|INFO|Releasing lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d from this chassis (sb_readonly=0)
Oct 11 05:04:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:46Z|00807|binding|INFO|Setting lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d down in Southbound
Oct 11 05:04:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:46Z|00808|binding|INFO|Removing iface tap63a1eeb0-05 ovn-installed in OVS
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.643 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.644 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Creating image(s)#033[00m
Oct 11 05:04:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.653 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:89:76 10.100.0.3'], port_security=['fa:16:3e:a1:89:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dffb6f2b-b5a8-4d28-ae9b-aca7728ce617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '884a8b5cd18948009939a3ab6cb1d42a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1f8aaee-5285-4faa-914e-87133500808c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57af006d-3f68-4651-b79e-9b2e72269c0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=63a1eeb0-05f1-4f8a-b115-a1323d4d119d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:04:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.655 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d in datapath 80aba2f5-1646-44f0-b2f3-d9c000867c6d unbound from our chassis#033[00m
Oct 11 05:04:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.657 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80aba2f5-1646-44f0-b2f3-d9c000867c6d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:04:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[94a07826-5292-423c-af90-d3705e936d5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.659 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d namespace which is not needed anymore#033[00m
Oct 11 05:04:46 np0005481065 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 11 05:04:46 np0005481065 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Consumed 4.412s CPU time.
Oct 11 05:04:46 np0005481065 systemd-machined[215705]: Machine qemu-102-instance-0000005a terminated.
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.688 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.719 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.759 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.768 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "24fb2f8914224986c1a288ff195771cacbd87d4d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.769 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "24fb2f8914224986c1a288ff195771cacbd87d4d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.815 2 INFO nova.virt.libvirt.driver [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance destroyed successfully.#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.816 2 DEBUG nova.objects.instance [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lazy-loading 'resources' on Instance uuid dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.839 2 DEBUG nova.virt.libvirt.vif [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:04:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-236424263',display_name='tempest-ServerGroupTestJSON-server-236424263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-236424263',id=90,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:04:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='884a8b5cd18948009939a3ab6cb1d42a',ramdisk_id='',reservation_id='r-4ccwpn6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1907510446',owner_user_name='tempest-ServerGroupTestJSON-1907510446-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:04:43Z,user_data=None,user_id='f7e5365d09e140aaa4289b21435cbd70',uuid=dffb6f2b-b5a8-4d28-ae9b-aca7728ce617,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.844 2 DEBUG nova.network.os_vif_util [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converting VIF {"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:04:46 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : haproxy version is 2.8.14-c23fe91
Oct 11 05:04:46 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : path to executable is /usr/sbin/haproxy
Oct 11 05:04:46 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [WARNING]  (350175) : Exiting Master process...
Oct 11 05:04:46 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [WARNING]  (350175) : Exiting Master process...
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.845 2 DEBUG nova.network.os_vif_util [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.846 2 DEBUG os_vif [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:04:46 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [ALERT]    (350175) : Current worker (350177) exited with code 143 (Terminated)
Oct 11 05:04:46 np0005481065 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [WARNING]  (350175) : All workers exited. Exiting... (0)
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a1eeb0-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:46 np0005481065 systemd[1]: libpod-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e.scope: Deactivated successfully.
Oct 11 05:04:46 np0005481065 podman[350495]: 2025-10-11 09:04:46.857107317 +0000 UTC m=+0.063151304 container died 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.906 2 INFO os_vif [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05')#033[00m
Oct 11 05:04:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e-userdata-shm.mount: Deactivated successfully.
Oct 11 05:04:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-35f771fc789c94264e2ade21343c20941b7339208f940dc9d09191577d5f9af6-merged.mount: Deactivated successfully.
Oct 11 05:04:46 np0005481065 podman[350495]: 2025-10-11 09:04:46.931899452 +0000 UTC m=+0.137943459 container cleanup 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:04:46 np0005481065 systemd[1]: libpod-conmon-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e.scope: Deactivated successfully.
Oct 11 05:04:46 np0005481065 nova_compute[260935]: 2025-10-11 09:04:46.990 2 DEBUG nova.virt.libvirt.imagebackend [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/d6bf8b58-0442-4653-84ed-9390a999a943/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/d6bf8b58-0442-4653-84ed-9390a999a943/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 05:04:47 np0005481065 podman[350550]: 2025-10-11 09:04:47.015046506 +0000 UTC m=+0.054627781 container remove 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[063a8bf5-d519-4be2-8c8c-e7673e1da34d]: (4, ('Sat Oct 11 09:04:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d (1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e)\n1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e\nSat Oct 11 09:04:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d (1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e)\n1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1f8acf-3f0d-4618-8c04-e50a47b2726a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.035 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80aba2f5-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.047 2 DEBUG nova.virt.libvirt.imagebackend [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/d6bf8b58-0442-4653-84ed-9390a999a943/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.047 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] cloning images/d6bf8b58-0442-4653-84ed-9390a999a943@snap to None/4af522a5-ae89-49de-a956-50cfa7dd136a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:04:47 np0005481065 kernel: tap80aba2f5-10: left promiscuous mode
Oct 11 05:04:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299647705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5817d724-69a3-4754-bde2-1b320412730c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9956c49e-0ed7-46b6-b191-4d03fb981216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[031b384d-134c-4067-8ea3-3875128af63d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.094 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c54d727-bf98-4761-9d1f-e57f487b4d0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532646, 'reachable_time': 36885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350634, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.114 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.115 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:04:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[88f66bd8-7c9a-4b05-bf6e-3a84d7f0d93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:04:47 np0005481065 systemd[1]: run-netns-ovnmeta\x2d80aba2f5\x2d1646\x2d44f0\x2db2f3\x2dd9c000867c6d.mount: Deactivated successfully.
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.121 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.214 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "24fb2f8914224986c1a288ff195771cacbd87d4d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.363 2 INFO nova.virt.libvirt.driver [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deleting instance files /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_del#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.364 2 INFO nova.virt.libvirt.driver [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deletion of /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_del complete#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.371 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] resizing rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.447 2 DEBUG nova.objects.instance [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lazy-loading 'migration_context' on Instance uuid 4af522a5-ae89-49de-a956-50cfa7dd136a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.480 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.480 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Ensure instance console log exists: /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.481 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.481 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.481 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.483 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='762f3d8f1f3fcf09a35e950f82f57acd',container_format='bare',created_at=2025-10-11T09:04:39Z,direct_url=<?>,disk_format='raw',id=d6bf8b58-0442-4653-84ed-9390a999a943,min_disk=0,min_ram=0,name='tempest-image-dependency-test-269879602',owner='001deb6d2eb5499b8e24cf16d6c2dbaa',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-11T09:04:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': 'd6bf8b58-0442-4653-84ed-9390a999a943'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.484 2 INFO nova.compute.manager [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.485 2 DEBUG oslo.service.loopingcall [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.485 2 DEBUG nova.compute.manager [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.486 2 DEBUG nova.network.neutron [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.492 2 WARNING nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.500 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.501 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.506 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.506 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.507 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.507 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='762f3d8f1f3fcf09a35e950f82f57acd',container_format='bare',created_at=2025-10-11T09:04:39Z,direct_url=<?>,disk_format='raw',id=d6bf8b58-0442-4653-84ed-9390a999a943,min_disk=0,min_ram=0,name='tempest-image-dependency-test-269879602',owner='001deb6d2eb5499b8e24cf16d6c2dbaa',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-11T09:04:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.507 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.512 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945827982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.607 2 DEBUG nova.compute.manager [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-unplugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.608 2 DEBUG oslo_concurrency.lockutils [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG oslo_concurrency.lockutils [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG oslo_concurrency.lockutils [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG nova.compute.manager [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] No waiting events found dispatching network-vif-unplugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG nova.compute.manager [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-unplugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.621 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.625 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <uuid>cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</uuid>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <name>instance-00000058</name>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV257Test-server-919774712</nova:name>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:04:45</nova:creationTime>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:user uuid="a90fe9900cc64109bfeb61e3bc71fb95">tempest-ServerShowV257Test-315301913-project-member</nova:user>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <nova:project uuid="f83a57424c0643a1b6a9b84fe208cb0e">tempest-ServerShowV257Test-315301913</nova:project>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <entry name="serial">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <entry name="uuid">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log" append="off"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:04:47 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:04:47 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:04:47 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:04:47 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.736 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.736 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.737 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Using config drive#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.757 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.787 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'ec2_ids' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:47 np0005481065 nova_compute[260935]: 2025-10-11 09:04:47.888 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'keypairs' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280813713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.017 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.051 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.056 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.126 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.127 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.127 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:04:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 421 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 301 op/s
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.329 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173473.329016, 14b020dc-ae35-4a87-87c8-ed8504968319 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.329 2 INFO nova.compute.manager [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.357 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating config drive at /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.362 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvj9s1rw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.409 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.409 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.409 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.411 2 DEBUG nova.compute.manager [None req-cfaa289a-694b-4301-bf10-efcea187848b - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.520 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvj9s1rw" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.557 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.562 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:04:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947318784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.615 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.618 2 DEBUG nova.objects.instance [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lazy-loading 'pci_devices' on Instance uuid 4af522a5-ae89-49de-a956-50cfa7dd136a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.666 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <uuid>4af522a5-ae89-49de-a956-50cfa7dd136a</uuid>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <name>instance-0000005b</name>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:name>instance-depend-image</nova:name>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:04:47</nova:creationTime>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:user uuid="1c9cf27bf1c44ef1ba0c291e3d5818be">tempest-ImageDependencyTests-755665872-project-member</nova:user>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <nova:project uuid="001deb6d2eb5499b8e24cf16d6c2dbaa">tempest-ImageDependencyTests-755665872</nova:project>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="d6bf8b58-0442-4653-84ed-9390a999a943"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <entry name="serial">4af522a5-ae89-49de-a956-50cfa7dd136a</entry>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <entry name="uuid">4af522a5-ae89-49de-a956-50cfa7dd136a</entry>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4af522a5-ae89-49de-a956-50cfa7dd136a_disk">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/console.log" append="off"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:04:48 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:04:48 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:04:48 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:04:48 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:04:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.791 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.792 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting local config drive /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config because it was imported into RBD.#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.803 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.803 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.804 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Using config drive#033[00m
Oct 11 05:04:48 np0005481065 podman[350864]: 2025-10-11 09:04:48.819086265 +0000 UTC m=+0.115669023 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.830 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:48 np0005481065 systemd-machined[215705]: New machine qemu-103-instance-00000058.
Oct 11 05:04:48 np0005481065 systemd[1]: Started Virtual Machine qemu-103-instance-00000058.
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.912 2 DEBUG nova.network.neutron [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:48 np0005481065 nova_compute[260935]: 2025-10-11 09:04:48.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.100 2 INFO nova.compute.manager [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 1.61 seconds to deallocate network for instance.#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.105 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Creating config drive at /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.117 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9wil9ay execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.290 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9wil9ay" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.317 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.322 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.395 2 DEBUG nova.compute.manager [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.396 2 DEBUG nova.compute.manager [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing instance network info cache due to event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.396 2 DEBUG oslo_concurrency.lockutils [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.398 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.398 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.508 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.509 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deleting local config drive /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config because it was imported into RBD.#033[00m
Oct 11 05:04:49 np0005481065 systemd-machined[215705]: New machine qemu-104-instance-0000005b.
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.602 2 DEBUG oslo_concurrency.processutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:49 np0005481065 systemd[1]: Started Virtual Machine qemu-104-instance-0000005b.
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.805 2 DEBUG nova.compute.manager [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG oslo_concurrency.lockutils [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG oslo_concurrency.lockutils [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG oslo_concurrency.lockutils [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG nova.compute.manager [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] No waiting events found dispatching network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.807 2 WARNING nova.compute.manager [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received unexpected event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.858 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.859 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173489.85819, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.859 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.861 2 DEBUG nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.862 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.873 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance spawned successfully.#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.873 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.916 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.921 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.938 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.938 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.939 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.939 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.940 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:49 np0005481065 nova_compute[260935]: 2025-10-11 09:04:49.940 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.027 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.027 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173489.8625157, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Started (Lifecycle Event)#033[00m
Oct 11 05:04:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737087605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.105 2 DEBUG oslo_concurrency.processutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.111 2 DEBUG nova.compute.provider_tree [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.147 2 DEBUG nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.171 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.181 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.221 2 DEBUG nova.scheduler.client.report [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 421 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 287 op/s
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.322 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.409 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.478 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.482 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.482 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.644 2 INFO nova.scheduler.client.report [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Deleted allocations for instance dffb6f2b-b5a8-4d28-ae9b-aca7728ce617
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.840 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:50 np0005481065 nova_compute[260935]: 2025-10-11 09:04:50.920 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.282 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173491.2821743, 4af522a5-ae89-49de-a956-50cfa7dd136a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.283 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] VM Resumed (Lifecycle Event)
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.285 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.286 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.289 2 INFO nova.virt.libvirt.driver [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance spawned successfully.
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.289 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.353 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.357 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.387 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.388 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.388 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.389 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.389 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.389 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.415 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.415 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173491.2832425, 4af522a5-ae89-49de-a956-50cfa7dd136a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.415 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] VM Started (Lifecycle Event)
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.527 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.531 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.537 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.539 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.540 2 INFO nova.compute.manager [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Terminating instance
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.541 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "refresh_cache-cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.542 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquired lock "refresh_cache-cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.542 2 DEBUG nova.network.neutron [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.554 2 INFO nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 4.91 seconds to spawn the instance on the hypervisor.
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.555 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.592 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.665 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.666 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.667 2 DEBUG oslo_concurrency.lockutils [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.667 2 DEBUG nova.network.neutron [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.668 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.669 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.670 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.670 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.670 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.671 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.725 2 INFO nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 7.70 seconds to build instance.
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.749 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 15633aee-234a-4417-b5ea-f35f13820404 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.751 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.751 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 4af522a5-ae89-49de-a956-50cfa7dd136a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.751 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.752 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.753 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.756 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.756 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.756 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.757 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.821 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.838 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.839 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.841 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.843 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.950 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.970 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.970 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.971 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.971 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 05:04:51 np0005481065 nova_compute[260935]: 2025-10-11 09:04:51.971 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:04:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 421 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 323 op/s
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.370 2 DEBUG nova.network.neutron [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:04:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934776140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.480 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.606 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.606 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.610 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.610 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.611 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.615 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.616 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.622 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:04:52 np0005481065 podman[351113]: 2025-10-11 09:04:52.635510032 +0000 UTC m=+0.097331040 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 05:04:52 np0005481065 podman[351114]: 2025-10-11 09:04:52.710521793 +0000 UTC m=+0.165864166 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.740 2 DEBUG nova.network.neutron [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.797 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Releasing lock "refresh_cache-cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.798 2 DEBUG nova.compute.manager [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:04:52 np0005481065 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 11 05:04:52 np0005481065 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Consumed 3.822s CPU time.
Oct 11 05:04:52 np0005481065 systemd-machined[215705]: Machine qemu-103-instance-00000058 terminated.
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.933 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.934 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3010MB free_disk=59.788700103759766GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:52 np0005481065 nova_compute[260935]: 2025-10-11 09:04:52.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.037 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance destroyed successfully.#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.038 2 DEBUG nova.objects.instance [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'resources' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.117 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.119 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 4af522a5-ae89-49de-a956-50cfa7dd136a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.119 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.119 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.338 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.624 2 DEBUG nova.network.neutron [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated VIF entry in instance network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.625 2 DEBUG nova.network.neutron [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.683 2 INFO nova.virt.libvirt.driver [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting instance files /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.684 2 INFO nova.virt.libvirt.driver [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deletion of /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del complete#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.739 2 DEBUG oslo_concurrency.lockutils [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.740 2 DEBUG nova.compute.manager [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-deleted-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:04:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130079266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.848 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.862 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.869 2 INFO nova.compute.manager [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.869 2 DEBUG oslo.service.loopingcall [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.870 2 DEBUG nova.compute.manager [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.870 2 DEBUG nova.network.neutron [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.910 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.971 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:04:53 np0005481065 nova_compute[260935]: 2025-10-11 09:04:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.057 2 DEBUG nova.network.neutron [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.211 2 DEBUG nova.network.neutron [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.286 2 INFO nova.compute.manager [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 0.42 seconds to deallocate network for instance.#033[00m
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 301 op/s
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.521 2 DEBUG nova.compute.manager [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.539 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.620 2 INFO nova.compute.manager [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] instance snapshotting#033[00m
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.725 2 DEBUG oslo_concurrency.processutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:04:54
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Oct 11 05:04:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:04:54 np0005481065 nova_compute[260935]: 2025-10-11 09:04:54.975 2 INFO nova.virt.libvirt.driver [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Beginning live snapshot process#033[00m
Oct 11 05:04:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:04:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224717697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:04:55 np0005481065 nova_compute[260935]: 2025-10-11 09:04:55.186 2 DEBUG oslo_concurrency.processutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:04:55 np0005481065 nova_compute[260935]: 2025-10-11 09:04:55.194 2 DEBUG nova.compute.provider_tree [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:04:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:04:55 np0005481065 nova_compute[260935]: 2025-10-11 09:04:55.370 2 DEBUG nova.scheduler.client.report [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:04:55 np0005481065 nova_compute[260935]: 2025-10-11 09:04:55.383 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] creating snapshot(2e7a082774e34d0eaf6df91effc02bf7) on rbd image(4af522a5-ae89-49de-a956-50cfa7dd136a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 05:04:55 np0005481065 nova_compute[260935]: 2025-10-11 09:04:55.507 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 266 op/s
Oct 11 05:04:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 11 05:04:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 11 05:04:56 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.401 2 INFO nova.scheduler.client.report [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Deleted allocations for instance cb40f13d-eaf0-4d7f-8745-acdd85fa0e86
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.504 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] cloning vms/4af522a5-ae89-49de-a956-50cfa7dd136a_disk@2e7a082774e34d0eaf6df91effc02bf7 to images/d9a855f1-b63e-4cbc-895e-23378797f027 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.605 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.607 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.607 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.608 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.745 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] flattening images/d9a855f1-b63e-4cbc-895e-23378797f027 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:04:56 np0005481065 nova_compute[260935]: 2025-10-11 09:04:56.975 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] removing snapshot(2e7a082774e34d0eaf6df91effc02bf7) on rbd image(4af522a5-ae89-49de-a956-50cfa7dd136a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 05:04:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 11 05:04:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 11 05:04:57 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 11 05:04:57 np0005481065 nova_compute[260935]: 2025-10-11 09:04:57.457 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] creating snapshot(snap) on rbd image(d9a855f1-b63e-4cbc-895e-23378797f027) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 05:04:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:58Z|00809|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:04:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:04:58Z|00810|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:04:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 283 op/s
Oct 11 05:04:58 np0005481065 nova_compute[260935]: 2025-10-11 09:04:58.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:04:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 11 05:04:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 11 05:04:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 11 05:04:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:04:58 np0005481065 nova_compute[260935]: 2025-10-11 09:04:58.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 6.5 KiB/s wr, 150 op/s
Oct 11 05:05:00 np0005481065 nova_compute[260935]: 2025-10-11 09:05:00.389 2 INFO nova.virt.libvirt.driver [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Snapshot image upload complete
Oct 11 05:05:00 np0005481065 nova_compute[260935]: 2025-10-11 09:05:00.390 2 INFO nova.compute.manager [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 5.77 seconds to snapshot the instance on the hypervisor.
Oct 11 05:05:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 11 05:05:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 11 05:05:01 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 11 05:05:01 np0005481065 nova_compute[260935]: 2025-10-11 09:05:01.815 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173486.8129892, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:05:01 np0005481065 nova_compute[260935]: 2025-10-11 09:05:01.815 2 INFO nova.compute.manager [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Stopped (Lifecycle Event)
Oct 11 05:05:01 np0005481065 nova_compute[260935]: 2025-10-11 09:05:01.896 2 DEBUG nova.compute.manager [None req-df9fa7c4-6291-4d17-8451-8b7cd6a83ba3 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:05:01 np0005481065 nova_compute[260935]: 2025-10-11 09:05:01.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 11 KiB/s wr, 259 op/s
Oct 11 05:05:02 np0005481065 nova_compute[260935]: 2025-10-11 09:05:02.995 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:05:02 np0005481065 nova_compute[260935]: 2025-10-11 09:05:02.996 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:05:02 np0005481065 nova_compute[260935]: 2025-10-11 09:05:02.996 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:05:02 np0005481065 nova_compute[260935]: 2025-10-11 09:05:02.997 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:05:02 np0005481065 nova_compute[260935]: 2025-10-11 09:05:02.997 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:02 np0005481065 nova_compute[260935]: 2025-10-11 09:05:02.999 2 INFO nova.compute.manager [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Terminating instance
Oct 11 05:05:03 np0005481065 nova_compute[260935]: 2025-10-11 09:05:03.000 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "refresh_cache-4af522a5-ae89-49de-a956-50cfa7dd136a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:05:03 np0005481065 nova_compute[260935]: 2025-10-11 09:05:03.001 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquired lock "refresh_cache-4af522a5-ae89-49de-a956-50cfa7dd136a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:05:03 np0005481065 nova_compute[260935]: 2025-10-11 09:05:03.001 2 DEBUG nova.network.neutron [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:05:03 np0005481065 nova_compute[260935]: 2025-10-11 09:05:03.404 2 DEBUG nova.network.neutron [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:05:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:03 np0005481065 nova_compute[260935]: 2025-10-11 09:05:03.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 6.5 KiB/s wr, 160 op/s
Oct 11 05:05:04 np0005481065 nova_compute[260935]: 2025-10-11 09:05:04.423 2 DEBUG nova.network.neutron [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:05:04 np0005481065 nova_compute[260935]: 2025-10-11 09:05:04.464 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Releasing lock "refresh_cache-4af522a5-ae89-49de-a956-50cfa7dd136a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:05:04 np0005481065 nova_compute[260935]: 2025-10-11 09:05:04.466 2 DEBUG nova.compute.manager [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 05:05:04 np0005481065 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 11 05:05:04 np0005481065 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d0000005b.scope: Consumed 2.001s CPU time.
Oct 11 05:05:04 np0005481065 systemd-machined[215705]: Machine qemu-104-instance-0000005b terminated.
Oct 11 05:05:04 np0005481065 nova_compute[260935]: 2025-10-11 09:05:04.697 2 INFO nova.virt.libvirt.driver [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance destroyed successfully.
Oct 11 05:05:04 np0005481065 nova_compute[260935]: 2025-10-11 09:05:04.698 2 DEBUG nova.objects.instance [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lazy-loading 'resources' on Instance uuid 4af522a5-ae89-49de-a956-50cfa7dd136a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029782708007748907 of space, bias 1.0, pg target 0.8934812402324672 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:05:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:05:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 11 05:05:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 11 05:05:05 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 11 05:05:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 3.2 KiB/s wr, 86 op/s
Oct 11 05:05:06 np0005481065 nova_compute[260935]: 2025-10-11 09:05:06.386 2 INFO nova.virt.libvirt.driver [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deleting instance files /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a_del
Oct 11 05:05:06 np0005481065 nova_compute[260935]: 2025-10-11 09:05:06.389 2 INFO nova.virt.libvirt.driver [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deletion of /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a_del complete
Oct 11 05:05:06 np0005481065 nova_compute[260935]: 2025-10-11 09:05:06.506 2 INFO nova.compute.manager [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 2.04 seconds to destroy the instance on the hypervisor.
Oct 11 05:05:06 np0005481065 nova_compute[260935]: 2025-10-11 09:05:06.507 2 DEBUG oslo.service.loopingcall [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 05:05:06 np0005481065 nova_compute[260935]: 2025-10-11 09:05:06.508 2 DEBUG nova.compute.manager [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 05:05:06 np0005481065 nova_compute[260935]: 2025-10-11 09:05:06.509 2 DEBUG nova.network.neutron [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 05:05:07 np0005481065 nova_compute[260935]: 2025-10-11 09:05:07.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:08 np0005481065 nova_compute[260935]: 2025-10-11 09:05:08.034 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173493.0332146, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:05:08 np0005481065 nova_compute[260935]: 2025-10-11 09:05:08.035 2 INFO nova.compute.manager [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Stopped (Lifecycle Event)
Oct 11 05:05:08 np0005481065 nova_compute[260935]: 2025-10-11 09:05:08.196 2 DEBUG nova.compute.manager [None req-de80e430-7f56-4f70-81c0-c30aafbeb67f - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:05:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 4.9 KiB/s wr, 129 op/s
Oct 11 05:05:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 11 05:05:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 11 05:05:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 11 05:05:08 np0005481065 nova_compute[260935]: 2025-10-11 09:05:08.885 2 DEBUG nova.network.neutron [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:05:08 np0005481065 nova_compute[260935]: 2025-10-11 09:05:08.943 2 DEBUG nova.network.neutron [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:05:08 np0005481065 nova_compute[260935]: 2025-10-11 09:05:08.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.031 2 INFO nova.compute.manager [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 2.52 seconds to deallocate network for instance.
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.143 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.144 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.275 2 DEBUG oslo_concurrency.processutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:05:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398104273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.734 2 DEBUG oslo_concurrency.processutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.741 2 DEBUG nova.compute.provider_tree [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:05:09 np0005481065 nova_compute[260935]: 2025-10-11 09:05:09.808 2 DEBUG nova.scheduler.client.report [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:05:10 np0005481065 nova_compute[260935]: 2025-10-11 09:05:10.067 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:10 np0005481065 nova_compute[260935]: 2025-10-11 09:05:10.165 2 INFO nova.scheduler.client.report [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Deleted allocations for instance 4af522a5-ae89-49de-a956-50cfa7dd136a
Oct 11 05:05:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Oct 11 05:05:10 np0005481065 nova_compute[260935]: 2025-10-11 09:05:10.426 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:12 np0005481065 nova_compute[260935]: 2025-10-11 09:05:12.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Oct 11 05:05:12 np0005481065 podman[351410]: 2025-10-11 09:05:12.795787205 +0000 UTC m=+0.093554492 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:05:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:13 np0005481065 nova_compute[260935]: 2025-10-11 09:05:13.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 KiB/s wr, 40 op/s
Oct 11 05:05:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:15.201 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Oct 11 05:05:17 np0005481065 nova_compute[260935]: 2025-10-11 09:05:17.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:05:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:18 np0005481065 nova_compute[260935]: 2025-10-11 09:05:18.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:19 np0005481065 nova_compute[260935]: 2025-10-11 09:05:19.694 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173504.6933997, 4af522a5-ae89-49de-a956-50cfa7dd136a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:19 np0005481065 nova_compute[260935]: 2025-10-11 09:05:19.695 2 INFO nova.compute.manager [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:05:19 np0005481065 podman[351429]: 2025-10-11 09:05:19.790082894 +0000 UTC m=+0.089868317 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 05:05:19 np0005481065 nova_compute[260935]: 2025-10-11 09:05:19.883 2 DEBUG nova.compute.manager [None req-05e39b50-7a52-494c-b8be-961d85758b5b - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:05:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:20.949 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:05:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:20.952 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:05:20 np0005481065 nova_compute[260935]: 2025-10-11 09:05:20.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:22 np0005481065 nova_compute[260935]: 2025-10-11 09:05:22.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:05:22 np0005481065 podman[351450]: 2025-10-11 09:05:22.784592308 +0000 UTC m=+0.085107950 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:05:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:22.954 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:22 np0005481065 podman[351470]: 2025-10-11 09:05:22.991909886 +0000 UTC m=+0.162505190 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 11 05:05:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:23 np0005481065 nova_compute[260935]: 2025-10-11 09:05:23.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.455 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.455 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.544 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.708 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.709 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.717 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:05:24 np0005481065 nova_compute[260935]: 2025-10-11 09:05:24.717 2 INFO nova.compute.claims [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:05:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:05:25 np0005481065 nova_compute[260935]: 2025-10-11 09:05:25.061 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313735075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:25 np0005481065 nova_compute[260935]: 2025-10-11 09:05:25.563 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:25 np0005481065 nova_compute[260935]: 2025-10-11 09:05:25.574 2 DEBUG nova.compute.provider_tree [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:05:25 np0005481065 nova_compute[260935]: 2025-10-11 09:05:25.821 2 DEBUG nova.scheduler.client.report [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:05:25 np0005481065 nova_compute[260935]: 2025-10-11 09:05:25.957 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:25 np0005481065 nova_compute[260935]: 2025-10-11 09:05:25.959 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.106 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.106 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.163 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.238 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:05:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.423 2 DEBUG nova.policy [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7730a035fdf47498398e20e5aaf9ba4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a5d578da5e746caa535eef295e1a67d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.531 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.533 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.534 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating image(s)#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.568 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.603 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.640 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.645 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:05:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2529107901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:05:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:05:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2529107901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.773 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.775 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.776 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.777 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.810 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:26 np0005481065 nova_compute[260935]: 2025-10-11 09:05:26.815 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.145 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.232 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] resizing rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.320 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Successfully created port: f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.378 2 DEBUG nova.objects.instance [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'migration_context' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.477 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.478 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Ensure instance console log exists: /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.479 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.480 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:27 np0005481065 nova_compute[260935]: 2025-10-11 09:05:27.481 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.184 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.185 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.240 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:05:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 420 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.343 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.343 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.354 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.355 2 INFO nova.compute.claims [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.452 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Successfully updated port: f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.498 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.498 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.499 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.545 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.546 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.599 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.599 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.600 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.652 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.652 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.653 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.754 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.807 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.857 2 DEBUG nova.compute.manager [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.858 2 DEBUG nova.compute.manager [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.858 2 DEBUG oslo_concurrency.lockutils [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.885 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.885 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.886 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.886 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:28 np0005481065 nova_compute[260935]: 2025-10-11 09:05:28.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659624736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.238 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.245 2 DEBUG nova.compute.provider_tree [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.290 2 DEBUG nova.scheduler.client.report [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.432 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.432 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.777 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.778 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:05:29 np0005481065 nova_compute[260935]: 2025-10-11 09:05:29.890 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.094 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.235 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.237 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.237 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Creating image(s)#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.269 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.306 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 420 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.342 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.347 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.455 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.457 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.458 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.458 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.490 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.495 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 81e243b6-6e9c-428e-a7b7-743f60f94728_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.761 2 DEBUG nova.policy [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '727ffe9af08040179ad1981f985d61cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72f2c977b4a147e3b45be9903a0afdf1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.829 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 81e243b6-6e9c-428e-a7b7-743f60f94728_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:30 np0005481065 nova_compute[260935]: 2025-10-11 09:05:30.915 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] resizing rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.047 2 DEBUG nova.objects.instance [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lazy-loading 'migration_context' on Instance uuid 81e243b6-6e9c-428e-a7b7-743f60f94728 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.094 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.095 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Ensure instance console log exists: /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.096 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.097 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.097 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.459 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.754 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.754 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance network_info: |[{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.755 2 DEBUG oslo_concurrency.lockutils [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.756 2 DEBUG nova.network.neutron [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.761 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start _get_guest_xml network_info=[{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.768 2 WARNING nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.773 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.774 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.778 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.779 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.779 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.780 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.780 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.781 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.781 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.781 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.782 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.782 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.783 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.783 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.783 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.784 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:05:31 np0005481065 nova_compute[260935]: 2025-10-11 09:05:31.788 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 458 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 MiB/s wr, 53 op/s
Oct 11 05:05:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115873053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.329 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.352 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.356 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525929425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.876 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.877 2 DEBUG nova.virt.libvirt.vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:26Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.878 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.879 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.880 2 DEBUG nova.objects.instance [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'pci_devices' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.917 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <uuid>a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</uuid>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <name>instance-0000005c</name>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-942927350</nova:name>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:05:31</nova:creationTime>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:user uuid="b7730a035fdf47498398e20e5aaf9ba4">tempest-ServerRescueTestJSONUnderV235-2035879439-project-member</nova:user>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:project uuid="0a5d578da5e746caa535eef295e1a67d">tempest-ServerRescueTestJSONUnderV235-2035879439</nova:project>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <nova:port uuid="f8dc388c-9e5a-43c2-8dd9-8fc28768ec31">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <entry name="serial">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <entry name="uuid">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:b4:bf:b7"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <target dev="tapf8dc388c-9e"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/console.log" append="off"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:05:32 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:05:32 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:05:32 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:05:32 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.918 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Preparing to wait for external event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.918 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.918 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.919 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.919 2 DEBUG nova.virt.libvirt.vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:26Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.920 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.920 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.920 2 DEBUG os_vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8dc388c-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf8dc388c-9e, col_values=(('external_ids', {'iface-id': 'f8dc388c-9e5a-43c2-8dd9-8fc28768ec31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:bf:b7', 'vm-uuid': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:32 np0005481065 NetworkManager[44960]: <info>  [1760173532.9293] manager: (tapf8dc388c-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:32 np0005481065 nova_compute[260935]: 2025-10-11 09:05:32.941 2 INFO os_vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e')#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.162 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.163 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.163 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No VIF found with MAC fa:16:3e:b4:bf:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.164 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Using config drive#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.201 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.280 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Successfully created port: 406fc421-7a8c-4974-8410-c31cab730ea9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.318 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.380 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.381 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.381 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.382 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.382 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.384 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.384 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.450 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.451 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.452 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.452 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.452 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290285277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.945 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:33 np0005481065 nova_compute[260935]: 2025-10-11 09:05:33.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.093 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.094 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.100 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.100 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.101 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.113 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.113 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.118 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.118 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:05:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.412 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.414 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3117MB free_disk=59.77231216430664GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.414 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.414 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.618 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.618 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.619 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.619 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.619 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.620 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 81e243b6-6e9c-428e-a7b7-743f60f94728 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.620 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.739 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating config drive at /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.749 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtg4xrk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.906 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:34 np0005481065 nova_compute[260935]: 2025-10-11 09:05:34.967 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtg4xrk" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.008 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.014 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.219 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.221 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deleting local config drive /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config because it was imported into RBD.#033[00m
Oct 11 05:05:35 np0005481065 kernel: tapf8dc388c-9e: entered promiscuous mode
Oct 11 05:05:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:35Z|00811|binding|INFO|Claiming lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for this chassis.
Oct 11 05:05:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:35Z|00812|binding|INFO|f8dc388c-9e5a-43c2-8dd9-8fc28768ec31: Claiming fa:16:3e:b4:bf:b7 10.100.0.12
Oct 11 05:05:35 np0005481065 NetworkManager[44960]: <info>  [1760173535.2883] manager: (tapf8dc388c-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:35 np0005481065 systemd-udevd[352048]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:05:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.340 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:05:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.342 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 bound to our chassis
Oct 11 05:05:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.343 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 05:05:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb4c68b-6d7e-46f8-a4a1-9e09afd70ad9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:05:35 np0005481065 NetworkManager[44960]: <info>  [1760173535.3563] device (tapf8dc388c-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:05:35 np0005481065 NetworkManager[44960]: <info>  [1760173535.3579] device (tapf8dc388c-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:05:35 np0005481065 systemd-machined[215705]: New machine qemu-105-instance-0000005c.
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:35Z|00813|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 ovn-installed in OVS
Oct 11 05:05:35 np0005481065 systemd[1]: Started Virtual Machine qemu-105-instance-0000005c.
Oct 11 05:05:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:35Z|00814|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 up in Southbound
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224136825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.419 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.427 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.482 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.568 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:05:35 np0005481065 nova_compute[260935]: 2025-10-11 09:05:35.568 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.178 2 DEBUG nova.network.neutron [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.179 2 DEBUG nova.network.neutron [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.235 2 DEBUG oslo_concurrency.lockutils [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.263 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Successfully updated port: 406fc421-7a8c-4974-8410-c31cab730ea9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:05:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.321 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.321 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquired lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.322 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.364 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173536.3634126, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.365 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Started (Lifecycle Event)
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.437 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.445 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173536.3635485, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.445 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Paused (Lifecycle Event)
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.489 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.493 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.503 2 DEBUG nova.compute.manager [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-changed-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.504 2 DEBUG nova.compute.manager [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Refreshing instance network info cache due to event network-changed-406fc421-7a8c-4974-8410-c31cab730ea9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.504 2 DEBUG oslo_concurrency.lockutils [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.522 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.588 2 DEBUG nova.compute.manager [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.589 2 DEBUG oslo_concurrency.lockutils [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.589 2 DEBUG oslo_concurrency.lockutils [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.590 2 DEBUG oslo_concurrency.lockutils [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.590 2 DEBUG nova.compute.manager [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Processing event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.591 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.599 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173536.5992155, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.600 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Resumed (Lifecycle Event)
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.602 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.606 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance spawned successfully.
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.607 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.736 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.743 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.753 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.753 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.760 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.765 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.766 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.767 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.768 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.769 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.770 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.841 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:05:36 np0005481065 nova_compute[260935]: 2025-10-11 09:05:36.843 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.072 2 INFO nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 10.54 seconds to spawn the instance on the hypervisor.
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.073 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.105 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.105 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.113 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.113 2 INFO nova.compute.claims [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.237 2 INFO nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 12.57 seconds to build instance.
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.301 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.490 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:05:37 np0005481065 nova_compute[260935]: 2025-10-11 09:05:37.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:05:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204038342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.083 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.090 2 DEBUG nova.compute.provider_tree [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.121 2 DEBUG nova.scheduler.client.report [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.130 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updating instance_info_cache with network_info: [{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.197 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.198 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.234 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Releasing lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.235 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance network_info: |[{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.236 2 DEBUG oslo_concurrency.lockutils [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.236 2 DEBUG nova.network.neutron [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Refreshing network info cache for port 406fc421-7a8c-4974-8410-c31cab730ea9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.243 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start _get_guest_xml network_info=[{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.250 2 WARNING nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.259 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.261 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.265 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.266 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.266 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.267 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.268 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.268 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.269 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.269 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.269 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.270 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.270 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.271 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.271 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.272 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.276 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.332 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.334 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.397 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.456 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.697 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.699 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.700 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Creating image(s)#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.734 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640757075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.769 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.791 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.793 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.823 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.849 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.853 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.887 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.888 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.889 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.889 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.913 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.917 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d813afc2-c844-45eb-b1ec-efbbf95498ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.970 2 DEBUG nova.policy [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac3a19a7426e4e51aa4f94b016decc82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0eed0ccef3b34c4db44e88ebe1aef9f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:05:38 np0005481065 nova_compute[260935]: 2025-10-11 09:05:38.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.005 2 DEBUG nova.compute.manager [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.006 2 DEBUG oslo_concurrency.lockutils [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.007 2 DEBUG oslo_concurrency.lockutils [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.008 2 DEBUG oslo_concurrency.lockutils [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.008 2 DEBUG nova.compute.manager [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.009 2 WARNING nova.compute.manager [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:05:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268169213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.318 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d813afc2-c844-45eb-b1ec-efbbf95498ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.355 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.356 2 DEBUG nova.virt.libvirt.vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2126714831',display_name='tempest-ServerPasswordTestJSON-server-2126714831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2126714831',id=93,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72f2c977b4a147e3b45be9903a0afdf1',ramdisk_id='',reservation_id='r-ig7c58si',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-624321625',owner_user_name='tempest-ServerPasswordTestJSON-624321625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:30Z,user_data=None,user_id='727ffe9af08040179ad1981f985d61cc',uuid=81e243b6-6e9c-428e-a7b7-743f60f94728,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.357 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converting VIF {"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.358 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.359 2 DEBUG nova.objects.instance [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81e243b6-6e9c-428e-a7b7-743f60f94728 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.403 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <uuid>81e243b6-6e9c-428e-a7b7-743f60f94728</uuid>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <name>instance-0000005d</name>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerPasswordTestJSON-server-2126714831</nova:name>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:05:38</nova:creationTime>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:user uuid="727ffe9af08040179ad1981f985d61cc">tempest-ServerPasswordTestJSON-624321625-project-member</nova:user>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:project uuid="72f2c977b4a147e3b45be9903a0afdf1">tempest-ServerPasswordTestJSON-624321625</nova:project>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <nova:port uuid="406fc421-7a8c-4974-8410-c31cab730ea9">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <entry name="serial">81e243b6-6e9c-428e-a7b7-743f60f94728</entry>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <entry name="uuid">81e243b6-6e9c-428e-a7b7-743f60f94728</entry>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/81e243b6-6e9c-428e-a7b7-743f60f94728_disk">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:fe:a8:86"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <target dev="tap406fc421-7a"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/console.log" append="off"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:05:39 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:05:39 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:05:39 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:05:39 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.409 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Preparing to wait for external event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.409 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.409 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.410 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.410 2 DEBUG nova.virt.libvirt.vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2126714831',display_name='tempest-ServerPasswordTestJSON-server-2126714831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2126714831',id=93,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72f2c977b4a147e3b45be9903a0afdf1',ramdisk_id='',reservation_id='r-ig7c58si',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-624321625',owner_user_name='tempest-ServerPasswordTestJSON-624321625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:30Z,user_data=None,user_id='727ffe9af08040179ad1981f985d61cc',uuid=81e243b6-6e9c-428e-a7b7-743f60f94728,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.410 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converting VIF {"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.411 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.411 2 DEBUG os_vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap406fc421-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap406fc421-7a, col_values=(('external_ids', {'iface-id': '406fc421-7a8c-4974-8410-c31cab730ea9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:a8:86', 'vm-uuid': '81e243b6-6e9c-428e-a7b7-743f60f94728'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.448 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] resizing rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:05:39 np0005481065 NetworkManager[44960]: <info>  [1760173539.4493] manager: (tap406fc421-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.505 2 INFO os_vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a')#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.511 2 INFO nova.compute.manager [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Rescuing#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.512 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.514 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.514 2 DEBUG nova.network.neutron [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.595 2 DEBUG nova.objects.instance [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lazy-loading 'migration_context' on Instance uuid d813afc2-c844-45eb-b1ec-efbbf95498ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.635 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.635 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.635 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] No VIF found with MAC fa:16:3e:fe:a8:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.636 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Using config drive#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.661 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.671 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.672 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Ensure instance console log exists: /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.673 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.673 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:39 np0005481065 nova_compute[260935]: 2025-10-11 09:05:39.673 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.106 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Creating config drive at /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.119 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy41g0n3w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.266 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy41g0n3w" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.307 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.320 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.452 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Successfully created port: 4d16ebf4-64d7-4476-8898-f62d856c729b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.552 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.553 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deleting local config drive /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config because it was imported into RBD.#033[00m
Oct 11 05:05:40 np0005481065 podman[352585]: 2025-10-11 09:05:40.626494127 +0000 UTC m=+0.092012828 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:05:40 np0005481065 kernel: tap406fc421-7a: entered promiscuous mode
Oct 11 05:05:40 np0005481065 NetworkManager[44960]: <info>  [1760173540.6326] manager: (tap406fc421-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Oct 11 05:05:40 np0005481065 systemd-udevd[352615]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:05:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:40Z|00815|binding|INFO|Claiming lport 406fc421-7a8c-4974-8410-c31cab730ea9 for this chassis.
Oct 11 05:05:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:40Z|00816|binding|INFO|406fc421-7a8c-4974-8410-c31cab730ea9: Claiming fa:16:3e:fe:a8:86 10.100.0.9
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:40 np0005481065 NetworkManager[44960]: <info>  [1760173540.7068] device (tap406fc421-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:05:40 np0005481065 NetworkManager[44960]: <info>  [1760173540.7083] device (tap406fc421-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.723 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:a8:86 10.100.0.9'], port_security=['fa:16:3e:fe:a8:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '81e243b6-6e9c-428e-a7b7-743f60f94728', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6273b59c-68e6-46d1-ac04-7afa70039429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72f2c977b4a147e3b45be9903a0afdf1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b8ac1d7-d0d6-46d1-9dc3-fb48cbd12dd5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbffcb7d-7dd8-41b0-903b-e11c21f9092c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=406fc421-7a8c-4974-8410-c31cab730ea9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.724 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 406fc421-7a8c-4974-8410-c31cab730ea9 in datapath 6273b59c-68e6-46d1-ac04-7afa70039429 bound to our chassis#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.727 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6273b59c-68e6-46d1-ac04-7afa70039429#033[00m
Oct 11 05:05:40 np0005481065 podman[352585]: 2025-10-11 09:05:40.732078531 +0000 UTC m=+0.197597252 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.745 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b212c04-666c-4438-bb04-79bea638e819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 systemd-machined[215705]: New machine qemu-106-instance-0000005d.
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.749 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6273b59c-61 in ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.752 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6273b59c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba13cf30-8caa-49bb-9c6e-6c24962cbdca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.754 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b863be8c-1a75-40d0-bdaa-f951ebc470d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 systemd[1]: Started Virtual Machine qemu-106-instance-0000005d.
Oct 11 05:05:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:40Z|00817|binding|INFO|Setting lport 406fc421-7a8c-4974-8410-c31cab730ea9 ovn-installed in OVS
Oct 11 05:05:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:40Z|00818|binding|INFO|Setting lport 406fc421-7a8c-4974-8410-c31cab730ea9 up in Southbound
Oct 11 05:05:40 np0005481065 nova_compute[260935]: 2025-10-11 09:05:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.772 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccd1650-66ce-4a80-a86e-039b33c0420f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f45e60-d27c-4a5a-90bf-55558cb1ea78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.844 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfce720-67f3-421a-bfea-838ff2156a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 NetworkManager[44960]: <info>  [1760173540.8520] manager: (tap6273b59c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/353)
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.850 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e19546-589a-4b14-ba8b-eb710fb4d98c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 systemd-udevd[352619]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.892 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7eec5254-3b0a-471e-a063-19f89cbdb4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.898 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0f7b63-bf78-412e-a0bc-6b8247f9df83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 NetworkManager[44960]: <info>  [1760173540.9283] device (tap6273b59c-60): carrier: link connected
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.941 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e90790-c741-4a52-87dd-57081552dd74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.965 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b12f712e-3351-46e5-85f6-0d5395d51a59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6273b59c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:44:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538563, 'reachable_time': 40063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352682, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.986 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe9fb4-9767-4fa4-9de5-c00a443b9b92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:448c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538563, 'tstamp': 538563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352690, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f124662-73db-41d4-8ca1-02df76ee12db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6273b59c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:44:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538563, 'reachable_time': 40063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352695, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.077 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eae6f127-610f-4f1d-9f85-bc5ec5fe978a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.155 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec86d8fd-c942-4dbf-8b98-e155e9b36490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.157 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6273b59c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.157 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.158 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6273b59c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:41 np0005481065 NetworkManager[44960]: <info>  [1760173541.1602] manager: (tap6273b59c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Oct 11 05:05:41 np0005481065 kernel: tap6273b59c-60: entered promiscuous mode
Oct 11 05:05:41 np0005481065 nova_compute[260935]: 2025-10-11 09:05:41.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6273b59c-60, col_values=(('external_ids', {'iface-id': '7e357d1d-ae9b-4fa9-ab3c-8d0e604ab2fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:41Z|00819|binding|INFO|Releasing lport 7e357d1d-ae9b-4fa9-ab3c-8d0e604ab2fd from this chassis (sb_readonly=0)
Oct 11 05:05:41 np0005481065 nova_compute[260935]: 2025-10-11 09:05:41.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.182 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6273b59c-68e6-46d1-ac04-7afa70039429.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6273b59c-68e6-46d1-ac04-7afa70039429.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d22087c3-7806-476b-b145-894ec6251c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.184 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-6273b59c-68e6-46d1-ac04-7afa70039429
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/6273b59c-68e6-46d1-ac04-7afa70039429.pid.haproxy
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 6273b59c-68e6-46d1-ac04-7afa70039429
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:05:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.185 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'env', 'PROCESS_TAG=haproxy-6273b59c-68e6-46d1-ac04-7afa70039429', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6273b59c-68e6-46d1-ac04-7afa70039429.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:05:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:05:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:05:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:41 np0005481065 podman[352853]: 2025-10-11 09:05:41.627495312 +0000 UTC m=+0.069970228 container create 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:05:41 np0005481065 systemd[1]: Started libpod-conmon-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15.scope.
Oct 11 05:05:41 np0005481065 podman[352853]: 2025-10-11 09:05:41.598728941 +0000 UTC m=+0.041203887 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:05:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38285c17898e47fb52b792120b4ee9bb04913e8a53c1010175d0aebd20a4d016/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:41 np0005481065 podman[352853]: 2025-10-11 09:05:41.717538663 +0000 UTC m=+0.160013599 container init 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:05:41 np0005481065 podman[352853]: 2025-10-11 09:05:41.724658196 +0000 UTC m=+0.167133112 container start 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:05:41 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : New worker (352947) forked
Oct 11 05:05:41 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : Loading success.
Oct 11 05:05:41 np0005481065 nova_compute[260935]: 2025-10-11 09:05:41.952 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173541.9510348, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:41 np0005481065 nova_compute[260935]: 2025-10-11 09:05:41.953 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Started (Lifecycle Event)#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.011 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.016 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173541.9513943, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.016 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.031 2 DEBUG nova.network.neutron [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updated VIF entry in instance network info cache for port 406fc421-7a8c-4974-8410-c31cab730ea9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.032 2 DEBUG nova.network.neutron [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updating instance_info_cache with network_info: [{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.080 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.083 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.095 2 DEBUG oslo_concurrency.lockutils [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.189 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.265 2 DEBUG nova.compute.manager [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.265 2 DEBUG oslo_concurrency.lockutils [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.266 2 DEBUG oslo_concurrency.lockutils [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.266 2 DEBUG oslo_concurrency.lockutils [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.266 2 DEBUG nova.compute.manager [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Processing event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.267 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.272 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173542.2724977, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.274 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.276 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.281 2 INFO nova.virt.libvirt.driver [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance spawned successfully.#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.282 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:05:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 498 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 136 op/s
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.387 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.391 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.422 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.423 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.423 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.424 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.425 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.426 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.461 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.554 2 INFO nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 12.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.556 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:42 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d079bdc9-6cad-42ad-9794-2d0b0e94d0da does not exist
Oct 11 05:05:42 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 93da4e52-7f4c-4939-9ca3-c867c9a0be41 does not exist
Oct 11 05:05:42 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 17af2677-1c92-468e-88c7-1f1a362f899c does not exist
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:05:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.638 2 DEBUG nova.network.neutron [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.671 2 INFO nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 14.35 seconds to build instance.#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.702 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:05:42 np0005481065 nova_compute[260935]: 2025-10-11 09:05:42.723 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:42 np0005481065 podman[353090]: 2025-10-11 09:05:42.968535334 +0000 UTC m=+0.085426460 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 11 05:05:43 np0005481065 nova_compute[260935]: 2025-10-11 09:05:43.156 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.430311067 +0000 UTC m=+0.054246880 container create 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:05:43 np0005481065 systemd[1]: Started libpod-conmon-63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a.scope.
Oct 11 05:05:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.407041702 +0000 UTC m=+0.030977525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.509362773 +0000 UTC m=+0.133298616 container init 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.518607077 +0000 UTC m=+0.142542920 container start 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.523149257 +0000 UTC m=+0.147085110 container attach 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:05:43 np0005481065 affectionate_mcclintock[353188]: 167 167
Oct 11 05:05:43 np0005481065 systemd[1]: libpod-63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a.scope: Deactivated successfully.
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.527889572 +0000 UTC m=+0.151825435 container died 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:05:43 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:05:43 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:43 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:05:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-04b81ab6612f6cc4bebc39542880487a00c371e56db58da4af147f804723695f-merged.mount: Deactivated successfully.
Oct 11 05:05:43 np0005481065 podman[353172]: 2025-10-11 09:05:43.576245783 +0000 UTC m=+0.200181586 container remove 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:05:43 np0005481065 systemd[1]: libpod-conmon-63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a.scope: Deactivated successfully.
Oct 11 05:05:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:43 np0005481065 podman[353211]: 2025-10-11 09:05:43.818112758 +0000 UTC m=+0.058007067 container create 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:05:43 np0005481065 systemd[1]: Started libpod-conmon-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope.
Oct 11 05:05:43 np0005481065 podman[353211]: 2025-10-11 09:05:43.792849427 +0000 UTC m=+0.032743776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:05:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:43 np0005481065 podman[353211]: 2025-10-11 09:05:43.923461805 +0000 UTC m=+0.163356104 container init 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 05:05:43 np0005481065 podman[353211]: 2025-10-11 09:05:43.935639663 +0000 UTC m=+0.175533952 container start 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:05:43 np0005481065 podman[353211]: 2025-10-11 09:05:43.938734881 +0000 UTC m=+0.178629170 container attach 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:05:43 np0005481065 nova_compute[260935]: 2025-10-11 09:05:43.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 513 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.409 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Successfully updated port: 4d16ebf4-64d7-4476-8898-f62d856c729b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.507 2 DEBUG nova.compute.manager [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.507 2 DEBUG oslo_concurrency.lockutils [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.508 2 DEBUG oslo_concurrency.lockutils [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.513 2 DEBUG oslo_concurrency.lockutils [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.513 2 DEBUG nova.compute.manager [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] No waiting events found dispatching network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.513 2 WARNING nova.compute.manager [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received unexpected event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.517 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.517 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquired lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:44 np0005481065 nova_compute[260935]: 2025-10-11 09:05:44.517 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:05:45 np0005481065 funny_swanson[353228]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:05:45 np0005481065 funny_swanson[353228]: --> relative data size: 1.0
Oct 11 05:05:45 np0005481065 funny_swanson[353228]: --> All data devices are unavailable
Oct 11 05:05:45 np0005481065 systemd[1]: libpod-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope: Deactivated successfully.
Oct 11 05:05:45 np0005481065 systemd[1]: libpod-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope: Consumed 1.256s CPU time.
Oct 11 05:05:45 np0005481065 podman[353257]: 2025-10-11 09:05:45.336623697 +0000 UTC m=+0.051814220 container died 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:05:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78-merged.mount: Deactivated successfully.
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.372 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:05:45 np0005481065 podman[353257]: 2025-10-11 09:05:45.403418623 +0000 UTC m=+0.118609096 container remove 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:05:45 np0005481065 systemd[1]: libpod-conmon-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope: Deactivated successfully.
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.830 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.831 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.832 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.832 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.832 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.834 2 INFO nova.compute.manager [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Terminating instance#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.836 2 DEBUG nova.compute.manager [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:05:45 np0005481065 kernel: tap406fc421-7a (unregistering): left promiscuous mode
Oct 11 05:05:45 np0005481065 NetworkManager[44960]: <info>  [1760173545.9039] device (tap406fc421-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:05:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:45Z|00820|binding|INFO|Releasing lport 406fc421-7a8c-4974-8410-c31cab730ea9 from this chassis (sb_readonly=0)
Oct 11 05:05:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:45Z|00821|binding|INFO|Setting lport 406fc421-7a8c-4974-8410-c31cab730ea9 down in Southbound
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:45Z|00822|binding|INFO|Removing iface tap406fc421-7a ovn-installed in OVS
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:45 np0005481065 nova_compute[260935]: 2025-10-11 09:05:45.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.960 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:a8:86 10.100.0.9'], port_security=['fa:16:3e:fe:a8:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '81e243b6-6e9c-428e-a7b7-743f60f94728', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6273b59c-68e6-46d1-ac04-7afa70039429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72f2c977b4a147e3b45be9903a0afdf1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b8ac1d7-d0d6-46d1-9dc3-fb48cbd12dd5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbffcb7d-7dd8-41b0-903b-e11c21f9092c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=406fc421-7a8c-4974-8410-c31cab730ea9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:05:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.961 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 406fc421-7a8c-4974-8410-c31cab730ea9 in datapath 6273b59c-68e6-46d1-ac04-7afa70039429 unbound from our chassis#033[00m
Oct 11 05:05:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.963 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6273b59c-68e6-46d1-ac04-7afa70039429, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:05:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ee67e4-1a25-41da-bc32-5881a726f2fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.965 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 namespace which is not needed anymore#033[00m
Oct 11 05:05:45 np0005481065 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 11 05:05:45 np0005481065 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005d.scope: Consumed 4.566s CPU time.
Oct 11 05:05:45 np0005481065 systemd-machined[215705]: Machine qemu-106-instance-0000005d terminated.
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.084 2 INFO nova.virt.libvirt.driver [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance destroyed successfully.#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.084 2 DEBUG nova.objects.instance [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lazy-loading 'resources' on Instance uuid 81e243b6-6e9c-428e-a7b7-743f60f94728 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:46 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : haproxy version is 2.8.14-c23fe91
Oct 11 05:05:46 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : path to executable is /usr/sbin/haproxy
Oct 11 05:05:46 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [WARNING]  (352930) : Exiting Master process...
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.140 2 DEBUG nova.virt.libvirt.vif [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2126714831',display_name='tempest-ServerPasswordTestJSON-server-2126714831',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2126714831',id=93,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:05:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72f2c977b4a147e3b45be9903a0afdf1',ramdisk_id='',reservation_id='r-ig7c58si',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-624321625',owner_user_name='tempest-ServerPasswordTestJSON-624321625-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:05:45Z,user_data=None,user_id='727ffe9af08040179ad1981f985d61cc',uuid=81e243b6-6e9c-428e-a7b7-743f60f94728,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.141 2 DEBUG nova.network.os_vif_util [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converting VIF {"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.142 2 DEBUG nova.network.os_vif_util [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:46 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [ALERT]    (352930) : Current worker (352947) exited with code 143 (Terminated)
Oct 11 05:05:46 np0005481065 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [WARNING]  (352930) : All workers exited. Exiting... (0)
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.142 2 DEBUG os_vif [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:05:46 np0005481065 systemd[1]: libpod-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15.scope: Deactivated successfully.
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap406fc421-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:05:46 np0005481065 podman[353438]: 2025-10-11 09:05:46.151058605 +0000 UTC m=+0.047313971 container died 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.154 2 INFO os_vif [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a')#033[00m
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.175156733 +0000 UTC m=+0.059551021 container create dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:05:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15-userdata-shm.mount: Deactivated successfully.
Oct 11 05:05:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-38285c17898e47fb52b792120b4ee9bb04913e8a53c1010175d0aebd20a4d016-merged.mount: Deactivated successfully.
Oct 11 05:05:46 np0005481065 podman[353438]: 2025-10-11 09:05:46.208247428 +0000 UTC m=+0.104502794 container cleanup 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:05:46 np0005481065 systemd[1]: Started libpod-conmon-dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd.scope.
Oct 11 05:05:46 np0005481065 systemd[1]: libpod-conmon-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15.scope: Deactivated successfully.
Oct 11 05:05:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.140780202 +0000 UTC m=+0.025174510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.251907694 +0000 UTC m=+0.136301992 container init dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.260392037 +0000 UTC m=+0.144786335 container start dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.264226116 +0000 UTC m=+0.148620404 container attach dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:05:46 np0005481065 jovial_cannon[353501]: 167 167
Oct 11 05:05:46 np0005481065 systemd[1]: libpod-dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd.scope: Deactivated successfully.
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.267334195 +0000 UTC m=+0.151728503 container died dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:05:46 np0005481065 podman[353505]: 2025-10-11 09:05:46.294797639 +0000 UTC m=+0.053289612 container remove 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:05:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f9c57ab65fdaa015f4f1ce372af127816abc7ceaf7e71b73a7076ddc42c7637c-merged.mount: Deactivated successfully.
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.306 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c28ed99d-82eb-4d85-a160-db6b9f23e0bd]: (4, ('Sat Oct 11 09:05:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 (684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15)\n684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15\nSat Oct 11 09:05:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 (684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15)\n684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.310 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[39681526-5e33-4ec5-b4b0-de7e1938c712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6273b59c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:46 np0005481065 kernel: tap6273b59c-60: left promiscuous mode
Oct 11 05:05:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 513 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17098208-6736-4095-b445-8902fce379de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 podman[353439]: 2025-10-11 09:05:46.325833025 +0000 UTC m=+0.210227303 container remove dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.353 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb1036f-6a35-41a2-90b3-b53212d93745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[91fb4888-42a1-4ba7-ad93-c93f108722f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 systemd[1]: libpod-conmon-dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd.scope: Deactivated successfully.
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.376 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a60f09-bb42-4cba-9709-0b67c7616d5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538553, 'reachable_time': 21140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353534, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 systemd[1]: run-netns-ovnmeta\x2d6273b59c\x2d68e6\x2d46d1\x2dac04\x2d7afa70039429.mount: Deactivated successfully.
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.379 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:05:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.379 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cec0f5-3a71-4adb-8ae9-2bfc404f2f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:46 np0005481065 podman[353541]: 2025-10-11 09:05:46.533624507 +0000 UTC m=+0.056663419 container create 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:05:46 np0005481065 systemd[1]: Started libpod-conmon-8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988.scope.
Oct 11 05:05:46 np0005481065 podman[353541]: 2025-10-11 09:05:46.515592002 +0000 UTC m=+0.038630944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:05:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.615 2 INFO nova.virt.libvirt.driver [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deleting instance files /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728_del#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.616 2 INFO nova.virt.libvirt.driver [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deletion of /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728_del complete#033[00m
Oct 11 05:05:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:46 np0005481065 podman[353541]: 2025-10-11 09:05:46.637647426 +0000 UTC m=+0.160686378 container init 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:05:46 np0005481065 podman[353541]: 2025-10-11 09:05:46.65075097 +0000 UTC m=+0.173789892 container start 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:05:46 np0005481065 podman[353541]: 2025-10-11 09:05:46.65844496 +0000 UTC m=+0.181483882 container attach 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.849 2 DEBUG nova.compute.manager [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-changed-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.849 2 DEBUG nova.compute.manager [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Refreshing instance network info cache due to event network-changed-4d16ebf4-64d7-4476-8898-f62d856c729b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.849 2 DEBUG oslo_concurrency.lockutils [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.852 2 INFO nova.compute.manager [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.853 2 DEBUG oslo.service.loopingcall [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.853 2 DEBUG nova.compute.manager [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:05:46 np0005481065 nova_compute[260935]: 2025-10-11 09:05:46.853 2 DEBUG nova.network.neutron [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:05:47 np0005481065 great_wing[353558]: {
Oct 11 05:05:47 np0005481065 great_wing[353558]:    "0": [
Oct 11 05:05:47 np0005481065 great_wing[353558]:        {
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "devices": [
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "/dev/loop3"
Oct 11 05:05:47 np0005481065 great_wing[353558]:            ],
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_name": "ceph_lv0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_size": "21470642176",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "name": "ceph_lv0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "tags": {
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cluster_name": "ceph",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.crush_device_class": "",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.encrypted": "0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osd_id": "0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.type": "block",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.vdo": "0"
Oct 11 05:05:47 np0005481065 great_wing[353558]:            },
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "type": "block",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "vg_name": "ceph_vg0"
Oct 11 05:05:47 np0005481065 great_wing[353558]:        }
Oct 11 05:05:47 np0005481065 great_wing[353558]:    ],
Oct 11 05:05:47 np0005481065 great_wing[353558]:    "1": [
Oct 11 05:05:47 np0005481065 great_wing[353558]:        {
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "devices": [
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "/dev/loop4"
Oct 11 05:05:47 np0005481065 great_wing[353558]:            ],
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_name": "ceph_lv1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_size": "21470642176",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "name": "ceph_lv1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "tags": {
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cluster_name": "ceph",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.crush_device_class": "",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.encrypted": "0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osd_id": "1",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.type": "block",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.vdo": "0"
Oct 11 05:05:47 np0005481065 great_wing[353558]:            },
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "type": "block",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "vg_name": "ceph_vg1"
Oct 11 05:05:47 np0005481065 great_wing[353558]:        }
Oct 11 05:05:47 np0005481065 great_wing[353558]:    ],
Oct 11 05:05:47 np0005481065 great_wing[353558]:    "2": [
Oct 11 05:05:47 np0005481065 great_wing[353558]:        {
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "devices": [
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "/dev/loop5"
Oct 11 05:05:47 np0005481065 great_wing[353558]:            ],
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_name": "ceph_lv2",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_size": "21470642176",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "name": "ceph_lv2",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "tags": {
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.cluster_name": "ceph",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.crush_device_class": "",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.encrypted": "0",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osd_id": "2",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.type": "block",
Oct 11 05:05:47 np0005481065 great_wing[353558]:                "ceph.vdo": "0"
Oct 11 05:05:47 np0005481065 great_wing[353558]:            },
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "type": "block",
Oct 11 05:05:47 np0005481065 great_wing[353558]:            "vg_name": "ceph_vg2"
Oct 11 05:05:47 np0005481065 great_wing[353558]:        }
Oct 11 05:05:47 np0005481065 great_wing[353558]:    ]
Oct 11 05:05:47 np0005481065 great_wing[353558]: }
Oct 11 05:05:47 np0005481065 systemd[1]: libpod-8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988.scope: Deactivated successfully.
Oct 11 05:05:47 np0005481065 podman[353541]: 2025-10-11 09:05:47.484528752 +0000 UTC m=+1.007567694 container died 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:05:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d-merged.mount: Deactivated successfully.
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.564 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updating instance_info_cache with network_info: [{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:47 np0005481065 podman[353541]: 2025-10-11 09:05:47.57728395 +0000 UTC m=+1.100322902 container remove 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:05:47 np0005481065 systemd[1]: libpod-conmon-8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988.scope: Deactivated successfully.
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.705 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Releasing lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.706 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance network_info: |[{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.707 2 DEBUG oslo_concurrency.lockutils [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.707 2 DEBUG nova.network.neutron [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Refreshing network info cache for port 4d16ebf4-64d7-4476-8898-f62d856c729b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.711 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start _get_guest_xml network_info=[{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.719 2 WARNING nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.727 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.728 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.732 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.732 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.732 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.733 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.733 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:05:47 np0005481065 nova_compute[260935]: 2025-10-11 09:05:47.739 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357432966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.237 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.262 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.272 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 467 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.346394646 +0000 UTC m=+0.062230408 container create 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:05:48 np0005481065 systemd[1]: Started libpod-conmon-598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a.scope.
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.317015897 +0000 UTC m=+0.032851679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:05:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.437 2 DEBUG nova.network.neutron [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.442632783 +0000 UTC m=+0.158468615 container init 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.454122331 +0000 UTC m=+0.169958163 container start 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:05:48 np0005481065 cranky_ptolemy[353779]: 167 167
Oct 11 05:05:48 np0005481065 systemd[1]: libpod-598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a.scope: Deactivated successfully.
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.467925115 +0000 UTC m=+0.183760917 container attach 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.468378938 +0000 UTC m=+0.184214740 container died 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:05:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2daa7a28f2dc543d1c8305c45e7d36d5ff7dedcddcbe923d13161148674a30f9-merged.mount: Deactivated successfully.
Oct 11 05:05:48 np0005481065 podman[353762]: 2025-10-11 09:05:48.512950801 +0000 UTC m=+0.228786583 container remove 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:05:48 np0005481065 systemd[1]: libpod-conmon-598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a.scope: Deactivated successfully.
Oct 11 05:05:48 np0005481065 podman[353821]: 2025-10-11 09:05:48.726991761 +0000 UTC m=+0.044769399 container create 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.739 2 INFO nova.compute.manager [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 1.89 seconds to deallocate network for instance.#033[00m
Oct 11 05:05:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419311604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:48 np0005481065 systemd[1]: Started libpod-conmon-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope.
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.779 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.781 2 DEBUG nova.virt.libvirt.vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-393316733',id=94,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0eed0ccef3b34c4db44e88ebe1aef9f0',ramdisk_id='',reservation_id='r-ikgw00lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:38Z,user_data=None,user_id='ac3a19a7426e4e51aa4f94b016decc82',uuid=d813afc2-c844-45eb-b1ec-efbbf95498ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.781 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converting VIF {"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.782 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.783 2 DEBUG nova.objects.instance [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid d813afc2-c844-45eb-b1ec-efbbf95498ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:48 np0005481065 podman[353821]: 2025-10-11 09:05:48.709207983 +0000 UTC m=+0.026985631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:05:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:48 np0005481065 podman[353821]: 2025-10-11 09:05:48.838315689 +0000 UTC m=+0.156093407 container init 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:05:48 np0005481065 podman[353821]: 2025-10-11 09:05:48.853366499 +0000 UTC m=+0.171144137 container start 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:05:48 np0005481065 podman[353821]: 2025-10-11 09:05:48.857657591 +0000 UTC m=+0.175435309 container attach 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.870 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <uuid>d813afc2-c844-45eb-b1ec-efbbf95498ef</uuid>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <name>instance-0000005e</name>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-393316733</nova:name>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:05:47</nova:creationTime>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:user uuid="ac3a19a7426e4e51aa4f94b016decc82">tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member</nova:user>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:project uuid="0eed0ccef3b34c4db44e88ebe1aef9f0">tempest-ServersNegativeTestMultiTenantJSON-1815816531</nova:project>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <nova:port uuid="4d16ebf4-64d7-4476-8898-f62d856c729b">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <entry name="serial">d813afc2-c844-45eb-b1ec-efbbf95498ef</entry>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <entry name="uuid">d813afc2-c844-45eb-b1ec-efbbf95498ef</entry>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d813afc2-c844-45eb-b1ec-efbbf95498ef_disk">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:62:2f:b2"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <target dev="tap4d16ebf4-64"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/console.log" append="off"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:05:48 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:05:48 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:05:48 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:05:48 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.871 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Preparing to wait for external event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.871 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.872 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.872 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.873 2 DEBUG nova.virt.libvirt.vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-393316733',id=94,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0eed0ccef3b34c4db44e88ebe1aef9f0',ramdisk_id='',reservation_id='r-ikgw00lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:38Z,user_data=None,user_id='ac3a19a7426e4e51aa4f94b016decc82',uuid=d813afc2-c844-45eb-b1ec-efbbf95498ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.873 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converting VIF {"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.874 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.874 2 DEBUG os_vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d16ebf4-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.883 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d16ebf4-64, col_values=(('external_ids', {'iface-id': '4d16ebf4-64d7-4476-8898-f62d856c729b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:2f:b2', 'vm-uuid': 'd813afc2-c844-45eb-b1ec-efbbf95498ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:48 np0005481065 NetworkManager[44960]: <info>  [1760173548.8859] manager: (tap4d16ebf4-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.896 2 INFO os_vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64')#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.906 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.906 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:48 np0005481065 nova_compute[260935]: 2025-10-11 09:05:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.015 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.015 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.016 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] No VIF found with MAC fa:16:3e:62:2f:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.016 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Using config drive#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.040 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.059 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-unplugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.060 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.060 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.061 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.062 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] No waiting events found dispatching network-vif-unplugged-406fc421-7a8c-4974-8410-c31cab730ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.063 2 WARNING nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received unexpected event network-vif-unplugged-406fc421-7a8c-4974-8410-c31cab730ea9 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.063 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.063 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] No waiting events found dispatching network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.064 2 WARNING nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received unexpected event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-deleted-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.138 2 DEBUG oslo_concurrency.processutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:05:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447797774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.615 2 DEBUG oslo_concurrency.processutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.622 2 DEBUG nova.compute.provider_tree [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.686 2 DEBUG nova.scheduler.client.report [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.710 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Creating config drive at /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.716 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46qvn6gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.768 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.850 2 INFO nova.scheduler.client.report [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Deleted allocations for instance 81e243b6-6e9c-428e-a7b7-743f60f94728#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.870 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46qvn6gw" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]: {
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "osd_id": 2,
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "type": "bluestore"
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:    },
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "osd_id": 0,
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "type": "bluestore"
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:    },
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "osd_id": 1,
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:        "type": "bluestore"
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]:    }
Oct 11 05:05:49 np0005481065 loving_meninsky[353839]: }
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.909 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:49 np0005481065 nova_compute[260935]: 2025-10-11 09:05:49.914 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:49 np0005481065 systemd[1]: libpod-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope: Deactivated successfully.
Oct 11 05:05:49 np0005481065 systemd[1]: libpod-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope: Consumed 1.067s CPU time.
Oct 11 05:05:49 np0005481065 conmon[353839]: conmon 4b65afe3160030538973 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope/container/memory.events
Oct 11 05:05:49 np0005481065 podman[353821]: 2025-10-11 09:05:49.938494735 +0000 UTC m=+1.256272393 container died 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:05:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd-merged.mount: Deactivated successfully.
Oct 11 05:05:50 np0005481065 podman[353821]: 2025-10-11 09:05:50.01469003 +0000 UTC m=+1.332467668 container remove 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:05:50 np0005481065 systemd[1]: libpod-conmon-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope: Deactivated successfully.
Oct 11 05:05:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:05:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:05:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:50 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5a7f2fab-7892-4c01-b33b-f02aa5df2a09 does not exist
Oct 11 05:05:50 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 14f6292b-9012-445c-84ee-22f08709343a does not exist
Oct 11 05:05:50 np0005481065 podman[353937]: 2025-10-11 09:05:50.087675004 +0000 UTC m=+0.102487557 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.087 2 DEBUG nova.network.neutron [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updated VIF entry in instance network info cache for port 4d16ebf4-64d7-4476-8898-f62d856c729b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.088 2 DEBUG nova.network.neutron [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updating instance_info_cache with network_info: [{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.122 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.122 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deleting local config drive /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config because it was imported into RBD.#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.154 2 DEBUG oslo_concurrency.lockutils [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:05:50 np0005481065 NetworkManager[44960]: <info>  [1760173550.1917] manager: (tap4d16ebf4-64): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct 11 05:05:50 np0005481065 kernel: tap4d16ebf4-64: entered promiscuous mode
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00823|binding|INFO|Claiming lport 4d16ebf4-64d7-4476-8898-f62d856c729b for this chassis.
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00824|binding|INFO|4d16ebf4-64d7-4476-8898-f62d856c729b: Claiming fa:16:3e:62:2f:b2 10.100.0.7
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.206 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.214 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2f:b2 10.100.0.7'], port_security=['fa:16:3e:62:2f:b2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd813afc2-c844-45eb-b1ec-efbbf95498ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da65ea75-f60d-4001-894b-df4408baa99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0eed0ccef3b34c4db44e88ebe1aef9f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27e50c86-89d7-49fd-89cc-35770f4ed881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f380b3-4aca-47a6-b068-b7f58cfe8d61, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=4d16ebf4-64d7-4476-8898-f62d856c729b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.215 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 4d16ebf4-64d7-4476-8898-f62d856c729b in datapath da65ea75-f60d-4001-894b-df4408baa99c bound to our chassis#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.218 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da65ea75-f60d-4001-894b-df4408baa99c#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.236 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a58ec2cb-f719-4aa9-8bef-d7ca264a5d6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.237 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda65ea75-f1 in ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.239 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda65ea75-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[589ac9a0-848a-4572-9746-ecf16ebb7128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d44659-6718-4cab-a07c-899416116da6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 systemd-udevd[354050]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:05:50 np0005481065 systemd-machined[215705]: New machine qemu-107-instance-0000005e.
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.255 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[74fbf0d1-430c-4825-a83e-100acef2e4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 NetworkManager[44960]: <info>  [1760173550.2573] device (tap4d16ebf4-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:05:50 np0005481065 NetworkManager[44960]: <info>  [1760173550.2585] device (tap4d16ebf4-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:05:50 np0005481065 systemd[1]: Started Virtual Machine qemu-107-instance-0000005e.
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37f24eb9-cbda-4363-9f36-25279f374f61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00825|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00826|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:05:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 467 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.323 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[33b37dc6-28bf-45f2-9ad2-3885c9c242fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 systemd-udevd[354053]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:05:50 np0005481065 NetworkManager[44960]: <info>  [1760173550.3411] manager: (tapda65ea75-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b70f32-fc63-454f-87ce-164a894b2900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00827|binding|INFO|Setting lport 4d16ebf4-64d7-4476-8898-f62d856c729b ovn-installed in OVS
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00828|binding|INFO|Setting lport 4d16ebf4-64d7-4476-8898-f62d856c729b up in Southbound
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.391 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8a23a5-9025-4087-883d-d5aa10e6bf65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.397 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de01ff64-d1da-448e-9026-2f7c2524d07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 NetworkManager[44960]: <info>  [1760173550.4223] device (tapda65ea75-f0): carrier: link connected
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.430 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[039d6108-2bb1-4fa2-8dd1-9e77fe07cb1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.450 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[929811e4-460f-433d-844b-edde672d684a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda65ea75-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:1a:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539512, 'reachable_time': 23095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354082, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.475 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2886cd-5f28-4421-8f7d-cdd47c4a9508]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:1ab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539512, 'tstamp': 539512}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354083, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.497 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18df5aaa-a0ba-4f8a-8104-b59d3f2e4129]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda65ea75-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:1a:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539512, 'reachable_time': 23095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354084, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.546 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e27bcf-aba4-46ab-974a-e5ca775ce805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.625 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd68cf19-8c4f-4ca7-9126-33d54ced22c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.627 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda65ea75-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.628 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.628 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda65ea75-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:50 np0005481065 NetworkManager[44960]: <info>  [1760173550.6319] manager: (tapda65ea75-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 kernel: tapda65ea75-f0: entered promiscuous mode
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.636 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda65ea75-f0, col_values=(('external_ids', {'iface-id': '938161fc-259e-4821-9e55-2e083e89770d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:50Z|00829|binding|INFO|Releasing lport 938161fc-259e-4821-9e55-2e083e89770d from this chassis (sb_readonly=0)
Oct 11 05:05:50 np0005481065 nova_compute[260935]: 2025-10-11 09:05:50.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.671 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da65ea75-f60d-4001-894b-df4408baa99c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da65ea75-f60d-4001-894b-df4408baa99c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.672 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa53667-f6d9-49cc-b551-a1d3d37db7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.673 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-da65ea75-f60d-4001-894b-df4408baa99c
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/da65ea75-f60d-4001-894b-df4408baa99c.pid.haproxy
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID da65ea75-f60d-4001-894b-df4408baa99c
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:05:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.674 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'env', 'PROCESS_TAG=haproxy-da65ea75-f60d-4001-894b-df4408baa99c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da65ea75-f60d-4001-894b-df4408baa99c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:05:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:05:51 np0005481065 podman[354158]: 2025-10-11 09:05:51.113437386 +0000 UTC m=+0.056080462 container create fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:05:51 np0005481065 systemd[1]: Started libpod-conmon-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1.scope.
Oct 11 05:05:51 np0005481065 podman[354158]: 2025-10-11 09:05:51.085170229 +0000 UTC m=+0.027813325 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:05:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:05:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7944f1711c8b20f7abfb4df9886afc904490f332f531da6a73509c2289aa5d79/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:05:51 np0005481065 podman[354158]: 2025-10-11 09:05:51.198872935 +0000 UTC m=+0.141516041 container init fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:05:51 np0005481065 podman[354158]: 2025-10-11 09:05:51.204672351 +0000 UTC m=+0.147315427 container start fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:05:51 np0005481065 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : New worker (354179) forked
Oct 11 05:05:51 np0005481065 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : Loading success.
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.233 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173551.232891, d813afc2-c844-45eb-b1ec-efbbf95498ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.233 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Started (Lifecycle Event)#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.331 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.336 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173551.233206, d813afc2-c844-45eb-b1ec-efbbf95498ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.336 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.380 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.383 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:05:51 np0005481065 nova_compute[260935]: 2025-10-11 09:05:51.430 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:05:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 490 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 243 op/s
Oct 11 05:05:53 np0005481065 nova_compute[260935]: 2025-10-11 09:05:53.271 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 05:05:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:53 np0005481065 podman[354189]: 2025-10-11 09:05:53.823367035 +0000 UTC m=+0.111097252 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:05:53 np0005481065 podman[354190]: 2025-10-11 09:05:53.865065606 +0000 UTC m=+0.148325055 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:05:53 np0005481065 nova_compute[260935]: 2025-10-11 09:05:53.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:54 np0005481065 nova_compute[260935]: 2025-10-11 09:05:54.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:05:54
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'volumes', '.rgw.root']
Oct 11 05:05:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:05:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.292 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 05:05:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.443 2 DEBUG nova.compute.manager [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.444 2 DEBUG oslo_concurrency.lockutils [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.444 2 DEBUG oslo_concurrency.lockutils [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.444 2 DEBUG oslo_concurrency.lockutils [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.445 2 DEBUG nova.compute.manager [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Processing event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.445 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.450 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173556.4498963, d813afc2-c844-45eb-b1ec-efbbf95498ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.450 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.451 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.454 2 INFO nova.virt.libvirt.driver [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance spawned successfully.#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.455 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.495 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.500 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.518 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.518 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.519 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.519 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.520 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.521 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.571 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:05:56 np0005481065 kernel: tapf8dc388c-9e (unregistering): left promiscuous mode
Oct 11 05:05:56 np0005481065 NetworkManager[44960]: <info>  [1760173556.5891] device (tapf8dc388c-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:05:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:56Z|00830|binding|INFO|Releasing lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 from this chassis (sb_readonly=0)
Oct 11 05:05:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:56Z|00831|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 down in Southbound
Oct 11 05:05:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:05:56Z|00832|binding|INFO|Removing iface tapf8dc388c-9e ovn-installed in OVS
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.627 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:05:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.628 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 unbound from our chassis#033[00m
Oct 11 05:05:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.629 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:05:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.630 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c6a0ea-e5d2-4bac-bfc7-27df2a07ff4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.647 2 INFO nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 17.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.647 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:05:56 np0005481065 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 11 05:05:56 np0005481065 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005c.scope: Consumed 14.693s CPU time.
Oct 11 05:05:56 np0005481065 systemd-machined[215705]: Machine qemu-105-instance-0000005c terminated.
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.734 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance destroyed successfully.#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.735 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'numa_topology' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.737 2 INFO nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 19.65 seconds to build instance.#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.787 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Attempting rescue#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.789 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.794 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.794 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating image(s)#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.817 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.820 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'trusted_certs' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.822 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.853 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.875 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.878 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.966 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.968 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.969 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.969 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:56 np0005481065 nova_compute[260935]: 2025-10-11 09:05:56.994 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.000 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.046 2 DEBUG nova.compute.manager [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.046 2 DEBUG oslo_concurrency.lockutils [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.047 2 DEBUG oslo_concurrency.lockutils [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.047 2 DEBUG oslo_concurrency.lockutils [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.048 2 DEBUG nova.compute.manager [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.048 2 WARNING nova.compute.manager [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state active and task_state rescuing.#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.792 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.793 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'migration_context' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.879 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.881 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start _get_guest_xml network_info=[{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "vif_mac": "fa:16:3e:b4:bf:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.881 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'resources' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.908 2 WARNING nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.914 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.915 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.919 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.919 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.920 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.920 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.921 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.921 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.921 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.922 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.922 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.923 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.923 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.923 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.924 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.924 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.924 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'vcpu_model' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:57 np0005481065 nova_compute[260935]: 2025-10-11 09:05:57.958 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 11 05:05:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259846511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.388 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.390 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:05:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690254663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.870 2 DEBUG nova.compute.manager [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.871 2 DEBUG oslo_concurrency.lockutils [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.871 2 DEBUG oslo_concurrency.lockutils [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.872 2 DEBUG oslo_concurrency.lockutils [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.872 2 DEBUG nova.compute.manager [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] No waiting events found dispatching network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.872 2 WARNING nova.compute.manager [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received unexpected event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b for instance with vm_state active and task_state None.#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.876 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.877 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:05:58 np0005481065 nova_compute[260935]: 2025-10-11 09:05:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.101 2 DEBUG nova.compute.manager [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.102 2 DEBUG oslo_concurrency.lockutils [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.102 2 DEBUG oslo_concurrency.lockutils [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.103 2 DEBUG oslo_concurrency.lockutils [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.103 2 DEBUG nova.compute.manager [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.103 2 WARNING nova.compute.manager [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state active and task_state rescuing.#033[00m
Oct 11 05:05:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:05:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676679246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.322 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.323 2 DEBUG nova.virt.libvirt.vif [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:05:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:37Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "vif_mac": "fa:16:3e:b4:bf:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.323 2 DEBUG nova.network.os_vif_util [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "vif_mac": "fa:16:3e:b4:bf:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.324 2 DEBUG nova.network.os_vif_util [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.325 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'pci_devices' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.372 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <uuid>a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</uuid>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <name>instance-0000005c</name>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-942927350</nova:name>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:05:57</nova:creationTime>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:user uuid="b7730a035fdf47498398e20e5aaf9ba4">tempest-ServerRescueTestJSONUnderV235-2035879439-project-member</nova:user>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:project uuid="0a5d578da5e746caa535eef295e1a67d">tempest-ServerRescueTestJSONUnderV235-2035879439</nova:project>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <nova:port uuid="f8dc388c-9e5a-43c2-8dd9-8fc28768ec31">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <entry name="serial">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <entry name="uuid">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <target dev="vdb" bus="virtio"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:b4:bf:b7"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <target dev="tapf8dc388c-9e"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/console.log" append="off"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:05:59 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:05:59 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:05:59 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:05:59 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.382 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance destroyed successfully.#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.493 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.494 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.494 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.495 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No VIF found with MAC fa:16:3e:b4:bf:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.495 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Using config drive#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.530 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.565 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'ec2_ids' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:05:59 np0005481065 nova_compute[260935]: 2025-10-11 09:05:59.622 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'keypairs' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:06:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.059 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating config drive at /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.064 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppi_gl5x7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.108 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173546.0733197, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.109 2 INFO nova.compute.manager [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.164 2 DEBUG nova.compute.manager [None req-b7a6eaac-a604-4839-a6c9-72ed785bf8b8 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.222 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppi_gl5x7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.244 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.247 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.416 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.417 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deleting local config drive /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue because it was imported into RBD.#033[00m
Oct 11 05:06:01 np0005481065 kernel: tapf8dc388c-9e: entered promiscuous mode
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00833|binding|INFO|Claiming lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for this chassis.
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00834|binding|INFO|f8dc388c-9e5a-43c2-8dd9-8fc28768ec31: Claiming fa:16:3e:b4:bf:b7 10.100.0.12
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 NetworkManager[44960]: <info>  [1760173561.4871] manager: (tapf8dc388c-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.515 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.517 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 bound to our chassis#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.519 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.521 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0919731-933f-4ff4-a83c-0d30a1c76288]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00835|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 ovn-installed in OVS
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00836|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 up in Southbound
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 systemd-machined[215705]: New machine qemu-108-instance-0000005c.
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 systemd[1]: Started Virtual Machine qemu-108-instance-0000005c.
Oct 11 05:06:01 np0005481065 systemd-udevd[354487]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:06:01 np0005481065 NetworkManager[44960]: <info>  [1760173561.5670] device (tapf8dc388c-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:06:01 np0005481065 NetworkManager[44960]: <info>  [1760173561.5683] device (tapf8dc388c-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.695 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.696 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.697 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.698 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.698 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.701 2 INFO nova.compute.manager [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Terminating instance#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.703 2 DEBUG nova.compute.manager [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:06:01 np0005481065 kernel: tap4d16ebf4-64 (unregistering): left promiscuous mode
Oct 11 05:06:01 np0005481065 NetworkManager[44960]: <info>  [1760173561.7654] device (tap4d16ebf4-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00837|binding|INFO|Releasing lport 4d16ebf4-64d7-4476-8898-f62d856c729b from this chassis (sb_readonly=0)
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00838|binding|INFO|Setting lport 4d16ebf4-64d7-4476-8898-f62d856c729b down in Southbound
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:01Z|00839|binding|INFO|Removing iface tap4d16ebf4-64 ovn-installed in OVS
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:01 np0005481065 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 11 05:06:01 np0005481065 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005e.scope: Consumed 6.039s CPU time.
Oct 11 05:06:01 np0005481065 systemd-machined[215705]: Machine qemu-107-instance-0000005e terminated.
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.830 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2f:b2 10.100.0.7'], port_security=['fa:16:3e:62:2f:b2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd813afc2-c844-45eb-b1ec-efbbf95498ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da65ea75-f60d-4001-894b-df4408baa99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0eed0ccef3b34c4db44e88ebe1aef9f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27e50c86-89d7-49fd-89cc-35770f4ed881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f380b3-4aca-47a6-b068-b7f58cfe8d61, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=4d16ebf4-64d7-4476-8898-f62d856c729b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.832 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 4d16ebf4-64d7-4476-8898-f62d856c729b in datapath da65ea75-f60d-4001-894b-df4408baa99c unbound from our chassis#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.834 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da65ea75-f60d-4001-894b-df4408baa99c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.836 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc56190-acfd-4a5e-922d-e72961ef5601]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.837 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c namespace which is not needed anymore#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.953 2 INFO nova.virt.libvirt.driver [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance destroyed successfully.#033[00m
Oct 11 05:06:01 np0005481065 nova_compute[260935]: 2025-10-11 09:06:01.954 2 DEBUG nova.objects.instance [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lazy-loading 'resources' on Instance uuid d813afc2-c844-45eb-b1ec-efbbf95498ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.002 2 DEBUG nova.virt.libvirt.vif [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-393316733',id=94,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:05:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0eed0ccef3b34c4db44e88ebe1aef9f0',ramdisk_id='',reservation_id='r-ikgw00lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:05:56Z,user_data=None,user_id='ac3a19a7426e4e51aa4f94b016decc82',uuid=d813afc2-c844-45eb-b1ec-efbbf95498ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.003 2 DEBUG nova.network.os_vif_util [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converting VIF {"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.003 2 DEBUG nova.network.os_vif_util [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.004 2 DEBUG os_vif [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d16ebf4-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.013 2 INFO os_vif [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64')
Oct 11 05:06:02 np0005481065 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : haproxy version is 2.8.14-c23fe91
Oct 11 05:06:02 np0005481065 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : path to executable is /usr/sbin/haproxy
Oct 11 05:06:02 np0005481065 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [ALERT]    (354177) : Current worker (354179) exited with code 143 (Terminated)
Oct 11 05:06:02 np0005481065 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [WARNING]  (354177) : All workers exited. Exiting... (0)
Oct 11 05:06:02 np0005481065 systemd[1]: libpod-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1.scope: Deactivated successfully.
Oct 11 05:06:02 np0005481065 podman[354523]: 2025-10-11 09:06:02.088735667 +0000 UTC m=+0.130526267 container died fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:06:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1-userdata-shm.mount: Deactivated successfully.
Oct 11 05:06:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7944f1711c8b20f7abfb4df9886afc904490f332f531da6a73509c2289aa5d79-merged.mount: Deactivated successfully.
Oct 11 05:06:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 533 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 160 op/s
Oct 11 05:06:02 np0005481065 podman[354523]: 2025-10-11 09:06:02.53546687 +0000 UTC m=+0.577257430 container cleanup fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:06:02 np0005481065 systemd[1]: libpod-conmon-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1.scope: Deactivated successfully.
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.550 2 DEBUG nova.compute.manager [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-unplugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.552 2 DEBUG oslo_concurrency.lockutils [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.553 2 DEBUG oslo_concurrency.lockutils [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.553 2 DEBUG oslo_concurrency.lockutils [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.554 2 DEBUG nova.compute.manager [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] No waiting events found dispatching network-vif-unplugged-4d16ebf4-64d7-4476-8898-f62d856c729b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.554 2 DEBUG nova.compute.manager [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-unplugged-4d16ebf4-64d7-4476-8898-f62d856c729b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.889 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.890 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173562.8885512, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.890 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Resumed (Lifecycle Event)
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.900 2 DEBUG nova.compute.manager [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.928 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.932 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.998 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173562.896696, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:06:02 np0005481065 nova_compute[260935]: 2025-10-11 09:06:02.998 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Started (Lifecycle Event)
Oct 11 05:06:03 np0005481065 nova_compute[260935]: 2025-10-11 09:06:03.054 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:06:03 np0005481065 nova_compute[260935]: 2025-10-11 09:06:03.059 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:06:03 np0005481065 podman[354635]: 2025-10-11 09:06:03.130433844 +0000 UTC m=+0.558414012 container remove fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc255948-0482-4b67-8d64-65c012cd3bdf]: (4, ('Sat Oct 11 09:06:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c (fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1)\nfe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1\nSat Oct 11 09:06:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c (fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1)\nfe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[137aa968-0294-4af6-a27f-2d1706da5d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.144 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda65ea75-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:06:03 np0005481065 nova_compute[260935]: 2025-10-11 09:06:03.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:03 np0005481065 kernel: tapda65ea75-f0: left promiscuous mode
Oct 11 05:06:03 np0005481065 nova_compute[260935]: 2025-10-11 09:06:03.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.185 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[338c2800-12ea-4cc6-b371-1a5c56c9cab6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.215 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f93fa9-e1a3-4f2e-992c-fd1d7ab3d4d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.218 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4336244-da3a-4a4c-9195-e4ae768ccbe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8f71b1-be11-457f-bd0d-b569babf21e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539501, 'reachable_time': 41708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354651, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.248 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 05:06:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.249 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b02f2b3e-a4b9-4701-bab7-58c702aceb9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:03 np0005481065 systemd[1]: run-netns-ovnmeta\x2dda65ea75\x2df60d\x2d4001\x2d894b\x2ddf4408baa99c.mount: Deactivated successfully.
Oct 11 05:06:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 546 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.708 2 DEBUG nova.compute.manager [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.709 2 DEBUG oslo_concurrency.lockutils [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.710 2 DEBUG oslo_concurrency.lockutils [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.710 2 DEBUG oslo_concurrency.lockutils [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.710 2 DEBUG nova.compute.manager [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.711 2 WARNING nova.compute.manager [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state rescued and task_state None.
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.839 2 DEBUG nova.compute.manager [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.840 2 DEBUG oslo_concurrency.lockutils [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.840 2 DEBUG oslo_concurrency.lockutils [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.840 2 DEBUG oslo_concurrency.lockutils [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.841 2 DEBUG nova.compute.manager [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] No waiting events found dispatching network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:06:04 np0005481065 nova_compute[260935]: 2025-10-11 09:06:04.841 2 WARNING nova.compute.manager [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received unexpected event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b for instance with vm_state active and task_state deleting.
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004430873339519437 of space, bias 1.0, pg target 1.329262001855831 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:06:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:06:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 546 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 11 05:06:06 np0005481065 nova_compute[260935]: 2025-10-11 09:06:06.885 2 DEBUG nova.compute.manager [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:06:06 np0005481065 nova_compute[260935]: 2025-10-11 09:06:06.885 2 DEBUG oslo_concurrency.lockutils [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:06:06 np0005481065 nova_compute[260935]: 2025-10-11 09:06:06.886 2 DEBUG oslo_concurrency.lockutils [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:06:06 np0005481065 nova_compute[260935]: 2025-10-11 09:06:06.886 2 DEBUG oslo_concurrency.lockutils [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:06:06 np0005481065 nova_compute[260935]: 2025-10-11 09:06:06.886 2 DEBUG nova.compute.manager [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:06:06 np0005481065 nova_compute[260935]: 2025-10-11 09:06:06.886 2 WARNING nova.compute.manager [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state rescued and task_state None.
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.009692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567009771, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1745, "num_deletes": 257, "total_data_size": 2594584, "memory_usage": 2639072, "flush_reason": "Manual Compaction"}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567262497, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2543747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38307, "largest_seqno": 40051, "table_properties": {"data_size": 2535700, "index_size": 4861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17320, "raw_average_key_size": 20, "raw_value_size": 2519308, "raw_average_value_size": 2995, "num_data_blocks": 215, "num_entries": 841, "num_filter_entries": 841, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173414, "oldest_key_time": 1760173414, "file_creation_time": 1760173567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 252873 microseconds, and 8069 cpu microseconds.
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.262568) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2543747 bytes OK
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.262601) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.281036) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.281068) EVENT_LOG_v1 {"time_micros": 1760173567281057, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.281095) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2586960, prev total WAL file size 2586960, number of live WAL files 2.
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.282297) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2484KB)], [86(8469KB)]
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567282391, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11216842, "oldest_snapshot_seqno": -1}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6372 keys, 9580481 bytes, temperature: kUnknown
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567415892, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9580481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9536811, "index_size": 26664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 161978, "raw_average_key_size": 25, "raw_value_size": 9421353, "raw_average_value_size": 1478, "num_data_blocks": 1074, "num_entries": 6372, "num_filter_entries": 6372, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.416145) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9580481 bytes
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.420222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.0 rd, 71.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6898, records dropped: 526 output_compression: NoCompression
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.420241) EVENT_LOG_v1 {"time_micros": 1760173567420232, "job": 50, "event": "compaction_finished", "compaction_time_micros": 133573, "compaction_time_cpu_micros": 23795, "output_level": 6, "num_output_files": 1, "total_output_size": 9580481, "num_input_records": 6898, "num_output_records": 6372, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567420845, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567422508, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.282140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:06:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.823 2 INFO nova.virt.libvirt.driver [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deleting instance files /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef_del#033[00m
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.824 2 INFO nova.virt.libvirt.driver [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deletion of /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef_del complete#033[00m
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.979 2 INFO nova.compute.manager [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 6.28 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.980 2 DEBUG oslo.service.loopingcall [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.980 2 DEBUG nova.compute.manager [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:06:07 np0005481065 nova_compute[260935]: 2025-10-11 09:06:07.980 2 DEBUG nova.network.neutron [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:06:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 11 05:06:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.067 2 DEBUG nova.compute.manager [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.067 2 DEBUG nova.compute.manager [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.068 2 DEBUG oslo_concurrency.lockutils [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.069 2 DEBUG oslo_concurrency.lockutils [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.069 2 DEBUG nova.network.neutron [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.628 2 DEBUG nova.network.neutron [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.727 2 INFO nova.compute.manager [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 1.75 seconds to deallocate network for instance.#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.802 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:09 np0005481065 nova_compute[260935]: 2025-10-11 09:06:09.803 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.025 2 DEBUG oslo_concurrency.processutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 11 05:06:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:06:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825419757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.522 2 DEBUG oslo_concurrency.processutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.531 2 DEBUG nova.compute.provider_tree [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.560 2 DEBUG nova.scheduler.client.report [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.643 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.724 2 INFO nova.scheduler.client.report [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Deleted allocations for instance d813afc2-c844-45eb-b1ec-efbbf95498ef#033[00m
Oct 11 05:06:10 np0005481065 nova_compute[260935]: 2025-10-11 09:06:10.939 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.228 2 DEBUG nova.compute.manager [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-deleted-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.228 2 DEBUG nova.compute.manager [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.229 2 DEBUG nova.compute.manager [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.229 2 DEBUG oslo_concurrency.lockutils [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.467 2 DEBUG nova.network.neutron [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.468 2 DEBUG nova.network.neutron [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.519 2 DEBUG oslo_concurrency.lockutils [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.520 2 DEBUG oslo_concurrency.lockutils [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:11 np0005481065 nova_compute[260935]: 2025-10-11 09:06:11.521 2 DEBUG nova.network.neutron [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:06:12 np0005481065 nova_compute[260935]: 2025-10-11 09:06:12.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 11 05:06:13 np0005481065 podman[354675]: 2025-10-11 09:06:13.770961648 +0000 UTC m=+0.070284407 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 11 05:06:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:14 np0005481065 nova_compute[260935]: 2025-10-11 09:06:14.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 734 KiB/s wr, 99 op/s
Oct 11 05:06:14 np0005481065 nova_compute[260935]: 2025-10-11 09:06:14.340 2 DEBUG nova.network.neutron [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:06:14 np0005481065 nova_compute[260935]: 2025-10-11 09:06:14.340 2 DEBUG nova.network.neutron [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:14 np0005481065 nova_compute[260935]: 2025-10-11 09:06:14.392 2 DEBUG oslo_concurrency.lockutils [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:06:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:15.202 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:15.202 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.535 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.536 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:15 np0005481065 NetworkManager[44960]: <info>  [1760173575.5395] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct 11 05:06:15 np0005481065 NetworkManager[44960]: <info>  [1760173575.5411] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.691 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:15Z|00840|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:06:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:15Z|00841|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.865 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.866 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.875 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:06:15 np0005481065 nova_compute[260935]: 2025-10-11 09:06:15.876 2 INFO nova.compute.claims [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.201 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.273 2 DEBUG nova.compute.manager [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.274 2 DEBUG nova.compute.manager [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.275 2 DEBUG oslo_concurrency.lockutils [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.275 2 DEBUG oslo_concurrency.lockutils [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.276 2 DEBUG nova.network.neutron [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:06:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 88 op/s
Oct 11 05:06:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:06:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934959076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.747 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.755 2 DEBUG nova.compute.provider_tree [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.804 2 DEBUG nova.scheduler.client.report [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.944 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173561.9432986, d813afc2-c844-45eb-b1ec-efbbf95498ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.945 2 INFO nova.compute.manager [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.996 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:16 np0005481065 nova_compute[260935]: 2025-10-11 09:06:16.998 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.022 2 DEBUG nova.compute.manager [None req-347f5c8d-42ba-4b21-8b5d-43f93fc6b508 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.229 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.230 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.333 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.394 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.622 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.625 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.626 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Creating image(s)#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.659 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.703 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.746 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.754 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.828 2 DEBUG nova.policy [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c555ec17274647ed83d33852feed0fa6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b86be458d54543fcae9df9884591edf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.868 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.870 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.871 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.871 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.908 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:17 np0005481065 nova_compute[260935]: 2025-10-11 09:06:17.914 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 079324f3-2fba-431a-9b8a-6b755af3fe74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.286 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 079324f3-2fba-431a-9b8a-6b755af3fe74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 147 op/s
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.375 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] resizing rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.501 2 DEBUG nova.objects.instance [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lazy-loading 'migration_context' on Instance uuid 079324f3-2fba-431a-9b8a-6b755af3fe74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.537 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.537 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Ensure instance console log exists: /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.538 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.538 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.539 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:18Z|00842|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:06:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:18Z|00843|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.992 2 DEBUG nova.network.neutron [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:06:18 np0005481065 nova_compute[260935]: 2025-10-11 09:06:18.993 2 DEBUG nova.network.neutron [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:19 np0005481065 nova_compute[260935]: 2025-10-11 09:06:19.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:19 np0005481065 nova_compute[260935]: 2025-10-11 09:06:19.030 2 DEBUG oslo_concurrency.lockutils [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:06:19 np0005481065 nova_compute[260935]: 2025-10-11 09:06:19.245 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Successfully created port: f44dbabf-c6b7-4aa2-ac55-6df262611e5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:06:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 12 KiB/s wr, 67 op/s
Oct 11 05:06:20 np0005481065 podman[354884]: 2025-10-11 09:06:20.77911865 +0000 UTC m=+0.078112911 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:06:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:21.215 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:06:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:21.217 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.387 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Successfully updated port: f44dbabf-c6b7-4aa2-ac55-6df262611e5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.453 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.453 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquired lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.454 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.534 2 DEBUG nova.compute.manager [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-changed-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.535 2 DEBUG nova.compute.manager [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Refreshing instance network info cache due to event network-changed-f44dbabf-c6b7-4aa2-ac55-6df262611e5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.535 2 DEBUG oslo_concurrency.lockutils [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:21 np0005481065 nova_compute[260935]: 2025-10-11 09:06:21.678 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:06:22 np0005481065 nova_compute[260935]: 2025-10-11 09:06:22.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 541 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 1.5 MiB/s wr, 93 op/s
Oct 11 05:06:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.224 2 DEBUG nova.compute.manager [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.224 2 DEBUG nova.compute.manager [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.225 2 DEBUG oslo_concurrency.lockutils [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.225 2 DEBUG oslo_concurrency.lockutils [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.225 2 DEBUG nova.network.neutron [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.367 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updating instance_info_cache with network_info: [{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.426 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Releasing lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.427 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance network_info: |[{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.427 2 DEBUG oslo_concurrency.lockutils [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.428 2 DEBUG nova.network.neutron [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Refreshing network info cache for port f44dbabf-c6b7-4aa2-ac55-6df262611e5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.433 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start _get_guest_xml network_info=[{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.440 2 WARNING nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.446 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.447 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.451 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.452 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.452 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.453 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.453 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.454 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.454 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.455 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.455 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.455 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.456 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.456 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.456 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.457 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.461 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:06:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:06:24 np0005481065 podman[354924]: 2025-10-11 09:06:24.813546458 +0000 UTC m=+0.107669675 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true)
Oct 11 05:06:24 np0005481065 podman[354925]: 2025-10-11 09:06:24.853837488 +0000 UTC m=+0.140700838 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 11 05:06:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:06:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714150570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.963 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.988 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:24 np0005481065 nova_compute[260935]: 2025-10-11 09:06:24.993 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:06:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926475377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.460 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.463 2 DEBUG nova.virt.libvirt.vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:06:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1151527895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1151527895',id=95,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b86be458d54543fcae9df9884591edf3',ramdisk_id='',reservation_id='r-7frv4663',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-363741472',owner_user_name='tempest-ServerTagsTestJSON-363741472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:06:17Z,user_data=None,user_id='c555ec17274647ed83d33852feed0fa6',uuid=079324f3-2fba-431a-9b8a-6b755af3fe74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.464 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converting VIF {"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.465 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.469 2 DEBUG nova.objects.instance [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 079324f3-2fba-431a-9b8a-6b755af3fe74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.498 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <uuid>079324f3-2fba-431a-9b8a-6b755af3fe74</uuid>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <name>instance-0000005f</name>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerTagsTestJSON-server-1151527895</nova:name>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:06:24</nova:creationTime>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:user uuid="c555ec17274647ed83d33852feed0fa6">tempest-ServerTagsTestJSON-363741472-project-member</nova:user>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:project uuid="b86be458d54543fcae9df9884591edf3">tempest-ServerTagsTestJSON-363741472</nova:project>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <nova:port uuid="f44dbabf-c6b7-4aa2-ac55-6df262611e5e">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <entry name="serial">079324f3-2fba-431a-9b8a-6b755af3fe74</entry>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <entry name="uuid">079324f3-2fba-431a-9b8a-6b755af3fe74</entry>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/079324f3-2fba-431a-9b8a-6b755af3fe74_disk">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:47:6a:2d"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <target dev="tapf44dbabf-c6"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/console.log" append="off"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:06:25 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:06:25 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:06:25 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:06:25 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.499 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Preparing to wait for external event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.499 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.500 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.500 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.502 2 DEBUG nova.virt.libvirt.vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:06:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1151527895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1151527895',id=95,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b86be458d54543fcae9df9884591edf3',ramdisk_id='',reservation_id='r-7frv4663',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-363741472',owner_user_name='tempest-ServerTagsTestJSON-363741472-project-member'},tags=TagList,task_state
='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:06:17Z,user_data=None,user_id='c555ec17274647ed83d33852feed0fa6',uuid=079324f3-2fba-431a-9b8a-6b755af3fe74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.503 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converting VIF {"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.504 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.505 2 DEBUG os_vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf44dbabf-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf44dbabf-c6, col_values=(('external_ids', {'iface-id': 'f44dbabf-c6b7-4aa2-ac55-6df262611e5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:6a:2d', 'vm-uuid': '079324f3-2fba-431a-9b8a-6b755af3fe74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:25 np0005481065 NetworkManager[44960]: <info>  [1760173585.5162] manager: (tapf44dbabf-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.528 2 INFO os_vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6')#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.616 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.616 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.617 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] No VIF found with MAC fa:16:3e:47:6a:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.617 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Using config drive#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.651 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.660 2 DEBUG nova.network.neutron [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.661 2 DEBUG nova.network.neutron [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:25 np0005481065 nova_compute[260935]: 2025-10-11 09:06:25.759 2 DEBUG oslo_concurrency.lockutils [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.118 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Creating config drive at /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config#033[00m
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.123 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyakuuedm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.290 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyakuuedm" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.314 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.321 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.527 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.529 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deleting local config drive /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config because it was imported into RBD.#033[00m
Oct 11 05:06:26 np0005481065 kernel: tapf44dbabf-c6: entered promiscuous mode
Oct 11 05:06:26 np0005481065 NetworkManager[44960]: <info>  [1760173586.6094] manager: (tapf44dbabf-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:26Z|00844|binding|INFO|Claiming lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e for this chassis.
Oct 11 05:06:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:26Z|00845|binding|INFO|f44dbabf-c6b7-4aa2-ac55-6df262611e5e: Claiming fa:16:3e:47:6a:2d 10.100.0.3
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:26Z|00846|binding|INFO|Setting lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e ovn-installed in OVS
Oct 11 05:06:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:26Z|00847|binding|INFO|Setting lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e up in Southbound
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.657 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:2d 10.100.0.3'], port_security=['fa:16:3e:47:6a:2d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '079324f3-2fba-431a-9b8a-6b755af3fe74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b86be458d54543fcae9df9884591edf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97cd21ff-67df-467b-bb0c-629509de3f23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=820964f1-3ad5-4fec-b6e5-3d57a1853278, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f44dbabf-c6b7-4aa2-ac55-6df262611e5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.659 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f44dbabf-c6b7-4aa2-ac55-6df262611e5e in datapath c4ebf6c9-009d-4565-ba58-6b9452636d41 bound to our chassis
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.661 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4ebf6c9-009d-4565-ba58-6b9452636d41
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:26 np0005481065 systemd-udevd[355084]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:06:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:06:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834400119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:06:26 np0005481065 systemd-machined[215705]: New machine qemu-109-instance-0000005f.
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c3d0dd-e1b1-432c-9567-b889b5698cab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.678 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4ebf6c9-01 in ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.680 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4ebf6c9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.681 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6323db-6fbd-4e2b-b1df-9effd6e7a210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:06:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834400119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.682 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bda94d8b-cd2a-46d6-a2ed-0b1c4914ab83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 NetworkManager[44960]: <info>  [1760173586.6957] device (tapf44dbabf-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:06:26 np0005481065 NetworkManager[44960]: <info>  [1760173586.6965] device (tapf44dbabf-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:06:26 np0005481065 systemd[1]: Started Virtual Machine qemu-109-instance-0000005f.
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.699 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[368554e5-7f59-4684-87b8-8f8afc5ec0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.715 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[082797a3-ab94-4e07-872d-7b654602f959]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[416d1f22-c2a3-47b1-9d39-d573c75a26a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.762 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6180c8-307e-448c-9ab5-86180f74f273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 NetworkManager[44960]: <info>  [1760173586.7634] manager: (tapc4ebf6c9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/364)
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.812 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7a69f9-32df-4948-a5d1-bdf28f1be944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.817 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18c4ca42-8957-43d4-b3b8-2ebc6f2dd822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 NetworkManager[44960]: <info>  [1760173586.8576] device (tapc4ebf6c9-00): carrier: link connected
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.867 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eae7a1f8-b716-4577-9d2e-2f3e5266d4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.890 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[774a4fe5-2f7a-4222-badd-6a642db04dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4ebf6c9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543156, 'reachable_time': 40561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355117, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.912 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[365b7d57-3cef-4768-9aba-d5d7bc5d0880]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543156, 'tstamp': 543156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355118, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.934 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec941c9a-28f8-4d94-80e4-1f142cbf1033]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4ebf6c9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543156, 'reachable_time': 40561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355119, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.957 2 DEBUG nova.network.neutron [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updated VIF entry in instance network info cache for port f44dbabf-c6b7-4aa2-ac55-6df262611e5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:06:26 np0005481065 nova_compute[260935]: 2025-10-11 09:06:26.958 2 DEBUG nova.network.neutron [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updating instance_info_cache with network_info: [{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:06:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[101f8500-3feb-4505-ad76-28fc4f814222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.008 2 DEBUG oslo_concurrency.lockutils [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:06:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:27Z|00848|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:06:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:27Z|00849|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1559799-3887-4f49-af2e-9d0cb31953b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.086 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4ebf6c9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4ebf6c9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 NetworkManager[44960]: <info>  [1760173587.0904] manager: (tapc4ebf6c9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct 11 05:06:27 np0005481065 kernel: tapc4ebf6c9-00: entered promiscuous mode
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.141 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4ebf6c9-00, col_values=(('external_ids', {'iface-id': '07f4b6ae-9260-4c66-a561-df5bb8d757b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.146 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4ebf6c9-009d-4565-ba58-6b9452636d41.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4ebf6c9-009d-4565-ba58-6b9452636d41.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.147 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5683be3-de48-4245-8360-ecee887ed4c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.148 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-c4ebf6c9-009d-4565-ba58-6b9452636d41
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/c4ebf6c9-009d-4565-ba58-6b9452636d41.pid.haproxy
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID c4ebf6c9-009d-4565-ba58-6b9452636d41
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 05:06:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.148 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'env', 'PROCESS_TAG=haproxy-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4ebf6c9-009d-4565-ba58-6b9452636d41.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 05:06:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:27Z|00850|binding|INFO|Releasing lport 07f4b6ae-9260-4c66-a561-df5bb8d757b4 from this chassis (sb_readonly=0)
Oct 11 05:06:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:27Z|00851|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:06:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:27Z|00852|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.155 2 DEBUG nova.compute.manager [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.155 2 DEBUG oslo_concurrency.lockutils [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.156 2 DEBUG oslo_concurrency.lockutils [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.156 2 DEBUG oslo_concurrency.lockutils [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.156 2 DEBUG nova.compute.manager [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Processing event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:06:27 np0005481065 podman[355193]: 2025-10-11 09:06:27.548939134 +0000 UTC m=+0.062147365 container create 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:06:27 np0005481065 systemd[1]: Started libpod-conmon-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446.scope.
Oct 11 05:06:27 np0005481065 podman[355193]: 2025-10-11 09:06:27.514629285 +0000 UTC m=+0.027837556 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:06:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5116281b771ae2e0fec1c2aeaecd4d786f9db2d4ea87dc93adfd474008700f33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:27 np0005481065 podman[355193]: 2025-10-11 09:06:27.64125814 +0000 UTC m=+0.154466401 container init 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 05:06:27 np0005481065 podman[355193]: 2025-10-11 09:06:27.647287052 +0000 UTC m=+0.160495283 container start 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:06:27 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : New worker (355215) forked
Oct 11 05:06:27 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : Loading success.
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.776 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173587.7765043, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.777 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Started (Lifecycle Event)#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.780 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.784 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.788 2 INFO nova.virt.libvirt.driver [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance spawned successfully.#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.789 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.842 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.852 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.853 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.854 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.854 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.855 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.856 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.915 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.915 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173587.77752, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.916 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.962 2 INFO nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.963 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.967 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.979 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173587.7838619, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:27 np0005481065 nova_compute[260935]: 2025-10-11 09:06:27.980 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:06:28 np0005481065 nova_compute[260935]: 2025-10-11 09:06:28.036 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:28 np0005481065 nova_compute[260935]: 2025-10-11 09:06:28.040 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:06:28 np0005481065 nova_compute[260935]: 2025-10-11 09:06:28.081 2 INFO nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 12.27 seconds to build instance.#033[00m
Oct 11 05:06:28 np0005481065 nova_compute[260935]: 2025-10-11 09:06:28.193 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 11 05:06:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.219 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.345 2 DEBUG nova.compute.manager [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.345 2 DEBUG oslo_concurrency.lockutils [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.345 2 DEBUG oslo_concurrency.lockutils [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.346 2 DEBUG oslo_concurrency.lockutils [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.346 2 DEBUG nova.compute.manager [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] No waiting events found dispatching network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.346 2 WARNING nova.compute.manager [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received unexpected event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e for instance with vm_state active and task_state None.#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.364 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.364 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.365 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.365 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.366 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.367 2 INFO nova.compute.manager [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Terminating instance#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.368 2 DEBUG nova.compute.manager [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:06:29 np0005481065 kernel: tapf8dc388c-9e (unregistering): left promiscuous mode
Oct 11 05:06:29 np0005481065 NetworkManager[44960]: <info>  [1760173589.4461] device (tapf8dc388c-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:29Z|00853|binding|INFO|Releasing lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 from this chassis (sb_readonly=0)
Oct 11 05:06:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:29Z|00854|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 down in Southbound
Oct 11 05:06:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:29Z|00855|binding|INFO|Removing iface tapf8dc388c-9e ovn-installed in OVS
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:29 np0005481065 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 11 05:06:29 np0005481065 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005c.scope: Consumed 12.937s CPU time.
Oct 11 05:06:29 np0005481065 systemd-machined[215705]: Machine qemu-108-instance-0000005c terminated.
Oct 11 05:06:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.566 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:06:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.574 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 unbound from our chassis#033[00m
Oct 11 05:06:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.576 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:06:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11dd83b4-edc4-471e-8d16-8943a296d41a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.612 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance destroyed successfully.#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.612 2 DEBUG nova.objects.instance [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'resources' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.673 2 DEBUG nova.virt.libvirt.vif [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:06:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:06:02Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.674 2 DEBUG nova.network.os_vif_util [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.675 2 DEBUG nova.network.os_vif_util [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.676 2 DEBUG os_vif [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8dc388c-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:06:29 np0005481065 nova_compute[260935]: 2025-10-11 09:06:29.686 2 INFO os_vif [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e')#033[00m
Oct 11 05:06:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:06:30 np0005481065 nova_compute[260935]: 2025-10-11 09:06:30.448 2 INFO nova.virt.libvirt.driver [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deleting instance files /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_del#033[00m
Oct 11 05:06:30 np0005481065 nova_compute[260935]: 2025-10-11 09:06:30.449 2 INFO nova.virt.libvirt.driver [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deletion of /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_del complete#033[00m
Oct 11 05:06:30 np0005481065 nova_compute[260935]: 2025-10-11 09:06:30.655 2 INFO nova.compute.manager [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 1.29 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:06:30 np0005481065 nova_compute[260935]: 2025-10-11 09:06:30.656 2 DEBUG oslo.service.loopingcall [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:06:30 np0005481065 nova_compute[260935]: 2025-10-11 09:06:30.656 2 DEBUG nova.compute.manager [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:06:30 np0005481065 nova_compute[260935]: 2025-10-11 09:06:30.657 2 DEBUG nova.network.neutron [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.555 2 DEBUG nova.compute.manager [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.556 2 DEBUG oslo_concurrency.lockutils [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.556 2 DEBUG oslo_concurrency.lockutils [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.557 2 DEBUG oslo_concurrency.lockutils [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.557 2 DEBUG nova.compute.manager [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.557 2 DEBUG nova.compute.manager [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.725 2 DEBUG nova.network.neutron [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.779 2 INFO nova.compute.manager [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.856 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:31 np0005481065 nova_compute[260935]: 2025-10-11 09:06:31.857 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:32 np0005481065 nova_compute[260935]: 2025-10-11 09:06:32.068 2 DEBUG oslo_concurrency.processutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 454 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 05:06:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:06:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592571589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:06:32 np0005481065 nova_compute[260935]: 2025-10-11 09:06:32.612 2 DEBUG oslo_concurrency.processutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:32 np0005481065 nova_compute[260935]: 2025-10-11 09:06:32.618 2 DEBUG nova.compute.provider_tree [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:06:32 np0005481065 nova_compute[260935]: 2025-10-11 09:06:32.658 2 DEBUG nova.scheduler.client.report [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:06:32 np0005481065 nova_compute[260935]: 2025-10-11 09:06:32.741 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:32 np0005481065 nova_compute[260935]: 2025-10-11 09:06:32.803 2 INFO nova.scheduler.client.report [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Deleted allocations for instance a83f40e4-c852-4b45-a3d2-1cd65e9aaa31#033[00m
Oct 11 05:06:33 np0005481065 nova_compute[260935]: 2025-10-11 09:06:33.018 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.030 2 DEBUG nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.031 2 DEBUG oslo_concurrency.lockutils [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.032 2 DEBUG oslo_concurrency.lockutils [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.032 2 DEBUG oslo_concurrency.lockutils [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.033 2 DEBUG nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.033 2 WARNING nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.034 2 DEBUG nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-deleted-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 275 KiB/s wr, 129 op/s
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.742 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.743 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.744 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.744 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.745 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.747 2 INFO nova.compute.manager [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Terminating instance#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.749 2 DEBUG nova.compute.manager [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:06:34 np0005481065 kernel: tapf44dbabf-c6 (unregistering): left promiscuous mode
Oct 11 05:06:34 np0005481065 NetworkManager[44960]: <info>  [1760173594.7905] device (tapf44dbabf-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:06:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:34Z|00856|binding|INFO|Releasing lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e from this chassis (sb_readonly=0)
Oct 11 05:06:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:34Z|00857|binding|INFO|Setting lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e down in Southbound
Oct 11 05:06:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:34Z|00858|binding|INFO|Removing iface tapf44dbabf-c6 ovn-installed in OVS
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:34 np0005481065 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct 11 05:06:34 np0005481065 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Consumed 8.007s CPU time.
Oct 11 05:06:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.858 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:2d 10.100.0.3'], port_security=['fa:16:3e:47:6a:2d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '079324f3-2fba-431a-9b8a-6b755af3fe74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b86be458d54543fcae9df9884591edf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97cd21ff-67df-467b-bb0c-629509de3f23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=820964f1-3ad5-4fec-b6e5-3d57a1853278, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f44dbabf-c6b7-4aa2-ac55-6df262611e5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:06:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.860 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f44dbabf-c6b7-4aa2-ac55-6df262611e5e in datapath c4ebf6c9-009d-4565-ba58-6b9452636d41 unbound from our chassis#033[00m
Oct 11 05:06:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.863 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4ebf6c9-009d-4565-ba58-6b9452636d41, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:06:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.864 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fd342d-49b0-49b2-b123-d5bac162604a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.865 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 namespace which is not needed anymore#033[00m
Oct 11 05:06:34 np0005481065 systemd-machined[215705]: Machine qemu-109-instance-0000005f terminated.
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.992 2 INFO nova.virt.libvirt.driver [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance destroyed successfully.#033[00m
Oct 11 05:06:34 np0005481065 nova_compute[260935]: 2025-10-11 09:06:34.993 2 DEBUG nova.objects.instance [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lazy-loading 'resources' on Instance uuid 079324f3-2fba-431a-9b8a-6b755af3fe74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:06:35 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : haproxy version is 2.8.14-c23fe91
Oct 11 05:06:35 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : path to executable is /usr/sbin/haproxy
Oct 11 05:06:35 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [WARNING]  (355213) : Exiting Master process...
Oct 11 05:06:35 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [ALERT]    (355213) : Current worker (355215) exited with code 143 (Terminated)
Oct 11 05:06:35 np0005481065 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [WARNING]  (355213) : All workers exited. Exiting... (0)
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.063 2 DEBUG nova.virt.libvirt.vif [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:06:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1151527895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1151527895',id=95,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:06:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b86be458d54543fcae9df9884591edf3',ramdisk_id='',reservation_id='r-7frv4663',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-363741472',owner_user_name='tempest-ServerTagsTestJSON-363741472-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:06:28Z,user_data=None,user_id='c555ec17274647ed83d33852feed0fa6',uuid=079324f3-2fba-431a-9b8a-6b755af3fe74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.063 2 DEBUG nova.network.os_vif_util [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converting VIF {"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:06:35 np0005481065 systemd[1]: libpod-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446.scope: Deactivated successfully.
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.065 2 DEBUG nova.network.os_vif_util [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.065 2 DEBUG os_vif [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf44dbabf-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:35 np0005481065 podman[355312]: 2025-10-11 09:06:35.074044673 +0000 UTC m=+0.063863955 container died 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.075 2 INFO os_vif [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6')#033[00m
Oct 11 05:06:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446-userdata-shm.mount: Deactivated successfully.
Oct 11 05:06:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5116281b771ae2e0fec1c2aeaecd4d786f9db2d4ea87dc93adfd474008700f33-merged.mount: Deactivated successfully.
Oct 11 05:06:35 np0005481065 podman[355312]: 2025-10-11 09:06:35.136946778 +0000 UTC m=+0.126766080 container cleanup 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:06:35 np0005481065 systemd[1]: libpod-conmon-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446.scope: Deactivated successfully.
Oct 11 05:06:35 np0005481065 podman[355363]: 2025-10-11 09:06:35.233115144 +0000 UTC m=+0.059123829 container remove 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[befe9dc5-789c-4dcc-8854-a65396586f2a]: (4, ('Sat Oct 11 09:06:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 (430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446)\n430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446\nSat Oct 11 09:06:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 (430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446)\n430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05b8a7c9-1132-4d68-86b7-eec5c92d0269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.248 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4ebf6c9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:35 np0005481065 kernel: tapc4ebf6c9-00: left promiscuous mode
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.277 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90540d08-3810-44e0-9790-3fc7bdb9c802]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.300 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9351cf-98b6-4179-b304-807f3eed23f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2de8fe03-18b2-48ac-ae38-708828b92857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.340 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60da574d-ea12-4632-aad0-ccf3d219f655]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543145, 'reachable_time': 38993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355380, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 systemd[1]: run-netns-ovnmeta\x2dc4ebf6c9\x2d009d\x2d4565\x2dba58\x2d6b9452636d41.mount: Deactivated successfully.
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.345 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:06:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.345 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f5bfd1-07fc-4400-9c1c-92531d6f5fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.522 2 INFO nova.virt.libvirt.driver [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deleting instance files /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74_del#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.524 2 INFO nova.virt.libvirt.driver [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deletion of /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74_del complete#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.569 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.570 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.570 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.681 2 INFO nova.compute.manager [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.682 2 DEBUG oslo.service.loopingcall [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.683 2 DEBUG nova.compute.manager [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.684 2 DEBUG nova.network.neutron [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.919 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.919 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:06:35 np0005481065 nova_compute[260935]: 2025-10-11 09:06:35.920 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:06:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 127 op/s
Oct 11 05:06:36 np0005481065 nova_compute[260935]: 2025-10-11 09:06:36.747 2 DEBUG nova.network.neutron [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:36 np0005481065 nova_compute[260935]: 2025-10-11 09:06:36.796 2 INFO nova.compute.manager [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 1.11 seconds to deallocate network for instance.#033[00m
Oct 11 05:06:36 np0005481065 nova_compute[260935]: 2025-10-11 09:06:36.858 2 DEBUG nova.compute.manager [req-08f9846a-1750-4858-87d3-e0166b5eeb3c req-be75b409-ee1a-4220-a251-199506322aa6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-vif-deleted-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:06:36 np0005481065 nova_compute[260935]: 2025-10-11 09:06:36.882 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:36 np0005481065 nova_compute[260935]: 2025-10-11 09:06:36.882 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.057 2 DEBUG oslo_concurrency.processutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:06:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067880928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.557 2 DEBUG oslo_concurrency.processutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.566 2 DEBUG nova.compute.provider_tree [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.626 2 DEBUG nova.scheduler.client.report [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.636 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.672 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.673 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.675 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.676 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.681 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.682 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.683 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.684 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.685 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.686 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.686 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.716 2 INFO nova.scheduler.client.report [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Deleted allocations for instance 079324f3-2fba-431a-9b8a-6b755af3fe74#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.752 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:37 np0005481065 nova_compute[260935]: 2025-10-11 09:06:37.919 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:38Z|00859|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:06:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:06:38Z|00860|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:06:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:06:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376962690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.252 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 154 op/s
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.483 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.484 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.484 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.490 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.490 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.496 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.496 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.501 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.778 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3146MB free_disk=59.788875579833984GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.781 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.781 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:06:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.969 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.969 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.969 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:06:38 np0005481065 nova_compute[260935]: 2025-10-11 09:06:38.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.112 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:06:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:06:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578979883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.694 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.705 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.771 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.875 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:06:39 np0005481065 nova_compute[260935]: 2025-10-11 09:06:39.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:06:40 np0005481065 nova_compute[260935]: 2025-10-11 09:06:40.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 144 op/s
Oct 11 05:06:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 144 op/s
Oct 11 05:06:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:44 np0005481065 nova_compute[260935]: 2025-10-11 09:06:44.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 1.5 KiB/s wr, 51 op/s
Oct 11 05:06:44 np0005481065 nova_compute[260935]: 2025-10-11 09:06:44.607 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173589.6062007, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:44 np0005481065 nova_compute[260935]: 2025-10-11 09:06:44.608 2 INFO nova.compute.manager [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:06:44 np0005481065 nova_compute[260935]: 2025-10-11 09:06:44.658 2 DEBUG nova.compute.manager [None req-d0ec6287-caab-4f20-80b0-0aeaf5d41526 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:44 np0005481065 podman[355448]: 2025-10-11 09:06:44.791493554 +0000 UTC m=+0.090941447 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 05:06:45 np0005481065 nova_compute[260935]: 2025-10-11 09:06:45.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 05:06:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 05:06:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:49 np0005481065 nova_compute[260935]: 2025-10-11 09:06:49.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:49 np0005481065 nova_compute[260935]: 2025-10-11 09:06:49.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173594.9862652, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:06:49 np0005481065 nova_compute[260935]: 2025-10-11 09:06:49.988 2 INFO nova.compute.manager [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:06:50 np0005481065 nova_compute[260935]: 2025-10-11 09:06:50.044 2 DEBUG nova.compute.manager [None req-27a818a9-b487-4c54-9e1a-6c489f013290 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:06:50 np0005481065 nova_compute[260935]: 2025-10-11 09:06:50.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:06:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6938261c-e0e3-46cf-95c3-fe921f11058e does not exist
Oct 11 05:06:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9f3c5bb3-4126-41d9-89a1-da16ae0f37e5 does not exist
Oct 11 05:06:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a6761191-011f-4788-a267-d508a01b7f23 does not exist
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:06:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:06:51 np0005481065 podman[355624]: 2025-10-11 09:06:51.574163068 +0000 UTC m=+0.093640404 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.220499649 +0000 UTC m=+0.056139994 container create 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:06:52 np0005481065 systemd[1]: Started libpod-conmon-9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941.scope.
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.193013954 +0000 UTC m=+0.028654379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:06:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.331018864 +0000 UTC m=+0.166659229 container init 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.337725395 +0000 UTC m=+0.173365740 container start 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.340913566 +0000 UTC m=+0.176553911 container attach 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 05:06:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:06:52 np0005481065 busy_kilby[355777]: 167 167
Oct 11 05:06:52 np0005481065 systemd[1]: libpod-9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941.scope: Deactivated successfully.
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.348617576 +0000 UTC m=+0.184257951 container died 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:06:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d85bfd72ca7cd1bc68309e8b2d2eaeedcfc89cc59a9c05ab40ce3902dff2fd68-merged.mount: Deactivated successfully.
Oct 11 05:06:52 np0005481065 podman[355761]: 2025-10-11 09:06:52.399460278 +0000 UTC m=+0.235100673 container remove 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:06:52 np0005481065 systemd[1]: libpod-conmon-9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941.scope: Deactivated successfully.
Oct 11 05:06:52 np0005481065 podman[355801]: 2025-10-11 09:06:52.653494989 +0000 UTC m=+0.053220620 container create 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:06:52 np0005481065 systemd[1]: Started libpod-conmon-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope.
Oct 11 05:06:52 np0005481065 podman[355801]: 2025-10-11 09:06:52.624316546 +0000 UTC m=+0.024042227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:06:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:52 np0005481065 podman[355801]: 2025-10-11 09:06:52.757749966 +0000 UTC m=+0.157475657 container init 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:06:52 np0005481065 podman[355801]: 2025-10-11 09:06:52.772730473 +0000 UTC m=+0.172456064 container start 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:06:52 np0005481065 podman[355801]: 2025-10-11 09:06:52.776998965 +0000 UTC m=+0.176724596 container attach 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:06:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:53 np0005481065 zen_joliot[355817]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:06:53 np0005481065 zen_joliot[355817]: --> relative data size: 1.0
Oct 11 05:06:53 np0005481065 zen_joliot[355817]: --> All data devices are unavailable
Oct 11 05:06:54 np0005481065 systemd[1]: libpod-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope: Deactivated successfully.
Oct 11 05:06:54 np0005481065 systemd[1]: libpod-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope: Consumed 1.166s CPU time.
Oct 11 05:06:54 np0005481065 podman[355846]: 2025-10-11 09:06:54.061906514 +0000 UTC m=+0.033224529 container died 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:06:54 np0005481065 nova_compute[260935]: 2025-10-11 09:06:54.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4-merged.mount: Deactivated successfully.
Oct 11 05:06:54 np0005481065 podman[355846]: 2025-10-11 09:06:54.132097888 +0000 UTC m=+0.103415863 container remove 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:06:54 np0005481065 systemd[1]: libpod-conmon-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope: Deactivated successfully.
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:06:54
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control']
Oct 11 05:06:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.058424022 +0000 UTC m=+0.069117104 container create 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.033305515 +0000 UTC m=+0.043998687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:06:55 np0005481065 systemd[1]: Started libpod-conmon-21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194.scope.
Oct 11 05:06:55 np0005481065 nova_compute[260935]: 2025-10-11 09:06:55.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.206339825 +0000 UTC m=+0.217032927 container init 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.214664643 +0000 UTC m=+0.225357725 container start 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.219018497 +0000 UTC m=+0.229711649 container attach 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 05:06:55 np0005481065 youthful_carson[356041]: 167 167
Oct 11 05:06:55 np0005481065 systemd[1]: libpod-21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194.scope: Deactivated successfully.
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.222773894 +0000 UTC m=+0.233466996 container died 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:06:55 np0005481065 podman[356017]: 2025-10-11 09:06:55.250735493 +0000 UTC m=+0.148726067 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible)
Oct 11 05:06:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-29d43d04394a56214b59efdcadc71fd935e126fd919b889ad5907a0dad388df1-merged.mount: Deactivated successfully.
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:06:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:06:55 np0005481065 podman[356020]: 2025-10-11 09:06:55.270126066 +0000 UTC m=+0.158099684 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:06:55 np0005481065 podman[356003]: 2025-10-11 09:06:55.280934165 +0000 UTC m=+0.291627277 container remove 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:06:55 np0005481065 systemd[1]: libpod-conmon-21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194.scope: Deactivated successfully.
Oct 11 05:06:55 np0005481065 podman[356085]: 2025-10-11 09:06:55.527421021 +0000 UTC m=+0.042843244 container create 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:06:55 np0005481065 systemd[1]: Started libpod-conmon-027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46.scope.
Oct 11 05:06:55 np0005481065 podman[356085]: 2025-10-11 09:06:55.507150452 +0000 UTC m=+0.022572665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:06:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:55 np0005481065 podman[356085]: 2025-10-11 09:06:55.664299728 +0000 UTC m=+0.179722021 container init 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:06:55 np0005481065 podman[356085]: 2025-10-11 09:06:55.672634036 +0000 UTC m=+0.188056259 container start 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:06:55 np0005481065 podman[356085]: 2025-10-11 09:06:55.677990169 +0000 UTC m=+0.193412392 container attach 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:06:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]: {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:    "0": [
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:        {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "devices": [
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "/dev/loop3"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            ],
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_name": "ceph_lv0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_size": "21470642176",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "name": "ceph_lv0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "tags": {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cluster_name": "ceph",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.crush_device_class": "",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.encrypted": "0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osd_id": "0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.type": "block",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.vdo": "0"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            },
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "type": "block",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "vg_name": "ceph_vg0"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:        }
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:    ],
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:    "1": [
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:        {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "devices": [
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "/dev/loop4"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            ],
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_name": "ceph_lv1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_size": "21470642176",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "name": "ceph_lv1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "tags": {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cluster_name": "ceph",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.crush_device_class": "",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.encrypted": "0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osd_id": "1",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.type": "block",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.vdo": "0"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            },
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "type": "block",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "vg_name": "ceph_vg1"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:        }
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:    ],
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:    "2": [
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:        {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "devices": [
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "/dev/loop5"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            ],
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_name": "ceph_lv2",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_size": "21470642176",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "name": "ceph_lv2",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "tags": {
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.cluster_name": "ceph",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.crush_device_class": "",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.encrypted": "0",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osd_id": "2",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.type": "block",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:                "ceph.vdo": "0"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            },
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "type": "block",
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:            "vg_name": "ceph_vg2"
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:        }
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]:    ]
Oct 11 05:06:56 np0005481065 elastic_bassi[356102]: }
Oct 11 05:06:56 np0005481065 systemd[1]: libpod-027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46.scope: Deactivated successfully.
Oct 11 05:06:56 np0005481065 podman[356085]: 2025-10-11 09:06:56.427147716 +0000 UTC m=+0.942569979 container died 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:06:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097-merged.mount: Deactivated successfully.
Oct 11 05:06:56 np0005481065 podman[356085]: 2025-10-11 09:06:56.49665157 +0000 UTC m=+1.012073773 container remove 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:06:56 np0005481065 systemd[1]: libpod-conmon-027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46.scope: Deactivated successfully.
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.342370412 +0000 UTC m=+0.061954879 container create aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:06:57 np0005481065 systemd[1]: Started libpod-conmon-aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5.scope.
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.312929052 +0000 UTC m=+0.032513569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:06:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.454463592 +0000 UTC m=+0.174048119 container init aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.461227725 +0000 UTC m=+0.180812162 container start aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.465910758 +0000 UTC m=+0.185495215 container attach aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:06:57 np0005481065 optimistic_goodall[356282]: 167 167
Oct 11 05:06:57 np0005481065 systemd[1]: libpod-aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5.scope: Deactivated successfully.
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.469943583 +0000 UTC m=+0.189528040 container died aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:06:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-46644da71927bc24422585f4cbe36740bbf08fd8c91856bbee0275886165c180-merged.mount: Deactivated successfully.
Oct 11 05:06:57 np0005481065 podman[356266]: 2025-10-11 09:06:57.522350239 +0000 UTC m=+0.241934666 container remove aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:06:57 np0005481065 systemd[1]: libpod-conmon-aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5.scope: Deactivated successfully.
Oct 11 05:06:57 np0005481065 podman[356306]: 2025-10-11 09:06:57.810709821 +0000 UTC m=+0.065439059 container create a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:06:57 np0005481065 systemd[1]: Started libpod-conmon-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope.
Oct 11 05:06:57 np0005481065 podman[356306]: 2025-10-11 09:06:57.779434809 +0000 UTC m=+0.034164107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:06:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:06:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:06:57 np0005481065 podman[356306]: 2025-10-11 09:06:57.911403246 +0000 UTC m=+0.166132544 container init a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:06:57 np0005481065 podman[356306]: 2025-10-11 09:06:57.925852478 +0000 UTC m=+0.180581736 container start a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:06:57 np0005481065 podman[356306]: 2025-10-11 09:06:57.932525639 +0000 UTC m=+0.187254937 container attach a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:06:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:06:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]: {
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "osd_id": 2,
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "type": "bluestore"
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:    },
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "osd_id": 0,
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "type": "bluestore"
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:    },
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "osd_id": 1,
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:        "type": "bluestore"
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]:    }
Oct 11 05:06:58 np0005481065 interesting_bhaskara[356323]: }
Oct 11 05:06:58 np0005481065 systemd[1]: libpod-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope: Deactivated successfully.
Oct 11 05:06:58 np0005481065 systemd[1]: libpod-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope: Consumed 1.053s CPU time.
Oct 11 05:06:58 np0005481065 podman[356306]: 2025-10-11 09:06:58.984101729 +0000 UTC m=+1.238830987 container died a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:06:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0-merged.mount: Deactivated successfully.
Oct 11 05:06:59 np0005481065 podman[356306]: 2025-10-11 09:06:59.050979898 +0000 UTC m=+1.305709126 container remove a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:06:59 np0005481065 systemd[1]: libpod-conmon-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope: Deactivated successfully.
Oct 11 05:06:59 np0005481065 nova_compute[260935]: 2025-10-11 09:06:59.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:06:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:06:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:06:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:06:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:06:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 83d8e14b-df29-42ec-9d9c-ae7cd4202c97 does not exist
Oct 11 05:06:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0fc927ec-5bf5-44b4-a109-213d3fa10367 does not exist
Oct 11 05:06:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:06:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:07:00 np0005481065 nova_compute[260935]: 2025-10-11 09:07:00.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:07:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.552 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.552 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.590 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.716 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.717 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.729 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.730 2 INFO nova.compute.claims [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:07:02 np0005481065 nova_compute[260935]: 2025-10-11 09:07:02.981 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218263712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.457 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.465 2 DEBUG nova.compute.provider_tree [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.560 2 DEBUG nova.scheduler.client.report [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.624 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.626 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.732 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.733 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.798 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:07:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:03 np0005481065 nova_compute[260935]: 2025-10-11 09:07:03.874 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.096 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.099 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.100 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Creating image(s)
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.133 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.173 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.207 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.212 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.267 2 DEBUG nova.policy [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f52fcc072b5843a197064a063d4a5d30', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c221c92f100b42fbb2581f0c7035540b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.320 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.321 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.322 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.323 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.348 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.353 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.718 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.804 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] resizing rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:07:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.942 2 DEBUG nova.objects.instance [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'migration_context' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.991 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.992 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Ensure instance console log exists: /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.993 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.993 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:04 np0005481065 nova_compute[260935]: 2025-10-11 09:07:04.994 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:07:05 np0005481065 nova_compute[260935]: 2025-10-11 09:07:05.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:05 np0005481065 nova_compute[260935]: 2025-10-11 09:07:05.246 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Successfully created port: 7f81893d-380a-42b4-88a7-76a98c30a38d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.003 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.004 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.074 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.207 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.207 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.216 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.216 2 INFO nova.compute.claims [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:07:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.416 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Successfully updated port: 7f81893d-380a-42b4-88a7-76a98c30a38d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.454 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.454 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquired lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.455 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.513 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.514 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.581 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.636 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.658 2 DEBUG nova.compute.manager [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-changed-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.659 2 DEBUG nova.compute.manager [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Refreshing instance network info cache due to event network-changed-7f81893d-380a-42b4-88a7-76a98c30a38d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.660 2 DEBUG oslo_concurrency.lockutils [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.767 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:07:06 np0005481065 nova_compute[260935]: 2025-10-11 09:07:06.842 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679868914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.044 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.051 2 DEBUG nova.compute.provider_tree [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.081 2 DEBUG nova.scheduler.client.report [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.147 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.148 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.150 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.158 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.158 2 INFO nova.compute.claims [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.414 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.415 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.522 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.585 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.744 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.793 2 DEBUG nova.policy [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f52fcc072b5843a197064a063d4a5d30', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c221c92f100b42fbb2581f0c7035540b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.801 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.804 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.806 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating image(s)
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.841 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.869 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.905 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:07 np0005481065 nova_compute[260935]: 2025-10-11 09:07:07.909 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.003 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.004 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.005 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.005 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.022 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.025 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079266758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.182 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.191 2 DEBUG nova.compute.provider_tree [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.230 2 DEBUG nova.scheduler.client.report [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.274 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.276 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.347 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.348 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:07:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 420 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.359 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.407 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.456 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.466 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] resizing rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.580 2 DEBUG nova.objects.instance [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'migration_context' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.658 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.660 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.660 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating image(s)#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.684 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.711 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.738 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.742 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.790 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.791 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Ensure instance console log exists: /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.791 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.792 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.792 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.794 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updating instance_info_cache with network_info: [{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.798 2 DEBUG nova.policy [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b8d9d5ab01d48ae81a09f922875ea3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21f163e616ee4917a580701d466f7dc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:07:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.841 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.842 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.843 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.843 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.866 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.870 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:08 np0005481065 nova_compute[260935]: 2025-10-11 09:07:08.950 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Successfully created port: 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.068 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Releasing lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.069 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance network_info: |[{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.071 2 DEBUG nova.network.neutron [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Refreshing network info cache for port 7f81893d-380a-42b4-88a7-76a98c30a38d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.078 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start _get_guest_xml network_info=[{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.085 2 WARNING nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.097 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.098 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.111 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.113 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.113 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.114 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.115 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.115 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.116 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.116 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.117 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.117 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.118 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.118 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.119 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.119 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.125 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.191 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.260 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] resizing rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.364 2 DEBUG nova.objects.instance [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.407 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.407 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Ensure instance console log exists: /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.408 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.408 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.408 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299898137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.601 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.620 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:09 np0005481065 nova_compute[260935]: 2025-10-11 09:07:09.626 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734158486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.060 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.064 2 DEBUG nova.virt.libvirt.vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-653793177',display_name='tempest-ServerRescueNegativeTestJSON-server-653793177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-653793177',id=96,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-id4jerug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:03Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=6520fc43-79ed-4060-85bb-dcdff5f5c101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.065 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.066 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.069 2 DEBUG nova.objects.instance [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.113 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <uuid>6520fc43-79ed-4060-85bb-dcdff5f5c101</uuid>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <name>instance-00000060</name>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-653793177</nova:name>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:07:09</nova:creationTime>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:user uuid="f52fcc072b5843a197064a063d4a5d30">tempest-ServerRescueNegativeTestJSON-1667632067-project-member</nova:user>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:project uuid="c221c92f100b42fbb2581f0c7035540b">tempest-ServerRescueNegativeTestJSON-1667632067</nova:project>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <nova:port uuid="7f81893d-380a-42b4-88a7-76a98c30a38d">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <entry name="serial">6520fc43-79ed-4060-85bb-dcdff5f5c101</entry>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <entry name="uuid">6520fc43-79ed-4060-85bb-dcdff5f5c101</entry>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/6520fc43-79ed-4060-85bb-dcdff5f5c101_disk">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:61:f1:b7"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <target dev="tap7f81893d-38"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/console.log" append="off"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:07:10 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:07:10 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:07:10 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:07:10 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.114 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Preparing to wait for external event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.115 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.115 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.116 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.117 2 DEBUG nova.virt.libvirt.vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-653793177',display_name='tempest-ServerRescueNegativeTestJSON-server-653793177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-653793177',id=96,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-id4jerug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:03Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=6520fc43-79ed-4060-85bb-dcdff5f5c101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.118 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.119 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.120 2 DEBUG os_vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f81893d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f81893d-38, col_values=(('external_ids', {'iface-id': '7f81893d-380a-42b4-88a7-76a98c30a38d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:f1:b7', 'vm-uuid': '6520fc43-79ed-4060-85bb-dcdff5f5c101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:10 np0005481065 NetworkManager[44960]: <info>  [1760173630.1677] manager: (tap7f81893d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.180 2 INFO os_vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38')#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.301 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.301 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.301 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No VIF found with MAC fa:16:3e:61:f1:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.302 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Using config drive#033[00m
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.333 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 420 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:07:10 np0005481065 nova_compute[260935]: 2025-10-11 09:07:10.914 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Successfully created port: a611854c-0a61-41b8-91ce-0c0f893aa54c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.426 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Creating config drive at /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.435 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplt37ypx8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.604 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplt37ypx8" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.648 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.653 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.711 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Successfully updated port: 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.742 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.742 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquired lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.742 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.870 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.871 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deleting local config drive /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config because it was imported into RBD.#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.946 2 DEBUG nova.compute.manager [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-changed-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.947 2 DEBUG nova.compute.manager [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Refreshing instance network info cache due to event network-changed-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.947 2 DEBUG oslo_concurrency.lockutils [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:11 np0005481065 kernel: tap7f81893d-38: entered promiscuous mode
Oct 11 05:07:11 np0005481065 NetworkManager[44960]: <info>  [1760173631.9516] manager: (tap7f81893d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:11Z|00861|binding|INFO|Claiming lport 7f81893d-380a-42b4-88a7-76a98c30a38d for this chassis.
Oct 11 05:07:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:11Z|00862|binding|INFO|7f81893d-380a-42b4-88a7-76a98c30a38d: Claiming fa:16:3e:61:f1:b7 10.100.0.6
Oct 11 05:07:11 np0005481065 nova_compute[260935]: 2025-10-11 09:07:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:11.988 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:f1:b7 10.100.0.6'], port_security=['fa:16:3e:61:f1:b7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6520fc43-79ed-4060-85bb-dcdff5f5c101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7f81893d-380a-42b4-88a7-76a98c30a38d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:11.990 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7f81893d-380a-42b4-88a7-76a98c30a38d in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 bound to our chassis#033[00m
Oct 11 05:07:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:11.993 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8#033[00m
Oct 11 05:07:11 np0005481065 systemd-udevd[357117]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.035 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.037 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d60f87ad-7920-4886-b651-474eb322deab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.038 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76432838-b1 in ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:07:12 np0005481065 NetworkManager[44960]: <info>  [1760173632.0400] device (tap7f81893d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.040 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76432838-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.040 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c78ed11-8398-4c9a-8cb9-bc2ff0f6f427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 NetworkManager[44960]: <info>  [1760173632.0415] device (tap7f81893d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:07:12 np0005481065 systemd-machined[215705]: New machine qemu-110-instance-00000060.
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[457339be-1501-4b7a-a5e3-33d62d2c8483]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.043 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Successfully updated port: a611854c-0a61-41b8-91ce-0c0f893aa54c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.056 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8e8994-039c-4b83-b71b-a3654715ddf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:12Z|00863|binding|INFO|Setting lport 7f81893d-380a-42b4-88a7-76a98c30a38d ovn-installed in OVS
Oct 11 05:07:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:12Z|00864|binding|INFO|Setting lport 7f81893d-380a-42b4-88a7-76a98c30a38d up in Southbound
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:12 np0005481065 systemd[1]: Started Virtual Machine qemu-110-instance-00000060.
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.083 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9da250-c0ae-413d-9bac-8ea677b7a5fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.088 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.089 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.089 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.117 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c6ec29-5460-4242-8ba0-48304b1bb47e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.125 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a12a7830-a7b9-49cd-b392-aad41dfa00d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 NetworkManager[44960]: <info>  [1760173632.1258] manager: (tap76432838-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.179 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3439922c-f0e9-401d-921c-10292398a924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.183 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8778af9-6339-411d-ab59-135f4c2854c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.194 2 DEBUG nova.compute.manager [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.194 2 DEBUG nova.compute.manager [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing instance network info cache due to event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.195 2 DEBUG oslo_concurrency.lockutils [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:12 np0005481065 NetworkManager[44960]: <info>  [1760173632.2155] device (tap76432838-b0): carrier: link connected
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.223 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[44a8e5eb-d06b-4baf-8f4d-bcd847281c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.248 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[247eca88-e4c7-4f17-9868-887ef10ab394]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357151, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.271 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[298aa085-6819-42f0-a2be-9306c2f2fd61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:2fa3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547691, 'tstamp': 547691}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357152, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.295 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69e65564-17ea-4d2f-87bb-8e31161af436]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357153, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8166c2-16f8-4c4f-8876-7c68474ac0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 491 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.3 MiB/s wr, 83 op/s
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.420 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c17db1-e4e2-437f-8d7a-9163c5a57cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.421 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.421 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.422 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:12 np0005481065 NetworkManager[44960]: <info>  [1760173632.4250] manager: (tap76432838-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 11 05:07:12 np0005481065 kernel: tap76432838-b0: entered promiscuous mode
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.430 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:12Z|00865|binding|INFO|Releasing lport b6ca7d68-6b07-4260-8964-929cc77a92b2 from this chassis (sb_readonly=0)
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.457 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76432838-bd4d-4fb8-8e44-6e230b5868b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76432838-bd4d-4fb8-8e44-6e230b5868b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[033374da-61d9-4be3-85b9-07fd99e1e097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.459 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/76432838-bd4d-4fb8-8e44-6e230b5868b8.pid.haproxy
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:07:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.461 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'env', 'PROCESS_TAG=haproxy-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76432838-bd4d-4fb8-8e44-6e230b5868b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.486 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.870 2 DEBUG nova.network.neutron [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updated VIF entry in instance network info cache for port 7f81893d-380a-42b4-88a7-76a98c30a38d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.871 2 DEBUG nova.network.neutron [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updating instance_info_cache with network_info: [{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:12 np0005481065 nova_compute[260935]: 2025-10-11 09:07:12.903 2 DEBUG oslo_concurrency.lockutils [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:12 np0005481065 podman[357227]: 2025-10-11 09:07:12.925531704 +0000 UTC m=+0.073085057 container create e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:07:12 np0005481065 podman[357227]: 2025-10-11 09:07:12.895092496 +0000 UTC m=+0.042645919 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:07:12 np0005481065 systemd[1]: Started libpod-conmon-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819.scope.
Oct 11 05:07:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:07:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42249320cbd9f6be1af07a22f94351bbe874386f708642b1d430373ffaae9fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:07:13 np0005481065 podman[357227]: 2025-10-11 09:07:13.052479938 +0000 UTC m=+0.200033371 container init e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:07:13 np0005481065 podman[357227]: 2025-10-11 09:07:13.062161865 +0000 UTC m=+0.209715238 container start e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.090 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173633.0890605, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.090 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Started (Lifecycle Event)#033[00m
Oct 11 05:07:13 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : New worker (357249) forked
Oct 11 05:07:13 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : Loading success.
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.129 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.135 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173633.0894687, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.135 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.172 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.177 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.230 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.293 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Releasing lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.294 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance network_info: |[{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.294 2 DEBUG oslo_concurrency.lockutils [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.295 2 DEBUG nova.network.neutron [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Refreshing network info cache for port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.303 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start _get_guest_xml network_info=[{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.309 2 WARNING nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.317 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.318 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.323 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.323 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.324 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.324 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.325 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.325 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.326 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.326 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.326 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.327 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.327 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.327 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.328 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.328 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.333 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.716 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591828787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.842 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.864 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.867 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.909 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.909 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance network_info: |[{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.910 2 DEBUG oslo_concurrency.lockutils [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.910 2 DEBUG nova.network.neutron [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.913 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start _get_guest_xml network_info=[{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.917 2 WARNING nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.922 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.922 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.925 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.928 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.928 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.928 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:07:13 np0005481065 nova_compute[260935]: 2025-10-11 09:07:13.930 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.110 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.111 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.111 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.112 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.112 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Processing event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.113 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.113 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.113 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.114 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.115 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] No waiting events found dispatching network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.116 2 WARNING nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received unexpected event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.118 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.122 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173634.1212041, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.122 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.123 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.126 2 INFO nova.virt.libvirt.driver [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance spawned successfully.#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.127 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.169 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.172 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.181 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.181 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.182 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.182 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.182 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.183 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.301 2 INFO nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 10.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.302 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364657527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.352 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 513 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.354 2 DEBUG nova.virt.libvirt.vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:07Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.355 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.356 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.358 2 DEBUG nova.objects.instance [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'pci_devices' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377748573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.376 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.413 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.418 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.497 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <uuid>f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</uuid>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <name>instance-00000061</name>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1694931715</nova:name>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:07:13</nova:creationTime>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:user uuid="f52fcc072b5843a197064a063d4a5d30">tempest-ServerRescueNegativeTestJSON-1667632067-project-member</nova:user>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:project uuid="c221c92f100b42fbb2581f0c7035540b">tempest-ServerRescueNegativeTestJSON-1667632067</nova:project>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:port uuid="34749eb0-e6d2-4c4b-a2b8-9fc709c1caca">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="serial">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="uuid">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:88:5f:9b"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <target dev="tap34749eb0-e6"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/console.log" append="off"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:07:14 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:07:14 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.498 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Preparing to wait for external event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.499 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.499 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.500 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.501 2 DEBUG nova.virt.libvirt.vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:07Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.502 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.503 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.504 2 DEBUG os_vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.518 2 INFO nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 11.85 seconds to build instance.#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34749eb0-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34749eb0-e6, col_values=(('external_ids', {'iface-id': '34749eb0-e6d2-4c4b-a2b8-9fc709c1caca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:5f:9b', 'vm-uuid': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:14 np0005481065 NetworkManager[44960]: <info>  [1760173634.5237] manager: (tap34749eb0-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.534 2 INFO os_vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6')#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.596 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.700 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.701 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.701 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No VIF found with MAC fa:16:3e:88:5f:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.701 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Using config drive#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.726 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464163947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.944 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.947 2 DEBUG nova.virt.libvirt.vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:08Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.948 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.951 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.954 2 DEBUG nova.objects.instance [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:14 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.992 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <uuid>0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</uuid>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <name>instance-00000062</name>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersNegativeTestJSON-server-581429551</nova:name>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:07:13</nova:creationTime>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:user uuid="6b8d9d5ab01d48ae81a09f922875ea3e">tempest-ServersNegativeTestJSON-414353187-project-member</nova:user>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:project uuid="21f163e616ee4917a580701d466f7dc9">tempest-ServersNegativeTestJSON-414353187</nova:project>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        <nova:port uuid="a611854c-0a61-41b8-91ce-0c0f893aa54c">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="serial">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="uuid">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:07:14 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:07:14 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:9d:da:b0"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <target dev="tapa611854c-0a"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log" append="off"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:07:15 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:07:15 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:07:15 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:07:15 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.993 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Preparing to wait for external event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.994 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.995 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.996 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.998 2 DEBUG nova.virt.libvirt.vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:08Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:14.999 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.002 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.003 2 DEBUG os_vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa611854c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa611854c-0a, col_values=(('external_ids', {'iface-id': 'a611854c-0a61-41b8-91ce-0c0f893aa54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:da:b0', 'vm-uuid': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 NetworkManager[44960]: <info>  [1760173635.0121] manager: (tapa611854c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.021 2 INFO os_vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.101 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.102 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.102 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No VIF found with MAC fa:16:3e:9d:da:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.103 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Using config drive#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.134 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.168 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating config drive at /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.183 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo_5ys1n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.202 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.347 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo_5ys1n" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.378 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.383 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.445 2 DEBUG nova.network.neutron [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updated VIF entry in instance network info cache for port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.446 2 DEBUG nova.network.neutron [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.471 2 DEBUG oslo_concurrency.lockutils [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.570 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.571 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deleting local config drive /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config because it was imported into RBD.#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.606 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating config drive at /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.617 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9e8o79bs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:15 np0005481065 NetworkManager[44960]: <info>  [1760173635.6308] manager: (tap34749eb0-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Oct 11 05:07:15 np0005481065 kernel: tap34749eb0-e6: entered promiscuous mode
Oct 11 05:07:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:15Z|00866|binding|INFO|Claiming lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for this chassis.
Oct 11 05:07:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:15Z|00867|binding|INFO|34749eb0-e6d2-4c4b-a2b8-9fc709c1caca: Claiming fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:15Z|00868|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca ovn-installed in OVS
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.681 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.682 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 bound to our chassis#033[00m
Oct 11 05:07:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:15Z|00869|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca up in Southbound
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.683 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8#033[00m
Oct 11 05:07:15 np0005481065 systemd-machined[215705]: New machine qemu-111-instance-00000061.
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1b746c-bc0a-444d-a2b8-8c4cea687bec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:15 np0005481065 systemd[1]: Started Virtual Machine qemu-111-instance-00000061.
Oct 11 05:07:15 np0005481065 systemd-udevd[357501]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:07:15 np0005481065 NetworkManager[44960]: <info>  [1760173635.7470] device (tap34749eb0-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:07:15 np0005481065 NetworkManager[44960]: <info>  [1760173635.7484] device (tap34749eb0-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8cdcbf-146e-4e2e-98b5-7f9e81490bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.756 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2884b045-1b67-4230-95ca-f238338b9ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:15 np0005481065 podman[357473]: 2025-10-11 09:07:15.757657603 +0000 UTC m=+0.090390482 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.781 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9e8o79bs" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.797 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[674d6e1d-cc25-43a1-84d9-eeaf1be98813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.808 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.813 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7486d2bf-ca6a-46dc-a101-fa9c510c81f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357527, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.842 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c16d672d-98ea-4c6e-b1b6-080b3225222d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357531, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357531, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.843 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.851 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.851 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.851 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.852 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.974 2 DEBUG nova.network.neutron [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updated VIF entry in instance network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.975 2 DEBUG nova.network.neutron [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.984 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:15 np0005481065 nova_compute[260935]: 2025-10-11 09:07:15.984 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting local config drive /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config because it was imported into RBD.#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.012 2 DEBUG oslo_concurrency.lockutils [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:16 np0005481065 NetworkManager[44960]: <info>  [1760173636.0494] manager: (tapa611854c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Oct 11 05:07:16 np0005481065 kernel: tapa611854c-0a: entered promiscuous mode
Oct 11 05:07:16 np0005481065 NetworkManager[44960]: <info>  [1760173636.0627] device (tapa611854c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:07:16 np0005481065 NetworkManager[44960]: <info>  [1760173636.0633] device (tapa611854c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:07:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:16Z|00870|binding|INFO|Claiming lport a611854c-0a61-41b8-91ce-0c0f893aa54c for this chassis.
Oct 11 05:07:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:16Z|00871|binding|INFO|a611854c-0a61-41b8-91ce-0c0f893aa54c: Claiming fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:16 np0005481065 systemd-machined[215705]: New machine qemu-112-instance-00000062.
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.140 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.141 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.143 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec#033[00m
Oct 11 05:07:16 np0005481065 systemd[1]: Started Virtual Machine qemu-112-instance-00000062.
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.147 2 DEBUG nova.compute.manager [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.148 2 DEBUG oslo_concurrency.lockutils [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.148 2 DEBUG oslo_concurrency.lockutils [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.148 2 DEBUG oslo_concurrency.lockutils [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.149 2 DEBUG nova.compute.manager [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Processing event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.154 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4436517f-1ec2-4ec4-835b-d7fb2f9f6660]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.155 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e82eeb-71 in ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.161 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e82eeb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.162 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43701903-1a24-424d-bd63-e80e5b715833]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.165 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[063dd97f-2b49-41ce-886a-8dfca189be60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.183 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7a3ef5-2711-4549-9672-10745536cf9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:16Z|00872|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c ovn-installed in OVS
Oct 11 05:07:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:16Z|00873|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c up in Southbound
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.197 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb830089-cf0d-4047-b2e2-d2295917ef6e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.240 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4ade678d-759c-43d7-83de-496e98f72302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 NetworkManager[44960]: <info>  [1760173636.2471] manager: (tap14e82eeb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.245 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56b800d9-793c-4c12-ac47-5fce38541ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.286 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[95cd6fd1-74f3-41f1-aa76-d49333489313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.290 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc5b6ad-3fee-422b-a58d-340afc5c350e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 NetworkManager[44960]: <info>  [1760173636.3161] device (tap14e82eeb-70): carrier: link connected
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.322 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3281a3f4-4bc5-4088-91f9-6b68ad219ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a17facc-c17d-46d4-93da-2e93bd2d6652]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 18729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357637, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 513 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[909902b6-32c9-48a5-9b0c-44336a548ce2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:56f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548101, 'tstamp': 548101}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357638, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.380 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69b57796-f135-44c0-bbb0-a275081b8c23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 18729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357639, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47b59b35-02b9-4a6b-98ed-b033f5ee2607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.470 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[210a8ee3-08b0-453d-a035-e0042cc4a012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.472 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.472 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:16 np0005481065 NetworkManager[44960]: <info>  [1760173636.4746] manager: (tap14e82eeb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct 11 05:07:16 np0005481065 kernel: tap14e82eeb-70: entered promiscuous mode
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:16Z|00874|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.498 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.499 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0429739d-3d79-4d03-82c8-2b84232c3c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.500 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:07:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.501 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'env', 'PROCESS_TAG=haproxy-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e82eeb-74e2-4de3-9047-74da777fe1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.714 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173636.7137704, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.714 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Started (Lifecycle Event)#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.717 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.720 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.724 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance spawned successfully.#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.724 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG nova.compute.manager [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG oslo_concurrency.lockutils [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG oslo_concurrency.lockutils [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG oslo_concurrency.lockutils [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.759 2 DEBUG nova.compute.manager [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Processing event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.775 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.778 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.785 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.785 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.786 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.786 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.787 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.787 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.826 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.826 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173636.7150145, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.826 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.927 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.932 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173636.7192333, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.932 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.969 2 INFO nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 9.17 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.970 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:16 np0005481065 podman[357670]: 2025-10-11 09:07:16.97799207 +0000 UTC m=+0.055576628 container create 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:07:16 np0005481065 nova_compute[260935]: 2025-10-11 09:07:16.995 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:17 np0005481065 nova_compute[260935]: 2025-10-11 09:07:17.000 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:17 np0005481065 systemd[1]: Started libpod-conmon-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748.scope.
Oct 11 05:07:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:07:17 np0005481065 podman[357670]: 2025-10-11 09:07:16.950097333 +0000 UTC m=+0.027681921 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:07:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b66f7870015e0b115ddde4088213d8c930e4e0ddec3540ecd817899022603/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:07:17 np0005481065 podman[357670]: 2025-10-11 09:07:17.069336517 +0000 UTC m=+0.146921085 container init 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:07:17 np0005481065 podman[357670]: 2025-10-11 09:07:17.075126343 +0000 UTC m=+0.152710901 container start 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 11 05:07:17 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : New worker (357691) forked
Oct 11 05:07:17 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : Loading success.
Oct 11 05:07:17 np0005481065 nova_compute[260935]: 2025-10-11 09:07:17.372 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:17 np0005481065 nova_compute[260935]: 2025-10-11 09:07:17.480 2 INFO nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 11.29 seconds to build instance.#033[00m
Oct 11 05:07:17 np0005481065 nova_compute[260935]: 2025-10-11 09:07:17.543 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.329 2 DEBUG nova.compute.manager [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.329 2 DEBUG oslo_concurrency.lockutils [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.331 2 DEBUG oslo_concurrency.lockutils [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.332 2 DEBUG oslo_concurrency.lockutils [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.332 2 DEBUG nova.compute.manager [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.332 2 WARNING nova.compute.manager [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state None.#033[00m
Oct 11 05:07:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 175 op/s
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.462 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173638.4623191, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.463 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.465 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.472 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.476 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance spawned successfully.#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.476 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.553 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.558 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.614 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.615 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.616 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.616 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.617 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.618 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.679 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.679 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173638.4625487, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.680 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.770 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.776 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173638.4676518, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.776 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:07:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.933 2 INFO nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 10.27 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.933 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.946 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:18 np0005481065 nova_compute[260935]: 2025-10-11 09:07:18.951 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.062 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.148 2 INFO nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 12.33 seconds to build instance.#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.176 2 DEBUG nova.compute.manager [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.176 2 DEBUG oslo_concurrency.lockutils [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.177 2 DEBUG oslo_concurrency.lockutils [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.177 2 DEBUG oslo_concurrency.lockutils [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.178 2 DEBUG nova.compute.manager [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.178 2 WARNING nova.compute.manager [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state None.#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.241 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.455 2 INFO nova.compute.manager [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Rescuing#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.456 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.456 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquired lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:19 np0005481065 nova_compute[260935]: 2025-10-11 09:07:19.456 2 DEBUG nova.network.neutron [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:07:20 np0005481065 nova_compute[260935]: 2025-10-11 09:07:20.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Oct 11 05:07:21 np0005481065 nova_compute[260935]: 2025-10-11 09:07:21.663 2 DEBUG nova.network.neutron [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:21 np0005481065 nova_compute[260935]: 2025-10-11 09:07:21.757 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Releasing lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:21 np0005481065 podman[357742]: 2025-10-11 09:07:21.869792496 +0000 UTC m=+0.152919167 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:07:22 np0005481065 nova_compute[260935]: 2025-10-11 09:07:22.196 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 05:07:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 237 op/s
Oct 11 05:07:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:22.605 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:22.607 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:07:22 np0005481065 nova_compute[260935]: 2025-10-11 09:07:22.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:24 np0005481065 nova_compute[260935]: 2025-10-11 09:07:24.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.1 MiB/s wr, 218 op/s
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:07:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:07:25 np0005481065 nova_compute[260935]: 2025-10-11 09:07:25.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:25 np0005481065 podman[357759]: 2025-10-11 09:07:25.806144086 +0000 UTC m=+0.104071852 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 05:07:25 np0005481065 podman[357760]: 2025-10-11 09:07:25.84797223 +0000 UTC m=+0.145685480 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.068 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.069 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.172 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:07:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 213 op/s
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.421 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.422 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.433 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.433 2 INFO nova.compute.claims [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:07:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:26.611 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:26 np0005481065 nova_compute[260935]: 2025-10-11 09:07:26.768 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560192760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:27Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:f1:b7 10.100.0.6
Oct 11 05:07:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:27Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:f1:b7 10.100.0.6
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.314 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.325 2 DEBUG nova.compute.provider_tree [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.379 2 DEBUG nova.scheduler.client.report [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.454 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.455 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.600 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.601 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.708 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:07:27 np0005481065 nova_compute[260935]: 2025-10-11 09:07:27.808 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.005 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.005 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.063 2 DEBUG nova.policy [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b8d9d5ab01d48ae81a09f922875ea3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21f163e616ee4917a580701d466f7dc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.093 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.093 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.100 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.101 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.101 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Creating image(s)#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.125 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.146 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.171 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.175 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.276 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.277 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.278 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.278 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.302 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.306 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 02489367-ca19-4428-9727-6d8ab66e1b46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 546 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 276 op/s
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.440 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.441 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.441 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.620 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 02489367-ca19-4428-9727-6d8ab66e1b46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.723 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] resizing rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:07:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.859 2 DEBUG nova.objects.instance [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 02489367-ca19-4428-9727-6d8ab66e1b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.902 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.903 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Ensure instance console log exists: /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.904 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.905 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.906 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:28 np0005481065 nova_compute[260935]: 2025-10-11 09:07:28.908 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Successfully created port: 91e3c288-76e0-4c61-9a40-fde0087f647b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:07:29 np0005481065 nova_compute[260935]: 2025-10-11 09:07:29.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.283 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.332 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.333 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.336 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:07:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 546 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.383 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.383 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.384 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.384 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.385 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.547 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Successfully updated port: 91e3c288-76e0-4c61-9a40-fde0087f647b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.597 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.597 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.598 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.836 2 DEBUG nova.compute.manager [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-changed-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.837 2 DEBUG nova.compute.manager [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Refreshing instance network info cache due to event network-changed-91e3c288-76e0-4c61-9a40-fde0087f647b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.837 2 DEBUG oslo_concurrency.lockutils [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.852 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:07:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730568505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:30 np0005481065 nova_compute[260935]: 2025-10-11 09:07:30.992 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.174 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.175 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.181 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.181 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.186 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.187 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.192 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.193 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.197 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.197 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.197 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.200 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.200 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.204 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.204 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.489 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.490 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2567MB free_disk=59.72262954711914GB free_vcpus=1 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.490 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.491 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.640 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.641 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6520fc43-79ed-4060-85bb-dcdff5f5c101 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.644 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 02489367-ca19-4428-9727-6d8ab66e1b46 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.644 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.645 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:07:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:31Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 05:07:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:31Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 05:07:31 np0005481065 nova_compute[260935]: 2025-10-11 09:07:31.866 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.264 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 05:07:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 613 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.2 MiB/s wr, 294 op/s
Oct 11 05:07:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240840697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.420 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.427 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.439 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updating instance_info_cache with network_info: [{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.482 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.573 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.573 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance network_info: |[{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.574 2 DEBUG oslo_concurrency.lockutils [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.574 2 DEBUG nova.network.neutron [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Refreshing network info cache for port 91e3c288-76e0-4c61-9a40-fde0087f647b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.576 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start _get_guest_xml network_info=[{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.580 2 WARNING nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.584 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.584 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.587 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.590 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.590 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.592 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:07:32 np0005481065 nova_compute[260935]: 2025-10-11 09:07:32.769 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214189064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.075 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.113 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.120 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175653657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.612 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.615 2 DEBUG nova.virt.libvirt.vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-322685699',display_name='tempest-ServersNegativeTestJSON-server-322685699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-322685699',id=99,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-m7jzw6nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:27Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=02489367-ca19-4428-9727-6d8ab66e1b46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.616 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.619 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.622 2 DEBUG nova.objects.instance [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02489367-ca19-4428-9727-6d8ab66e1b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:33Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.680 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <uuid>02489367-ca19-4428-9727-6d8ab66e1b46</uuid>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <name>instance-00000063</name>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersNegativeTestJSON-server-322685699</nova:name>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:07:32</nova:creationTime>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:user uuid="6b8d9d5ab01d48ae81a09f922875ea3e">tempest-ServersNegativeTestJSON-414353187-project-member</nova:user>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:project uuid="21f163e616ee4917a580701d466f7dc9">tempest-ServersNegativeTestJSON-414353187</nova:project>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <nova:port uuid="91e3c288-76e0-4c61-9a40-fde0087f647b">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <entry name="serial">02489367-ca19-4428-9727-6d8ab66e1b46</entry>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <entry name="uuid">02489367-ca19-4428-9727-6d8ab66e1b46</entry>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/02489367-ca19-4428-9727-6d8ab66e1b46_disk">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/02489367-ca19-4428-9727-6d8ab66e1b46_disk.config">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:56:f9:d8"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <target dev="tap91e3c288-76"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/console.log" append="off"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:07:33 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:07:33 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:07:33 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:07:33 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.681 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Preparing to wait for external event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.681 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.682 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.682 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.683 2 DEBUG nova.virt.libvirt.vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-322685699',display_name='tempest-ServersNegativeTestJSON-server-322685699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-322685699',id=99,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-m7jzw6nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:27Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=02489367-ca19-4428-9727-6d8ab66e1b46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.683 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.684 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:33Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.684 2 DEBUG os_vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91e3c288-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91e3c288-76, col_values=(('external_ids', {'iface-id': '91e3c288-76e0-4c61-9a40-fde0087f647b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:f9:d8', 'vm-uuid': '02489367-ca19-4428-9727-6d8ab66e1b46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:33 np0005481065 NetworkManager[44960]: <info>  [1760173653.7107] manager: (tap91e3c288-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.721 2 INFO os_vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76')#033[00m
Oct 11 05:07:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.812 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.813 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.813 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No VIF found with MAC fa:16:3e:56:f9:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.813 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Using config drive
Oct 11 05:07:33 np0005481065 nova_compute[260935]: 2025-10-11 09:07:33.849 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 647 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.1 MiB/s wr, 229 op/s
Oct 11 05:07:34 np0005481065 kernel: tap34749eb0-e6 (unregistering): left promiscuous mode
Oct 11 05:07:34 np0005481065 NetworkManager[44960]: <info>  [1760173654.5544] device (tap34749eb0-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:07:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:34Z|00875|binding|INFO|Releasing lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca from this chassis (sb_readonly=0)
Oct 11 05:07:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:34Z|00876|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca down in Southbound
Oct 11 05:07:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:34Z|00877|binding|INFO|Removing iface tap34749eb0-e6 ovn-installed in OVS
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.585 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.588 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.590 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.608 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e00e19f-1549-4d41-a1a5-6b3f4b5d57b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:34 np0005481065 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 11 05:07:34 np0005481065 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Consumed 14.713s CPU time.
Oct 11 05:07:34 np0005481065 systemd-machined[215705]: Machine qemu-111-instance-00000061 terminated.
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.642 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[890b325a-3a44-4e54-bb1e-7be5c237e6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.646 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a576e-69b1-4388-b138-29e962f10c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.656 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Creating config drive at /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.667 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzie0oqd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.688 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f84f57c-1cd2-42d2-bd30-e38909f11404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[adfbe43c-9a1e-4680-9140-0229a7d18589]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 43408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358128, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.730 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad99086b-c8f1-4453-bba1-8823bcacf3b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358129, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358129, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.733 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.744 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:07:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.833 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzie0oqd" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.873 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:07:34 np0005481065 nova_compute[260935]: 2025-10-11 09:07:34.878 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.052 2 DEBUG nova.compute.manager [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.053 2 DEBUG oslo_concurrency.lockutils [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.054 2 DEBUG oslo_concurrency.lockutils [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.054 2 DEBUG oslo_concurrency.lockutils [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.054 2 DEBUG nova.compute.manager [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.054 2 WARNING nova.compute.manager [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state rescuing.
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.068 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.068 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deleting local config drive /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config because it was imported into RBD.
Oct 11 05:07:35 np0005481065 systemd-udevd[358120]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:07:35 np0005481065 kernel: tap91e3c288-76: entered promiscuous mode
Oct 11 05:07:35 np0005481065 NetworkManager[44960]: <info>  [1760173655.1588] manager: (tap91e3c288-76): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:35Z|00878|binding|INFO|Claiming lport 91e3c288-76e0-4c61-9a40-fde0087f647b for this chassis.
Oct 11 05:07:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:35Z|00879|binding|INFO|91e3c288-76e0-4c61-9a40-fde0087f647b: Claiming fa:16:3e:56:f9:d8 10.100.0.11
Oct 11 05:07:35 np0005481065 NetworkManager[44960]: <info>  [1760173655.1773] device (tap91e3c288-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:07:35 np0005481065 NetworkManager[44960]: <info>  [1760173655.1792] device (tap91e3c288-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.184 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.186 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.189 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:07:35 np0005481065 systemd-machined[215705]: New machine qemu-113-instance-00000063.
Oct 11 05:07:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:35Z|00880|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b ovn-installed in OVS
Oct 11 05:07:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:35Z|00881|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b up in Southbound
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f44c63c9-6f75-4c04-a443-bfaac2848ec2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:35 np0005481065 systemd[1]: Started Virtual Machine qemu-113-instance-00000063.
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.273 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8894f231-ab51-44cd-ad93-fc253bcc3fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.280 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ebcf33-095f-4073-83d4-644edd7d19f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.286 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance shutdown successfully after 13 seconds.
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.304 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance destroyed successfully.
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.305 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'numa_topology' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.334 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Attempting rescue
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.336 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.336 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0b95137e-856a-48e7-a646-8dca04214103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.344 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.345 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating image(s)
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[228d4389-1c2a-414c-8b92-563928cf68af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358209, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.384 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3892e0f5-6b3b-4973-bd16-4a68178d6799]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358225, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358225, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.397 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.401 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'trusted_certs' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.406 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.406 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.407 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.473 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.509 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.514 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.566 2 DEBUG nova.network.neutron [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updated VIF entry in instance network info cache for port 91e3c288-76e0-4c61-9a40-fde0087f647b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.567 2 DEBUG nova.network.neutron [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updating instance_info_cache with network_info: [{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.597 2 DEBUG oslo_concurrency.lockutils [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.611 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.612 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.613 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.613 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.639 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.643 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.989 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:35 np0005481065 nova_compute[260935]: 2025-10-11 09:07:35.990 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'migration_context' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.033 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.034 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start _get_guest_xml network_info=[{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "vif_mac": "fa:16:3e:88:5f:9b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.035 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'resources' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.107 2 WARNING nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.116 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.117 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.122 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.123 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.123 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.124 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.125 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.125 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.126 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.126 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.126 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.127 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.127 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.128 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.128 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.129 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.129 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'vcpu_model' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.171 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.343 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173656.3427625, 02489367-ca19-4428-9727-6d8ab66e1b46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.344 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Started (Lifecycle Event)#033[00m
Oct 11 05:07:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 647 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 8.1 MiB/s wr, 192 op/s
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.398 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.404 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173656.3429835, 02489367-ca19-4428-9727-6d8ab66e1b46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.405 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.453 2 DEBUG nova.compute.manager [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.453 2 DEBUG oslo_concurrency.lockutils [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.454 2 DEBUG oslo_concurrency.lockutils [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.455 2 DEBUG oslo_concurrency.lockutils [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.455 2 DEBUG nova.compute.manager [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Processing event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.456 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.460 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.464 2 INFO nova.virt.libvirt.driver [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance spawned successfully.#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.465 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.476 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.486 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173656.4601095, 02489367-ca19-4428-9727-6d8ab66e1b46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.486 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.562 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.570 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.570 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.571 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.572 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.574 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.575 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.585 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357456046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.662 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.663 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.709 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.729 2 INFO nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.730 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.886 2 INFO nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 10.53 seconds to build instance.#033[00m
Oct 11 05:07:36 np0005481065 nova_compute[260935]: 2025-10-11 09:07:36.943 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550403082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.170 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.171 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.221 2 DEBUG nova.compute.manager [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.222 2 DEBUG oslo_concurrency.lockutils [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.222 2 DEBUG oslo_concurrency.lockutils [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.223 2 DEBUG oslo_concurrency.lockutils [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.223 2 DEBUG nova.compute.manager [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.223 2 WARNING nova.compute.manager [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state rescuing.#033[00m
Oct 11 05:07:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:07:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449958463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.696 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.698 2 DEBUG nova.virt.libvirt.vif [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:17Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "vif_mac": "fa:16:3e:88:5f:9b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.699 2 DEBUG nova.network.os_vif_util [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "vif_mac": "fa:16:3e:88:5f:9b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.700 2 DEBUG nova.network.os_vif_util [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.702 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'pci_devices' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.746 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <uuid>f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</uuid>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <name>instance-00000061</name>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1694931715</nova:name>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:07:36</nova:creationTime>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:user uuid="f52fcc072b5843a197064a063d4a5d30">tempest-ServerRescueNegativeTestJSON-1667632067-project-member</nova:user>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:project uuid="c221c92f100b42fbb2581f0c7035540b">tempest-ServerRescueNegativeTestJSON-1667632067</nova:project>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <nova:port uuid="34749eb0-e6d2-4c4b-a2b8-9fc709c1caca">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <entry name="serial">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <entry name="uuid">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <target dev="vdb" bus="virtio"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:88:5f:9b"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <target dev="tap34749eb0-e6"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/console.log" append="off"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:07:37 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:07:37 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:07:37 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:07:37 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.753 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance destroyed successfully.#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.979 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.980 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.980 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.980 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No VIF found with MAC fa:16:3e:88:5f:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:07:37 np0005481065 nova_compute[260935]: 2025-10-11 09:07:37.981 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Using config drive#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.006 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.099 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'ec2_ids' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.210 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'keypairs' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 705 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 10 MiB/s wr, 265 op/s
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.636 2 DEBUG nova.compute.manager [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG oslo_concurrency.lockutils [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG oslo_concurrency.lockutils [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG oslo_concurrency.lockutils [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG nova.compute.manager [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] No waiting events found dispatching network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.638 2 WARNING nova.compute.manager [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received unexpected event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b for instance with vm_state active and task_state None.#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.809 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating config drive at /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.814 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq_hy6r9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.960 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq_hy6r9" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.987 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:07:38 np0005481065 nova_compute[260935]: 2025-10-11 09:07:38.991 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.029 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.030 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.030 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.031 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.031 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.033 2 INFO nova.compute.manager [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Terminating instance#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.034 2 DEBUG nova.compute.manager [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:07:39 np0005481065 kernel: tap91e3c288-76 (unregistering): left promiscuous mode
Oct 11 05:07:39 np0005481065 NetworkManager[44960]: <info>  [1760173659.0872] device (tap91e3c288-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00882|binding|INFO|Releasing lport 91e3c288-76e0-4c61-9a40-fde0087f647b from this chassis (sb_readonly=0)
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00883|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b down in Southbound
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00884|binding|INFO|Removing iface tap91e3c288-76 ovn-installed in OVS
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.136 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.139 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.142 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec#033[00m
Oct 11 05:07:39 np0005481065 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 11 05:07:39 np0005481065 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000063.scope: Consumed 3.582s CPU time.
Oct 11 05:07:39 np0005481065 systemd-machined[215705]: Machine qemu-113-instance-00000063 terminated.
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.175 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f28e77f-040a-477f-a8a9-7487fda5125b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.197 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.198 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deleting local config drive /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue because it was imported into RBD.#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.218 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[039715c1-148c-4ba9-bb9b-2b6c86571aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.222 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a104a861-5ba5-4bae-ad62-a375d62c3132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.257 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[517a96a1-bc24-4290-88b9-e0bbbfc16679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 systemd-udevd[358482]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[156e84a1-2109-4107-b09e-e33871ea97eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358497, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 NetworkManager[44960]: <info>  [1760173659.2808] manager: (tap91e3c288-76): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Oct 11 05:07:39 np0005481065 kernel: tap91e3c288-76: entered promiscuous mode
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 kernel: tap91e3c288-76 (unregistering): left promiscuous mode
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00885|binding|INFO|Claiming lport 91e3c288-76e0-4c61-9a40-fde0087f647b for this chassis.
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00886|binding|INFO|91e3c288-76e0-4c61-9a40-fde0087f647b: Claiming fa:16:3e:56:f9:d8 10.100.0.11
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.308 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a22e422-ca00-4639-ab36-11f944f730b2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358502, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358502, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.320 2 INFO nova.virt.libvirt.driver [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance destroyed successfully.#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.320 2 DEBUG nova.objects.instance [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'resources' on Instance uuid 02489367-ca19-4428-9727-6d8ab66e1b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00887|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b ovn-installed in OVS
Oct 11 05:07:39 np0005481065 NetworkManager[44960]: <info>  [1760173659.3273] manager: (tap34749eb0-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00888|if_status|INFO|Dropped 2 log messages in last 439 seconds (most recently, 439 seconds ago) due to excessive rate
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00889|if_status|INFO|Not setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b down as sb is readonly
Oct 11 05:07:39 np0005481065 kernel: tap34749eb0-e6: entered promiscuous mode
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00890|binding|INFO|Releasing lport 91e3c288-76e0-4c61-9a40-fde0087f647b from this chassis (sb_readonly=0)
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.335 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:39 np0005481065 NetworkManager[44960]: <info>  [1760173659.3510] device (tap34749eb0-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:07:39 np0005481065 NetworkManager[44960]: <info>  [1760173659.3525] device (tap34749eb0-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00891|binding|INFO|Claiming lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for this chassis.
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00892|binding|INFO|34749eb0-e6d2-4c4b-a2b8-9fc709c1caca: Claiming fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.359 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.362 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.363 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.363 2 DEBUG nova.virt.libvirt.vif [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-322685699',display_name='tempest-ServersNegativeTestJSON-server-322685699',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-322685699',id=99,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-m7jzw6nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:07:36Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=02489367-ca19-4428-9727-6d8ab66e1b46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.364 2 DEBUG nova.network.os_vif_util [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.365 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.365 2 DEBUG nova.network.os_vif_util [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.366 2 DEBUG os_vif [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.368 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91e3c288-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00893|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca ovn-installed in OVS
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.384 2 INFO os_vif [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76')#033[00m
Oct 11 05:07:39 np0005481065 systemd-machined[215705]: New machine qemu-114-instance-00000061.
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c83326d3-b833-4a92-85dc-d7026f590bcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 systemd[1]: Started Virtual Machine qemu-114-instance-00000061.
Oct 11 05:07:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:39Z|00894|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca up in Southbound
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.410 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.447 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e019f1ee-37fe-4cd0-a617-3150475ec890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.454 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[56ccd447-9a81-4097-974f-8e6fab801124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.508 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[53f2b591-665c-445c-8069-56d1c19a161b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.548 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4afb42ed-1045-41a8-a4d3-a310415da124]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 874, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 874, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358544, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab485c5-bbf7-42f0-a347-847612fdc1a2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358545, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358545, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.584 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.585 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.586 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.586 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.589 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.592 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.620 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0af62b12-b0bb-4cbd-a3cd-479b2d0d86f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.696 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c7989b-9c2d-4998-862f-a095c8a23b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.705 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c8b3e3-5bac-4a6d-bb1c-d7b4a35de931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.755 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c9537497-93ce-469e-a8c6-06bd30af4b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb3b651-7138-4a6a-b793-5ffcf157b81e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 874, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 874, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358552, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.824 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95189f4c-4c0a-4499-9477-9d8556f333c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358553, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358553, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.828 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.833 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.833 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.834 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.835 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.844 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.850 2 DEBUG nova.compute.manager [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.852 2 DEBUG oslo_concurrency.lockutils [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.852 2 DEBUG oslo_concurrency.lockutils [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.853 2 DEBUG oslo_concurrency.lockutils [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.854 2 DEBUG nova.compute.manager [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.855 2 WARNING nova.compute.manager [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state rescuing.#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.885 2 INFO nova.virt.libvirt.driver [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deleting instance files /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46_del#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.885 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6409ba5-dd58-4058-87e2-580b199f0b9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 nova_compute[260935]: 2025-10-11 09:07:39.887 2 INFO nova.virt.libvirt.driver [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deletion of /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46_del complete#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.930 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9ec12a-973b-4b5b-9242-57d308f43b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a6babe-b15d-458d-854d-06ddbcd198cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.990 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5964d806-6cb0-4c09-ba29-aad136f55bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.016 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67f71799-df17-466a-86c7-f43cedc4836f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 43408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358618, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[566ca55a-abda-4d03-9d70-1ac349a607ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358619, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358619, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.035 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.089 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.089 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.090 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.090 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.093 2 INFO nova.compute.manager [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.094 2 DEBUG oslo.service.loopingcall [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.095 2 DEBUG nova.compute.manager [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.096 2 DEBUG nova.network.neutron [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:07:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 705 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 202 op/s
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.574 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.576 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173660.5741084, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.577 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.582 2 DEBUG nova.compute.manager [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.738 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.743 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.809 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173660.5744529, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.810 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Started (Lifecycle Event)#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.859 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.863 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.894 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-unplugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.895 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.896 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.896 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.897 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] No waiting events found dispatching network-vif-unplugged-91e3c288-76e0-4c61-9a40-fde0087f647b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.897 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-unplugged-91e3c288-76e0-4c61-9a40-fde0087f647b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.897 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.899 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] No waiting events found dispatching network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.899 2 WARNING nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received unexpected event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:07:40 np0005481065 nova_compute[260935]: 2025-10-11 09:07:40.954 2 DEBUG nova.network.neutron [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:41 np0005481065 nova_compute[260935]: 2025-10-11 09:07:41.032 2 INFO nova.compute.manager [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct 11 05:07:41 np0005481065 nova_compute[260935]: 2025-10-11 09:07:41.194 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:41 np0005481065 nova_compute[260935]: 2025-10-11 09:07:41.195 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:41 np0005481065 nova_compute[260935]: 2025-10-11 09:07:41.411 2 DEBUG oslo_concurrency.processutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743563293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:41 np0005481065 nova_compute[260935]: 2025-10-11 09:07:41.963 2 DEBUG oslo_concurrency.processutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:41 np0005481065 nova_compute[260935]: 2025-10-11 09:07:41.972 2 DEBUG nova.compute.provider_tree [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.009 2 DEBUG nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.010 2 DEBUG oslo_concurrency.lockutils [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.011 2 DEBUG oslo_concurrency.lockutils [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.011 2 DEBUG oslo_concurrency.lockutils [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.011 2 DEBUG nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.012 2 WARNING nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state rescued and task_state None.#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.012 2 DEBUG nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-deleted-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.061 2 DEBUG nova.scheduler.client.report [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.142 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.197 2 INFO nova.scheduler.client.report [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Deleted allocations for instance 02489367-ca19-4428-9727-6d8ab66e1b46#033[00m
Oct 11 05:07:42 np0005481065 nova_compute[260935]: 2025-10-11 09:07:42.354 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 674 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.9 MiB/s wr, 316 op/s
Oct 11 05:07:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.176 2 INFO nova.compute.manager [None req-f7cb1d73-c488-44d7-b3cb-1394c9065d01 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Pausing#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.177 2 DEBUG nova.objects.instance [None req-f7cb1d73-c488-44d7-b3cb-1394c9065d01 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'flavor' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.252 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173664.2511065, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.252 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.255 2 DEBUG nova.compute.manager [None req-f7cb1d73-c488-44d7-b3cb-1394c9065d01 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.302 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.311 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 246 op/s
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:44 np0005481065 nova_compute[260935]: 2025-10-11 09:07:44.400 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 11 05:07:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 222 op/s
Oct 11 05:07:46 np0005481065 podman[358643]: 2025-10-11 09:07:46.801229709 +0000 UTC m=+0.094426046 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:07:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 222 op/s
Oct 11 05:07:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.111 2 WARNING nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Timeout waiting for ['network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb'] for instance with vm_state building and task_state spawning. Event states are: network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb: timed out after 300.00 seconds: eventlet.timeout.Timeout: 300 seconds#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.143 2 INFO nova.compute.manager [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Unpausing#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.144 2 DEBUG nova.objects.instance [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'flavor' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:49 np0005481065 kernel: tap074db183-86 (unregistering): left promiscuous mode
Oct 11 05:07:49 np0005481065 NetworkManager[44960]: <info>  [1760173669.1597] device (tap074db183-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:49Z|00895|binding|INFO|Releasing lport 074db183-8679-40f2-b39d-06759a8dfceb from this chassis (sb_readonly=0)
Oct 11 05:07:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:49Z|00896|binding|INFO|Setting lport 074db183-8679-40f2-b39d-06759a8dfceb down in Southbound
Oct 11 05:07:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:49Z|00897|binding|INFO|Removing iface tap074db183-86 ovn-installed in OVS
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 11 05:07:49 np0005481065 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Consumed 1.435s CPU time.
Oct 11 05:07:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.231 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:2b:94 10.100.0.8'], port_security=['fa:16:3e:33:2b:94 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '15633aee-234a-4417-b5ea-f35f13820404', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=074db183-8679-40f2-b39d-06759a8dfceb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.234 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 074db183-8679-40f2-b39d-06759a8dfceb in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f unbound from our chassis#033[00m
Oct 11 05:07:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.236 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:07:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9f397-8df4-4f48-8db2-dbe17f7d3ad6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:49 np0005481065 systemd-machined[215705]: Machine qemu-99-instance-00000057 terminated.
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.249 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173669.2490623, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.249 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:07:49 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.256 2 DEBUG nova.virt.libvirt.guest [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.256 2 DEBUG nova.compute.manager [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.316 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.326 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:07:49 np0005481065 NetworkManager[44960]: <info>  [1760173669.3361] manager: (tap074db183-86): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.355 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.355 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.356 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.357 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.369 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.812 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deleting instance files /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.812 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deletion of /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del complete#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.954 2 DEBUG nova.compute.manager [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Received event network-vif-unplugged-074db183-8679-40f2-b39d-06759a8dfceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG oslo_concurrency.lockutils [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG oslo_concurrency.lockutils [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG oslo_concurrency.lockutils [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG nova.compute.manager [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] No event matching network-vif-unplugged-074db183-8679-40f2-b39d-06759a8dfceb in dict_keys([('network-vif-plugged', '074db183-8679-40f2-b39d-06759a8dfceb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG nova.compute.manager [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Received event network-vif-unplugged-074db183-8679-40f2-b39d-06759a8dfceb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7750, in _create_guest_with_network
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     guest = self._create_guest(
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     next(self.gen)
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 559, in wait_for_instance_event
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._wait_for_instance_events(
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 471, in _wait_for_instance_events
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     actual_event = event.wait()
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 436, in wait
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     instance_event = self.event.wait()
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     result = hub.switch()
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     return self.greenlet.switch()
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] eventlet.timeout.Timeout: 300 seconds
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] During handling of the above exception, another exception occurred:
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     yield resources
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self.driver.spawn(context, instance, image_meta,
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4411, in spawn
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._create_guest_with_network(
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7768, in _create_guest_with_network
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     raise exception.VirtualInterfaceCreateException()
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] #033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.972 2 INFO nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Terminating instance#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.974 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.977 2 DEBUG nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.977 2 INFO nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance destroyed successfully.#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.977 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=TagList,task_state='deleting',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:02:45Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.978 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.978 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.979 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:49 np0005481065 nova_compute[260935]: 2025-10-11 09:07:49.986 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')#033[00m
Oct 11 05:07:50 np0005481065 nova_compute[260935]: 2025-10-11 09:07:50.011 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deletion of /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del complete#033[00m
Oct 11 05:07:50 np0005481065 nova_compute[260935]: 2025-10-11 09:07:50.227 2 INFO nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 0.25 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:07:50 np0005481065 nova_compute[260935]: 2025-10-11 09:07:50.228 2 DEBUG nova.compute.claims [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Aborting claim: <nova.compute.claims.Claim object at 0x7f1e55ff1c10> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85#033[00m
Oct 11 05:07:50 np0005481065 nova_compute[260935]: 2025-10-11 09:07:50.229 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:50 np0005481065 nova_compute[260935]: 2025-10-11 09:07:50.229 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 27 KiB/s wr, 149 op/s
Oct 11 05:07:50 np0005481065 nova_compute[260935]: 2025-10-11 09:07:50.531 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089977427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.039 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.052 2 DEBUG nova.compute.provider_tree [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.115 2 DEBUG nova.scheduler.client.report [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.224 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7750, in _create_guest_with_network
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     guest = self._create_guest(
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     next(self.gen)
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 559, in wait_for_instance_event
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._wait_for_instance_events(
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 471, in _wait_for_instance_events
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     actual_event = event.wait()
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 436, in wait
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     instance_event = self.event.wait()
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     result = hub.switch()
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     return self.greenlet.switch()
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] eventlet.timeout.Timeout: 300 seconds
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] During handling of the above exception, another exception occurred:
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self.driver.spawn(context, instance, image_meta,
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4411, in spawn
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._create_guest_with_network(
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7768, in _create_guest_with_network
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     raise exception.VirtualInterfaceCreateException()
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] #033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.228 2 DEBUG nova.compute.utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Virtual Interface creation failed notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.229 2 ERROR nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Build of instance 15633aee-234a-4417-b5ea-f35f13820404 aborted: Failed to allocate the network(s), not rescheduling.: nova.exception.BuildAbortException: Build of instance 15633aee-234a-4417-b5ea-f35f13820404 aborted: Failed to allocate the network(s), not rescheduling.#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.230 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.231 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host=None,hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node=None,numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=TagList,task_state='deleting',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:50Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.231 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.233 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.233 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.241 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.242 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.243 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:07:51 np0005481065 nova_compute[260935]: 2025-10-11 09:07:51.243 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:07:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:51Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 05:07:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:51Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 05:07:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 618 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 40 KiB/s wr, 217 op/s
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.576 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.764 2 INFO nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct 11 05:07:52 np0005481065 podman[358731]: 2025-10-11 09:07:52.825537244 +0000 UTC m=+0.108253181 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.960 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.962 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.962 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.963 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.963 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.964 2 INFO nova.compute.manager [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Terminating instance#033[00m
Oct 11 05:07:52 np0005481065 nova_compute[260935]: 2025-10-11 09:07:52.965 2 DEBUG nova.compute.manager [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:07:53 np0005481065 kernel: tap34749eb0-e6 (unregistering): left promiscuous mode
Oct 11 05:07:53 np0005481065 NetworkManager[44960]: <info>  [1760173673.0397] device (tap34749eb0-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:07:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:53Z|00898|binding|INFO|Releasing lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca from this chassis (sb_readonly=0)
Oct 11 05:07:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:53Z|00899|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca down in Southbound
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:53Z|00900|binding|INFO|Removing iface tap34749eb0-e6 ovn-installed in OVS
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.068 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.070 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.073 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.100 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a942457e-dede-4290-999f-71e6f11e004c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:53 np0005481065 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 11 05:07:53 np0005481065 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000061.scope: Consumed 12.544s CPU time.
Oct 11 05:07:53 np0005481065 systemd-machined[215705]: Machine qemu-114-instance-00000061 terminated.
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.153 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3e4a1048-8cbb-491f-997f-fbc889c5e1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.156 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae6a108-1dc0-4933-be0b-a60e961e1d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.207 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d353fc9e-27e9-4621-a15d-9ef6e07a9468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.214 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance destroyed successfully.#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.215 2 DEBUG nova.objects.instance [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'resources' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.219 2 INFO nova.scheduler.client.report [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Deleted allocations for instance 15633aee-234a-4417-b5ea-f35f13820404#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.220 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 422.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.221 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 307.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.221 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.222 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.222 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.223 2 DEBUG nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Events pending at deletion: network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb _delete_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3237#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.224 2 INFO nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Terminating instance#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.226 2 DEBUG nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.230 2 DEBUG nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.230 2 INFO nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance destroyed successfully.#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.231 2 DEBUG nova.objects.instance [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'resources' on Instance uuid 15633aee-234a-4417-b5ea-f35f13820404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4965d8f3-419d-4cfb-a789-8c77a18c0bbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 43408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358773, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd84883-e90c-4206-bffa-db8da0daec8a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358775, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358775, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.256 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.263 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.263 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.264 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.264 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.265 2 DEBUG nova.virt.libvirt.vif [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:07:40Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.265 2 DEBUG nova.network.os_vif_util [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.266 2 DEBUG nova.network.os_vif_util [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.266 2 DEBUG os_vif [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34749eb0-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.273 2 INFO os_vif [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6')#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.289 2 DEBUG nova.virt.libvirt.vif [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.290 2 DEBUG nova.network.os_vif_util [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.291 2 DEBUG nova.network.os_vif_util [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.291 2 DEBUG os_vif [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.296 2 INFO os_vif [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.329 2 INFO nova.virt.libvirt.driver [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deletion of /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del complete#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.477 2 INFO nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 0.25 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.478 2 DEBUG oslo.service.loopingcall [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.478 2 DEBUG nova.compute.manager [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.479 2 DEBUG nova.network.neutron [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.492 2 DEBUG nova.compute.manager [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.492 2 DEBUG oslo_concurrency.lockutils [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.493 2 DEBUG oslo_concurrency.lockutils [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.494 2 DEBUG oslo_concurrency.lockutils [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.494 2 DEBUG nova.compute.manager [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.495 2 DEBUG nova.compute.manager [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.742 2 DEBUG nova.network.neutron [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.774 2 DEBUG nova.network.neutron [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:53 np0005481065 nova_compute[260935]: 2025-10-11 09:07:53.842 2 INFO nova.compute.manager [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 0.36 seconds to deallocate network for instance.#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.131 2 INFO nova.virt.libvirt.driver [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deleting instance files /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_del#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.133 2 INFO nova.virt.libvirt.driver [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deletion of /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_del complete#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.246 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.249 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "15633aee-234a-4417-b5ea-f35f13820404" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 182.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.249 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.250 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "15633aee-234a-4417-b5ea-f35f13820404" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.268 2 INFO nova.compute.manager [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.269 2 DEBUG oslo.service.loopingcall [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.270 2 DEBUG nova.compute.manager [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.270 2 DEBUG nova.network.neutron [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.313 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173659.3121285, 02489367-ca19-4428-9727-6d8ab66e1b46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.314 2 INFO nova.compute.manager [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:07:54 np0005481065 nova_compute[260935]: 2025-10-11 09:07:54.356 2 DEBUG nova.compute.manager [None req-359a0feb-6a4f-4893-a2bb-a010a3acb2a8 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 612 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 121 op/s
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:07:54
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'vms', 'backups']
Oct 11 05:07:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.163 2 DEBUG nova.network.neutron [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.216 2 INFO nova.compute.manager [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 0.95 seconds to deallocate network for instance.#033[00m
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:07:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.304 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.305 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.518 2 DEBUG oslo_concurrency.processutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.600 2 DEBUG nova.compute.manager [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.601 2 DEBUG oslo_concurrency.lockutils [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.601 2 DEBUG oslo_concurrency.lockutils [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.602 2 DEBUG oslo_concurrency.lockutils [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.602 2 DEBUG nova.compute.manager [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.603 2 WARNING nova.compute.manager [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:07:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:07:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005458417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:07:55 np0005481065 nova_compute[260935]: 2025-10-11 09:07:55.998 2 DEBUG oslo_concurrency.processutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:07:56 np0005481065 nova_compute[260935]: 2025-10-11 09:07:56.006 2 DEBUG nova.compute.provider_tree [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:07:56 np0005481065 nova_compute[260935]: 2025-10-11 09:07:56.036 2 DEBUG nova.scheduler.client.report [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:07:56 np0005481065 nova_compute[260935]: 2025-10-11 09:07:56.082 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:56 np0005481065 nova_compute[260935]: 2025-10-11 09:07:56.143 2 INFO nova.scheduler.client.report [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Deleted allocations for instance f5dfbe0b-a1ff-4001-abe4-a4493c9124f2#033[00m
Oct 11 05:07:56 np0005481065 nova_compute[260935]: 2025-10-11 09:07:56.317 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 612 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 14 KiB/s wr, 87 op/s
Oct 11 05:07:56 np0005481065 nova_compute[260935]: 2025-10-11 09:07:56.577 2 DEBUG nova.compute.manager [req-ead7e16f-f1a0-4a4f-a202-93d06c23d717 req-f1d0bba6-1d4e-4e60-93f6-773861b1b2ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-deleted-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:56 np0005481065 podman[358833]: 2025-10-11 09:07:56.804743798 +0000 UTC m=+0.092188743 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:07:56 np0005481065 podman[358834]: 2025-10-11 09:07:56.835179757 +0000 UTC m=+0.121024476 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.764 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.766 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.767 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.767 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.768 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.770 2 INFO nova.compute.manager [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Terminating instance#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.772 2 DEBUG nova.compute.manager [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:07:57 np0005481065 kernel: tap7f81893d-38 (unregistering): left promiscuous mode
Oct 11 05:07:57 np0005481065 NetworkManager[44960]: <info>  [1760173677.8320] device (tap7f81893d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:07:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:57Z|00901|binding|INFO|Releasing lport 7f81893d-380a-42b4-88a7-76a98c30a38d from this chassis (sb_readonly=0)
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:57Z|00902|binding|INFO|Setting lport 7f81893d-380a-42b4-88a7-76a98c30a38d down in Southbound
Oct 11 05:07:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:07:57Z|00903|binding|INFO|Removing iface tap7f81893d-38 ovn-installed in OVS
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.891 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:f1:b7 10.100.0.6'], port_security=['fa:16:3e:61:f1:b7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6520fc43-79ed-4060-85bb-dcdff5f5c101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7f81893d-380a-42b4-88a7-76a98c30a38d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:07:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.893 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7f81893d-380a-42b4-88a7-76a98c30a38d in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis#033[00m
Oct 11 05:07:57 np0005481065 nova_compute[260935]: 2025-10-11 09:07:57.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.897 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76432838-bd4d-4fb8-8e44-6e230b5868b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:07:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc64fc95-63bd-40d9-a3ae-3a622c373675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 namespace which is not needed anymore#033[00m
Oct 11 05:07:57 np0005481065 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 11 05:07:57 np0005481065 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Consumed 13.615s CPU time.
Oct 11 05:07:57 np0005481065 systemd-machined[215705]: Machine qemu-110-instance-00000060 terminated.
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.019 2 INFO nova.virt.libvirt.driver [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance destroyed successfully.#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.019 2 DEBUG nova.objects.instance [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'resources' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.055 2 DEBUG nova.virt.libvirt.vif [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-653793177',display_name='tempest-ServerRescueNegativeTestJSON-server-653793177',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-653793177',id=96,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-id4jerug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:07:49Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=6520fc43-79ed-4060-85bb-dcdff5f5c101,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.055 2 DEBUG nova.network.os_vif_util [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.056 2 DEBUG nova.network.os_vif_util [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.056 2 DEBUG os_vif [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f81893d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.064 2 INFO os_vif [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38')#033[00m
Oct 11 05:07:58 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : haproxy version is 2.8.14-c23fe91
Oct 11 05:07:58 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : path to executable is /usr/sbin/haproxy
Oct 11 05:07:58 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [WARNING]  (357247) : Exiting Master process...
Oct 11 05:07:58 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [WARNING]  (357247) : Exiting Master process...
Oct 11 05:07:58 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [ALERT]    (357247) : Current worker (357249) exited with code 143 (Terminated)
Oct 11 05:07:58 np0005481065 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [WARNING]  (357247) : All workers exited. Exiting... (0)
Oct 11 05:07:58 np0005481065 systemd[1]: libpod-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819.scope: Deactivated successfully.
Oct 11 05:07:58 np0005481065 podman[358911]: 2025-10-11 09:07:58.092692135 +0000 UTC m=+0.046413456 container died e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 05:07:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819-userdata-shm.mount: Deactivated successfully.
Oct 11 05:07:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f42249320cbd9f6be1af07a22f94351bbe874386f708642b1d430373ffaae9fb-merged.mount: Deactivated successfully.
Oct 11 05:07:58 np0005481065 podman[358911]: 2025-10-11 09:07:58.132549673 +0000 UTC m=+0.086270994 container cleanup e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:07:58 np0005481065 systemd[1]: libpod-conmon-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819.scope: Deactivated successfully.
Oct 11 05:07:58 np0005481065 podman[358960]: 2025-10-11 09:07:58.200991115 +0000 UTC m=+0.044824659 container remove e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.212 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4fb653-f9fb-4963-bc9e-4240a0cc8ba2]: (4, ('Sat Oct 11 09:07:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 (e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819)\ne40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819\nSat Oct 11 09:07:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 (e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819)\ne40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec5641b-77ca-483c-80fb-32e0cffea2fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.215 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:07:58 np0005481065 kernel: tap76432838-b0: left promiscuous mode
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.249 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[821b7a3e-5e9f-4e59-884c-71b4899d7945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.281 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[65f3bae7-365e-48a1-9f4a-4c157e22229f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.282 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b7ab67-c463-4de5-8d17-00c6df932902]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.302 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b17c1b9e-aa60-434a-865a-06e8b2945813]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547681, 'reachable_time': 32528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358976, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 systemd[1]: run-netns-ovnmeta\x2d76432838\x2dbd4d\x2d4fb8\x2d8e44\x2d6e230b5868b8.mount: Deactivated successfully.
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.309 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:07:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.309 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[49d6a1b3-58e6-409b-a015-1ece1f8b258e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:07:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 486 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 16 KiB/s wr, 141 op/s
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.441 2 INFO nova.virt.libvirt.driver [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deleting instance files /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101_del#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.441 2 INFO nova.virt.libvirt.driver [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deletion of /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101_del complete#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.579 2 INFO nova.compute.manager [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.580 2 DEBUG oslo.service.loopingcall [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.580 2 DEBUG nova.compute.manager [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:07:58 np0005481065 nova_compute[260935]: 2025-10-11 09:07:58.580 2 DEBUG nova.network.neutron [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:07:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.293 2 DEBUG nova.compute.manager [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-unplugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.293 2 DEBUG oslo_concurrency.lockutils [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.294 2 DEBUG oslo_concurrency.lockutils [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.294 2 DEBUG oslo_concurrency.lockutils [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.294 2 DEBUG nova.compute.manager [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] No waiting events found dispatching network-vif-unplugged-7f81893d-380a-42b4-88a7-76a98c30a38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.295 2 DEBUG nova.compute.manager [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-unplugged-7f81893d-380a-42b4-88a7-76a98c30a38d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.714 2 DEBUG nova.network.neutron [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.796 2 INFO nova.compute.manager [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 1.22 seconds to deallocate network for instance.#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.874 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:07:59 np0005481065 nova_compute[260935]: 2025-10-11 09:07:59.876 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.019 2 DEBUG oslo_concurrency.processutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 486 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 16 KiB/s wr, 141 op/s
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:08:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 128e326b-94cc-4cd4-8ebe-16dddb985bcc does not exist
Oct 11 05:08:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1d3b29ef-35f7-42cf-98f2-b78201c0966f does not exist
Oct 11 05:08:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e2b30296-bbde-4a49-a28e-819efc420246 does not exist
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42263550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.571 2 DEBUG oslo_concurrency.processutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.583 2 DEBUG nova.compute.provider_tree [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.608 2 DEBUG nova.scheduler.client.report [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.656 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.694 2 INFO nova.scheduler.client.report [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Deleted allocations for instance 6520fc43-79ed-4060-85bb-dcdff5f5c101#033[00m
Oct 11 05:08:00 np0005481065 nova_compute[260935]: 2025-10-11 09:08:00.798 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.307019424 +0000 UTC m=+0.059102528 container create 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:08:01 np0005481065 systemd[1]: Started libpod-conmon-273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5.scope.
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.277270554 +0000 UTC m=+0.029353698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:08:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.401 2 DEBUG nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.401 2 DEBUG oslo_concurrency.lockutils [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.402 2 DEBUG oslo_concurrency.lockutils [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.402 2 DEBUG oslo_concurrency.lockutils [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.403 2 DEBUG nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] No waiting events found dispatching network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.403 2 WARNING nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received unexpected event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:08:01 np0005481065 nova_compute[260935]: 2025-10-11 09:08:01.403 2 DEBUG nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-deleted-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.419407882 +0000 UTC m=+0.171491026 container init 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.433759862 +0000 UTC m=+0.185842966 container start 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:08:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:08:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:08:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.440195786 +0000 UTC m=+0.192278950 container attach 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:08:01 np0005481065 distracted_wu[359288]: 167 167
Oct 11 05:08:01 np0005481065 systemd[1]: libpod-273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5.scope: Deactivated successfully.
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.444048886 +0000 UTC m=+0.196132010 container died 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:08:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2408be3c77edc026a0feae089920d458e96849660abb6d78ec91ad4f294f2bbd-merged.mount: Deactivated successfully.
Oct 11 05:08:01 np0005481065 podman[359271]: 2025-10-11 09:08:01.492294313 +0000 UTC m=+0.244377407 container remove 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:08:01 np0005481065 systemd[1]: libpod-conmon-273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5.scope: Deactivated successfully.
Oct 11 05:08:01 np0005481065 podman[359311]: 2025-10-11 09:08:01.810110075 +0000 UTC m=+0.062408352 container create 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:08:01 np0005481065 systemd[1]: Started libpod-conmon-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope.
Oct 11 05:08:01 np0005481065 podman[359311]: 2025-10-11 09:08:01.788959331 +0000 UTC m=+0.041257648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:08:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:08:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:01 np0005481065 podman[359311]: 2025-10-11 09:08:01.914768832 +0000 UTC m=+0.167067189 container init 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:08:01 np0005481065 podman[359311]: 2025-10-11 09:08:01.926069185 +0000 UTC m=+0.178367482 container start 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:08:01 np0005481065 podman[359311]: 2025-10-11 09:08:01.931275283 +0000 UTC m=+0.183573640 container attach 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:08:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 437 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 18 KiB/s wr, 160 op/s
Oct 11 05:08:03 np0005481065 silly_banzai[359328]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:08:03 np0005481065 silly_banzai[359328]: --> relative data size: 1.0
Oct 11 05:08:03 np0005481065 silly_banzai[359328]: --> All data devices are unavailable
Oct 11 05:08:03 np0005481065 systemd[1]: libpod-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope: Deactivated successfully.
Oct 11 05:08:03 np0005481065 systemd[1]: libpod-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope: Consumed 1.075s CPU time.
Oct 11 05:08:03 np0005481065 podman[359311]: 2025-10-11 09:08:03.10253879 +0000 UTC m=+1.354837097 container died 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:08:03 np0005481065 nova_compute[260935]: 2025-10-11 09:08:03.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf-merged.mount: Deactivated successfully.
Oct 11 05:08:03 np0005481065 podman[359311]: 2025-10-11 09:08:03.159973699 +0000 UTC m=+1.412271986 container remove 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:08:03 np0005481065 systemd[1]: libpod-conmon-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope: Deactivated successfully.
Oct 11 05:08:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.038320264 +0000 UTC m=+0.045074158 container create d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:08:04 np0005481065 systemd[1]: Started libpod-conmon-d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee.scope.
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.017658374 +0000 UTC m=+0.024412308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:08:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:08:04 np0005481065 nova_compute[260935]: 2025-10-11 09:08:04.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.179678788 +0000 UTC m=+0.186432752 container init d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.190924709 +0000 UTC m=+0.197678623 container start d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.195242203 +0000 UTC m=+0.201996127 container attach d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:08:04 np0005481065 mystifying_ramanujan[359528]: 167 167
Oct 11 05:08:04 np0005481065 systemd[1]: libpod-d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee.scope: Deactivated successfully.
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.198757263 +0000 UTC m=+0.205511187 container died d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:08:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-32335e376981d49e5832637acef4164b6d6e09cfed266a93f404a01aed786282-merged.mount: Deactivated successfully.
Oct 11 05:08:04 np0005481065 podman[359511]: 2025-10-11 09:08:04.250974224 +0000 UTC m=+0.257728148 container remove d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:08:04 np0005481065 systemd[1]: libpod-conmon-d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee.scope: Deactivated successfully.
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 407 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 11 05:08:04 np0005481065 nova_compute[260935]: 2025-10-11 09:08:04.392 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173669.354265, 15633aee-234a-4417-b5ea-f35f13820404 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:08:04 np0005481065 nova_compute[260935]: 2025-10-11 09:08:04.394 2 INFO nova.compute.manager [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] VM Stopped (Lifecycle Event)
Oct 11 05:08:04 np0005481065 nova_compute[260935]: 2025-10-11 09:08:04.432 2 DEBUG nova.compute.manager [None req-a11257c6-af69-4aa0-894e-5e01768e7172 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:08:04 np0005481065 podman[359551]: 2025-10-11 09:08:04.552135211 +0000 UTC m=+0.076491174 container create 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:08:04 np0005481065 systemd[1]: Started libpod-conmon-813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944.scope.
Oct 11 05:08:04 np0005481065 podman[359551]: 2025-10-11 09:08:04.522387822 +0000 UTC m=+0.046743835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:08:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:08:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:04 np0005481065 podman[359551]: 2025-10-11 09:08:04.666664981 +0000 UTC m=+0.191020954 container init 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:08:04 np0005481065 podman[359551]: 2025-10-11 09:08:04.67959824 +0000 UTC m=+0.203954213 container start 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:08:04 np0005481065 podman[359551]: 2025-10-11 09:08:04.684325175 +0000 UTC m=+0.208681148 container attach 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:08:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:08:04Z|00904|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 05:08:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:08:04Z|00905|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:08:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:08:04Z|00906|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:08:04 np0005481065 nova_compute[260935]: 2025-10-11 09:08:04.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033857600564888976 of space, bias 1.0, pg target 1.0157280169466694 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:08:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:08:05 np0005481065 strange_pare[359568]: {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:    "0": [
Oct 11 05:08:05 np0005481065 strange_pare[359568]:        {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "devices": [
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "/dev/loop3"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            ],
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_name": "ceph_lv0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_size": "21470642176",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "name": "ceph_lv0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "tags": {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cluster_name": "ceph",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.crush_device_class": "",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.encrypted": "0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osd_id": "0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.type": "block",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.vdo": "0"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            },
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "type": "block",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "vg_name": "ceph_vg0"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:        }
Oct 11 05:08:05 np0005481065 strange_pare[359568]:    ],
Oct 11 05:08:05 np0005481065 strange_pare[359568]:    "1": [
Oct 11 05:08:05 np0005481065 strange_pare[359568]:        {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "devices": [
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "/dev/loop4"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            ],
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_name": "ceph_lv1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_size": "21470642176",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "name": "ceph_lv1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "tags": {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cluster_name": "ceph",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.crush_device_class": "",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.encrypted": "0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osd_id": "1",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.type": "block",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.vdo": "0"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            },
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "type": "block",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "vg_name": "ceph_vg1"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:        }
Oct 11 05:08:05 np0005481065 strange_pare[359568]:    ],
Oct 11 05:08:05 np0005481065 strange_pare[359568]:    "2": [
Oct 11 05:08:05 np0005481065 strange_pare[359568]:        {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "devices": [
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "/dev/loop5"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            ],
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_name": "ceph_lv2",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_size": "21470642176",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "name": "ceph_lv2",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "tags": {
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.cluster_name": "ceph",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.crush_device_class": "",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.encrypted": "0",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osd_id": "2",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.type": "block",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:                "ceph.vdo": "0"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            },
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "type": "block",
Oct 11 05:08:05 np0005481065 strange_pare[359568]:            "vg_name": "ceph_vg2"
Oct 11 05:08:05 np0005481065 strange_pare[359568]:        }
Oct 11 05:08:05 np0005481065 strange_pare[359568]:    ]
Oct 11 05:08:05 np0005481065 strange_pare[359568]: }
Oct 11 05:08:05 np0005481065 systemd[1]: libpod-813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944.scope: Deactivated successfully.
Oct 11 05:08:05 np0005481065 podman[359551]: 2025-10-11 09:08:05.516197882 +0000 UTC m=+1.040553875 container died 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:08:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5-merged.mount: Deactivated successfully.
Oct 11 05:08:05 np0005481065 podman[359551]: 2025-10-11 09:08:05.588744212 +0000 UTC m=+1.113100145 container remove 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:08:05 np0005481065 systemd[1]: libpod-conmon-813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944.scope: Deactivated successfully.
Oct 11 05:08:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 407 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 3.5 KiB/s wr, 82 op/s
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.383016137 +0000 UTC m=+0.066701965 container create 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:08:06 np0005481065 systemd[1]: Started libpod-conmon-089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca.scope.
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.354565905 +0000 UTC m=+0.038251803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:08:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.49275383 +0000 UTC m=+0.176439738 container init 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.507228443 +0000 UTC m=+0.190914301 container start 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.511308609 +0000 UTC m=+0.194994527 container attach 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:08:06 np0005481065 eager_bose[359747]: 167 167
Oct 11 05:08:06 np0005481065 systemd[1]: libpod-089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca.scope: Deactivated successfully.
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.514304235 +0000 UTC m=+0.197990083 container died 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:08:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-044540c52eb96aef9b479ee0ae5be58dcb5ff700d86c9a49c56d95b17c3d68e6-merged.mount: Deactivated successfully.
Oct 11 05:08:06 np0005481065 podman[359731]: 2025-10-11 09:08:06.567533714 +0000 UTC m=+0.251219572 container remove 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:08:06 np0005481065 systemd[1]: libpod-conmon-089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca.scope: Deactivated successfully.
Oct 11 05:08:06 np0005481065 podman[359770]: 2025-10-11 09:08:06.823232444 +0000 UTC m=+0.044153022 container create 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:08:06 np0005481065 systemd[1]: Started libpod-conmon-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope.
Oct 11 05:08:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:08:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:06 np0005481065 podman[359770]: 2025-10-11 09:08:06.805085786 +0000 UTC m=+0.026006394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:08:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:08:06 np0005481065 podman[359770]: 2025-10-11 09:08:06.919733049 +0000 UTC m=+0.140653707 container init 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:08:06 np0005481065 podman[359770]: 2025-10-11 09:08:06.931866695 +0000 UTC m=+0.152787293 container start 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:08:06 np0005481065 podman[359770]: 2025-10-11 09:08:06.935802627 +0000 UTC m=+0.156723235 container attach 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]: {
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "osd_id": 2,
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "type": "bluestore"
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:    },
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "osd_id": 0,
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "type": "bluestore"
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:    },
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "osd_id": 1,
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:        "type": "bluestore"
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]:    }
Oct 11 05:08:07 np0005481065 sleepy_roentgen[359787]: }
Oct 11 05:08:07 np0005481065 systemd[1]: libpod-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope: Deactivated successfully.
Oct 11 05:08:07 np0005481065 systemd[1]: libpod-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope: Consumed 1.053s CPU time.
Oct 11 05:08:08 np0005481065 podman[359820]: 2025-10-11 09:08:08.025954748 +0000 UTC m=+0.030649706 container died 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:08:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650-merged.mount: Deactivated successfully.
Oct 11 05:08:08 np0005481065 nova_compute[260935]: 2025-10-11 09:08:08.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:08 np0005481065 podman[359820]: 2025-10-11 09:08:08.111594623 +0000 UTC m=+0.116289531 container remove 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:08:08 np0005481065 systemd[1]: libpod-conmon-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope: Deactivated successfully.
Oct 11 05:08:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:08:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:08:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:08:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:08:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2760c6e6-a1f9-4a76-9fdd-3664e946f6a2 does not exist
Oct 11 05:08:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d38f01bc-206b-43ce-a5d0-453e690a6534 does not exist
Oct 11 05:08:08 np0005481065 nova_compute[260935]: 2025-10-11 09:08:08.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173673.2089772, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:08 np0005481065 nova_compute[260935]: 2025-10-11 09:08:08.211 2 INFO nova.compute.manager [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:08:08 np0005481065 nova_compute[260935]: 2025-10-11 09:08:08.233 2 DEBUG nova.compute.manager [None req-52d3d7a1-73d2-44ff-9891-f5779a784348 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.8 KiB/s wr, 83 op/s
Oct 11 05:08:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:09 np0005481065 nova_compute[260935]: 2025-10-11 09:08:09.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:08:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:08:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.855 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.855 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.875 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.965 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.965 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.976 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:08:11 np0005481065 nova_compute[260935]: 2025-10-11 09:08:11.976 2 INFO nova.compute.claims [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:08:12 np0005481065 nova_compute[260935]: 2025-10-11 09:08:12.193 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 11 05:08:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739722464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:12 np0005481065 nova_compute[260935]: 2025-10-11 09:08:12.732 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:12 np0005481065 nova_compute[260935]: 2025-10-11 09:08:12.743 2 DEBUG nova.compute.provider_tree [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:08:12 np0005481065 nova_compute[260935]: 2025-10-11 09:08:12.897 2 DEBUG nova.scheduler.client.report [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:08:12 np0005481065 nova_compute[260935]: 2025-10-11 09:08:12.991 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:12 np0005481065 nova_compute[260935]: 2025-10-11 09:08:12.993 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.018 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173678.017132, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.019 2 INFO nova.compute.manager [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.152 2 DEBUG nova.compute.manager [None req-a0612205-2e22-43c0-a43e-a2cf44eb818d - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.172 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.253 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.286 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.397 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.399 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.400 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Creating image(s)#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.434 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.469 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.503 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.508 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.571 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "df634e60-d790-4a39-adcc-c6345a12e9df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.572 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.612 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.618 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.618 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.619 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.620 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.653 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.657 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.753 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.754 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.763 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.764 2 INFO nova.compute.claims [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:08:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:13 np0005481065 nova_compute[260935]: 2025-10-11 09:08:13.957 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.030 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] resizing rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.068 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.159 2 DEBUG nova.objects.instance [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'migration_context' on Instance uuid 1a90a348-da49-4ba3-8bae-5b426e9d7424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.178 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.179 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Ensure instance console log exists: /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.180 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.180 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.181 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.183 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.188 2 WARNING nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.193 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.194 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.198 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.199 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.199 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.200 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.200 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.201 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.201 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.202 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.203 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.204 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.205 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.205 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.205 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.206 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.209 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 2.3 KiB/s wr, 10 op/s
Oct 11 05:08:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709926393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.519 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.524 2 DEBUG nova.compute.provider_tree [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.545 2 DEBUG nova.scheduler.client.report [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.587 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.588 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.648 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.664 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:08:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855587875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.680 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.687 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.723 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.729 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.839 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.842 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.843 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating image(s)#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.880 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.923 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.957 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:14 np0005481065 nova_compute[260935]: 2025-10-11 09:08:14.962 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.056 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.058 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.059 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.059 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.091 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.095 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:15.205 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487155864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.231 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.235 2 DEBUG nova.objects.instance [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a90a348-da49-4ba3-8bae-5b426e9d7424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.261 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <uuid>1a90a348-da49-4ba3-8bae-5b426e9d7424</uuid>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <name>instance-00000064</name>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV247Test-server-1283643768</nova:name>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:08:14</nova:creationTime>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:user uuid="2273281c2fb74ffc95506b1aaa8874ed">tempest-ServerShowV247Test-462364684-project-member</nova:user>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <nova:project uuid="f4d141b93bd54c8f859a3ec1283a9a71">tempest-ServerShowV247Test-462364684</nova:project>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <entry name="serial">1a90a348-da49-4ba3-8bae-5b426e9d7424</entry>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <entry name="uuid">1a90a348-da49-4ba3-8bae-5b426e9d7424</entry>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1a90a348-da49-4ba3-8bae-5b426e9d7424_disk">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/console.log" append="off"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:08:15 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:08:15 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:08:15 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:08:15 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.364 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.364 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.365 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Using config drive
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.390 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.397 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.479 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] resizing rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.573 2 DEBUG nova.objects.instance [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'migration_context' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.596 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.596 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Ensure instance console log exists: /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.597 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.597 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.597 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.599 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.603 2 WARNING nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.609 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.609 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.614 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.614 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.614 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.615 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.615 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.615 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.617 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.617 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.617 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.619 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.937 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Creating config drive at /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config
Oct 11 05:08:15 np0005481065 nova_compute[260935]: 2025-10-11 09:08:15.947 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb267h23a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328235877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.107 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.143 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.148 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.187 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb267h23a" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.228 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.234 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.448 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.450 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deleting local config drive /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config because it was imported into RBD.
Oct 11 05:08:16 np0005481065 systemd-machined[215705]: New machine qemu-115-instance-00000064.
Oct 11 05:08:16 np0005481065 systemd[1]: Started Virtual Machine qemu-115-instance-00000064.
Oct 11 05:08:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934649351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.610 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.614 2 DEBUG nova.objects.instance [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_devices' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.640 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <uuid>df634e60-d790-4a39-adcc-c6345a12e9df</uuid>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <name>instance-00000065</name>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV247Test-server-32539140</nova:name>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:08:15</nova:creationTime>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:user uuid="2273281c2fb74ffc95506b1aaa8874ed">tempest-ServerShowV247Test-462364684-project-member</nova:user>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <nova:project uuid="f4d141b93bd54c8f859a3ec1283a9a71">tempest-ServerShowV247Test-462364684</nova:project>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <entry name="serial">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <entry name="uuid">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk.config">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log" append="off"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:08:16 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:08:16 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:08:16 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:08:16 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.708 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.709 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.710 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Using config drive#033[00m
Oct 11 05:08:16 np0005481065 nova_compute[260935]: 2025-10-11 09:08:16.742 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:17 np0005481065 podman[360517]: 2025-10-11 09:08:17.072852607 +0000 UTC m=+0.073081357 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.094 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating config drive at /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.101 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk75cggww execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.263 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk75cggww" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.291 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.295 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.503 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.505 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting local config drive /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config because it was imported into RBD.#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.507 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173697.5022328, 1a90a348-da49-4ba3-8bae-5b426e9d7424 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.509 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.515 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.516 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.524 2 INFO nova.virt.libvirt.driver [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance spawned successfully.#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.525 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.540 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.552 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.557 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.558 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.559 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.559 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.560 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.561 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.590 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.591 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173697.502834, 1a90a348-da49-4ba3-8bae-5b426e9d7424 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.591 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] VM Started (Lifecycle Event)#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.617 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:17 np0005481065 systemd-machined[215705]: New machine qemu-116-instance-00000065.
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.622 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.630 2 INFO nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 4.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.631 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:17 np0005481065 systemd[1]: Started Virtual Machine qemu-116-instance-00000065.
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.640 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.700 2 INFO nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 5.77 seconds to build instance.#033[00m
Oct 11 05:08:17 np0005481065 nova_compute[260935]: 2025-10-11 09:08:17.720 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.786 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.787 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.787 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173698.7856712, df634e60-d790-4a39-adcc-c6345a12e9df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.788 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.794 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance spawned successfully.#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.795 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:08:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.829 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.838 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.844 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.844 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.845 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.846 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.846 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.847 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.874 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.875 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173698.785767, df634e60-d790-4a39-adcc-c6345a12e9df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.875 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Started (Lifecycle Event)#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.905 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.910 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.917 2 INFO nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 4.08 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.918 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.952 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:08:18 np0005481065 nova_compute[260935]: 2025-10-11 09:08:18.987 2 INFO nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 5.28 seconds to build instance.#033[00m
Oct 11 05:08:19 np0005481065 nova_compute[260935]: 2025-10-11 09:08:19.006 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:19 np0005481065 nova_compute[260935]: 2025-10-11 09:08:19.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:19 np0005481065 nova_compute[260935]: 2025-10-11 09:08:19.731 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:19 np0005481065 nova_compute[260935]: 2025-10-11 09:08:19.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:19 np0005481065 nova_compute[260935]: 2025-10-11 09:08:19.733 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:19 np0005481065 nova_compute[260935]: 2025-10-11 09:08:19.733 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:08:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 05:08:20 np0005481065 nova_compute[260935]: 2025-10-11 09:08:20.429 2 INFO nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Rebuilding instance#033[00m
Oct 11 05:08:20 np0005481065 nova_compute[260935]: 2025-10-11 09:08:20.909 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'trusted_certs' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:20 np0005481065 nova_compute[260935]: 2025-10-11 09:08:20.934 2 DEBUG nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.005 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_requests' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.022 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_devices' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.038 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'resources' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.049 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'migration_context' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.063 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.067 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:08:21 np0005481065 nova_compute[260935]: 2025-10-11 09:08:21.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340733308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.233 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.358 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.359 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.368 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.369 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.373 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.374 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.380 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.598 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.599 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2643MB free_disk=59.74338150024414GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.600 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.600 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:22.840 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:08:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:22.840 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.880 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.882 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1a90a348-da49-4ba3-8bae-5b426e9d7424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.882 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance df634e60-d790-4a39-adcc-c6345a12e9df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.883 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.883 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:08:22 np0005481065 nova_compute[260935]: 2025-10-11 09:08:22.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.076 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950555897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.551 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.559 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.588 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.635 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:08:23 np0005481065 nova_compute[260935]: 2025-10-11 09:08:23.636 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:23 np0005481065 podman[360682]: 2025-10-11 09:08:23.800024476 +0000 UTC m=+0.101336924 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:08:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.512 2 INFO nova.compute.manager [None req-08cfeb84-ea6e-40f1-86cc-2bb0c9615a8f 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Pausing#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.514 2 DEBUG nova.objects.instance [None req-08cfeb84-ea6e-40f1-86cc-2bb0c9615a8f 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'flavor' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.583 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173704.58298, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.586 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.588 2 DEBUG nova.compute.manager [None req-08cfeb84-ea6e-40f1-86cc-2bb0c9615a8f 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.628 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:08:24 np0005481065 nova_compute[260935]: 2025-10-11 09:08:24.666 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:08:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.638 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.640 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.640 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.920 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.921 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.921 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:08:25 np0005481065 nova_compute[260935]: 2025-10-11 09:08:25.922 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 05:08:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:08:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433786690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:08:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:08:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433786690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.638 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.666 2 INFO nova.compute.manager [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Unpausing#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.667 2 DEBUG nova.objects.instance [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'flavor' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.672 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.672 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.714 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173707.713921, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.714 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:08:27 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.728 2 DEBUG nova.virt.libvirt.guest [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.728 2 DEBUG nova.compute.manager [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.742 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.747 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:08:27 np0005481065 nova_compute[260935]: 2025-10-11 09:08:27.783 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct 11 05:08:27 np0005481065 podman[360701]: 2025-10-11 09:08:27.808189617 +0000 UTC m=+0.110112085 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:08:27 np0005481065 podman[360702]: 2025-10-11 09:08:27.890433484 +0000 UTC m=+0.164265100 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 11 05:08:28 np0005481065 nova_compute[260935]: 2025-10-11 09:08:28.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 05:08:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:28.842 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:08:29 np0005481065 nova_compute[260935]: 2025-10-11 09:08:29.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:29 np0005481065 nova_compute[260935]: 2025-10-11 09:08:29.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:08:29 np0005481065 nova_compute[260935]: 2025-10-11 09:08:29.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:08:29 np0005481065 nova_compute[260935]: 2025-10-11 09:08:29.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:08:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 8.2 KiB/s wr, 135 op/s
Oct 11 05:08:31 np0005481065 nova_compute[260935]: 2025-10-11 09:08:31.212 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 11 05:08:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 544 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 MiB/s wr, 227 op/s
Oct 11 05:08:33 np0005481065 nova_compute[260935]: 2025-10-11 09:08:33.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:33 np0005481065 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 11 05:08:33 np0005481065 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000065.scope: Consumed 12.380s CPU time.
Oct 11 05:08:33 np0005481065 systemd-machined[215705]: Machine qemu-116-instance-00000065 terminated.
Oct 11 05:08:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.228 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.236 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance destroyed successfully.#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.242 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance destroyed successfully.#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 566 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.713 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting instance files /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.714 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deletion of /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del complete#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.891 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.892 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating image(s)#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.926 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.961 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.988 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:34 np0005481065 nova_compute[260935]: 2025-10-11 09:08:34.994 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.076 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.077 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.078 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.078 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.107 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.111 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.403 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.483 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] resizing rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:08:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 11 05:08:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 11 05:08:35 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.607 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.608 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Ensure instance console log exists: /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.609 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.609 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.610 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.611 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.616 2 WARNING nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.624 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.624 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.629 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.629 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.629 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.630 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.630 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.630 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.631 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.631 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.631 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.632 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.632 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.632 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.633 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.633 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.633 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'vcpu_model' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:35 np0005481065 nova_compute[260935]: 2025-10-11 09:08:35.657 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185534938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.138 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.172 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.179 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 566 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 5.1 MiB/s wr, 152 op/s
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845454379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.673 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.682 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <uuid>df634e60-d790-4a39-adcc-c6345a12e9df</uuid>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <name>instance-00000065</name>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV247Test-server-32539140</nova:name>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:08:35</nova:creationTime>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:user uuid="2273281c2fb74ffc95506b1aaa8874ed">tempest-ServerShowV247Test-462364684-project-member</nova:user>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <nova:project uuid="f4d141b93bd54c8f859a3ec1283a9a71">tempest-ServerShowV247Test-462364684</nova:project>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <entry name="serial">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <entry name="uuid">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk.config">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log" append="off"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:08:36 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:08:36 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:08:36 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:08:36 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.770 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.771 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.772 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Using config drive
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.805 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.839 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'ec2_ids' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:36 np0005481065 nova_compute[260935]: 2025-10-11 09:08:36.876 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'keypairs' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.215 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating config drive at /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.225 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo2qe_sr0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.395 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo2qe_sr0" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.433 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.437 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 11 05:08:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 11 05:08:37 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.647 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:37 np0005481065 nova_compute[260935]: 2025-10-11 09:08:37.649 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting local config drive /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config because it was imported into RBD.
Oct 11 05:08:37 np0005481065 systemd-machined[215705]: New machine qemu-117-instance-00000065.
Oct 11 05:08:37 np0005481065 systemd[1]: Started Virtual Machine qemu-117-instance-00000065.
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 533 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 6.3 MiB/s wr, 241 op/s
Oct 11 05:08:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 11 05:08:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 11 05:08:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:08:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.828 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for df634e60-d790-4a39-adcc-c6345a12e9df due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.829 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173718.827769, df634e60-d790-4a39-adcc-c6345a12e9df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.829 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Resumed (Lifecycle Event)
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.833 2 DEBUG nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.834 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.840 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance spawned successfully.
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.841 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.865 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.875 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.882 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.883 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.884 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.884 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.885 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.886 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.926 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.927 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173718.8327749, df634e60-d790-4a39-adcc-c6345a12e9df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.928 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Started (Lifecycle Event)
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.959 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.965 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:08:38 np0005481065 nova_compute[260935]: 2025-10-11 09:08:38.972 2 DEBUG nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:08:39 np0005481065 nova_compute[260935]: 2025-10-11 09:08:39.004 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 05:08:39 np0005481065 nova_compute[260935]: 2025-10-11 09:08:39.037 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:39 np0005481065 nova_compute[260935]: 2025-10-11 09:08:39.037 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:39 np0005481065 nova_compute[260935]: 2025-10-11 09:08:39.038 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 05:08:39 np0005481065 nova_compute[260935]: 2025-10-11 09:08:39.101 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:39 np0005481065 nova_compute[260935]: 2025-10-11 09:08:39.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 11 05:08:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 11 05:08:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.371 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "df634e60-d790-4a39-adcc-c6345a12e9df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.371 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.372 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "df634e60-d790-4a39-adcc-c6345a12e9df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.372 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.372 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.374 2 INFO nova.compute.manager [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Terminating instance
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.375 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "refresh_cache-df634e60-d790-4a39-adcc-c6345a12e9df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.376 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquired lock "refresh_cache-df634e60-d790-4a39-adcc-c6345a12e9df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.376 2 DEBUG nova.network.neutron [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:08:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 533 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 5.4 MiB/s wr, 261 op/s
Oct 11 05:08:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 11 05:08:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 11 05:08:40 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 11 05:08:40 np0005481065 nova_compute[260935]: 2025-10-11 09:08:40.788 2 DEBUG nova.network.neutron [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:08:41 np0005481065 nova_compute[260935]: 2025-10-11 09:08:41.257 2 DEBUG nova.network.neutron [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:08:41 np0005481065 nova_compute[260935]: 2025-10-11 09:08:41.274 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Releasing lock "refresh_cache-df634e60-d790-4a39-adcc-c6345a12e9df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:08:41 np0005481065 nova_compute[260935]: 2025-10-11 09:08:41.275 2 DEBUG nova.compute.manager [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 05:08:41 np0005481065 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 11 05:08:41 np0005481065 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000065.scope: Consumed 3.511s CPU time.
Oct 11 05:08:41 np0005481065 systemd-machined[215705]: Machine qemu-117-instance-00000065 terminated.
Oct 11 05:08:41 np0005481065 nova_compute[260935]: 2025-10-11 09:08:41.501 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance destroyed successfully.
Oct 11 05:08:41 np0005481065 nova_compute[260935]: 2025-10-11 09:08:41.502 2 DEBUG nova.objects.instance [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'resources' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.010 2 INFO nova.virt.libvirt.driver [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting instance files /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.011 2 INFO nova.virt.libvirt.driver [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deletion of /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del complete
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.077 2 INFO nova.compute.manager [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.077 2 DEBUG oslo.service.loopingcall [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.078 2 DEBUG nova.compute.manager [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.078 2 DEBUG nova.network.neutron [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.261 2 DEBUG nova.network.neutron [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.277 2 DEBUG nova.network.neutron [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.292 2 INFO nova.compute.manager [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 0.21 seconds to deallocate network for instance.
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.340 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.341 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 490 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 40 KiB/s wr, 277 op/s
Oct 11 05:08:42 np0005481065 nova_compute[260935]: 2025-10-11 09:08:42.483 2 DEBUG oslo_concurrency.processutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017674053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.007 2 DEBUG oslo_concurrency.processutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.017 2 DEBUG nova.compute.provider_tree [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.043 2 DEBUG nova.scheduler.client.report [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.075 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.105 2 INFO nova.scheduler.client.report [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Deleted allocations for instance df634e60-d790-4a39-adcc-c6345a12e9df
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.212 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.704 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.705 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.705 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "1a90a348-da49-4ba3-8bae-5b426e9d7424-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.705 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.706 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.708 2 INFO nova.compute.manager [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Terminating instance
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.710 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "refresh_cache-1a90a348-da49-4ba3-8bae-5b426e9d7424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.710 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquired lock "refresh_cache-1a90a348-da49-4ba3-8bae-5b426e9d7424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:08:43 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.710 2 DEBUG nova.network.neutron [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:08:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:43.996 2 DEBUG nova.network.neutron [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:44.255 2 DEBUG nova.network.neutron [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:44.273 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Releasing lock "refresh_cache-1a90a348-da49-4ba3-8bae-5b426e9d7424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:44.274 2 DEBUG nova.compute.manager [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 05:08:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 486 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 38 KiB/s wr, 286 op/s
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:44.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:44 np0005481065 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 11 05:08:44 np0005481065 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000064.scope: Consumed 12.881s CPU time.
Oct 11 05:08:44 np0005481065 systemd-machined[215705]: Machine qemu-115-instance-00000064 terminated.
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:44.499 2 INFO nova.virt.libvirt.driver [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance destroyed successfully.
Oct 11 05:08:44 np0005481065 nova_compute[260935]: 2025-10-11 09:08:44.500 2 DEBUG nova.objects.instance [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'resources' on Instance uuid 1a90a348-da49-4ba3-8bae-5b426e9d7424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 11 05:08:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 11 05:08:44 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.081 2 INFO nova.virt.libvirt.driver [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deleting instance files /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424_del
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.083 2 INFO nova.virt.libvirt.driver [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deletion of /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424_del complete
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.173 2 INFO nova.compute.manager [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.173 2 DEBUG oslo.service.loopingcall [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.174 2 DEBUG nova.compute.manager [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.174 2 DEBUG nova.network.neutron [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.361 2 DEBUG nova.network.neutron [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.387 2 DEBUG nova.network.neutron [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.416 2 INFO nova.compute.manager [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 0.24 seconds to deallocate network for instance.
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.502 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.503 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 11 05:08:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 11 05:08:45 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 11 05:08:45 np0005481065 nova_compute[260935]: 2025-10-11 09:08:45.707 2 DEBUG oslo_concurrency.processutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571303775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.183 2 DEBUG oslo_concurrency.processutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.194 2 DEBUG nova.compute.provider_tree [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.218 2 DEBUG nova.scheduler.client.report [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.287 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.332 2 INFO nova.scheduler.client.report [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Deleted allocations for instance 1a90a348-da49-4ba3-8bae-5b426e9d7424
Oct 11 05:08:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 486 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 38 KiB/s wr, 286 op/s
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.437 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 11 05:08:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 11 05:08:46 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.760 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.761 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.761 2 INFO nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Shelving
Oct 11 05:08:46 np0005481065 nova_compute[260935]: 2025-10-11 09:08:46.793 2 DEBUG nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 05:08:47 np0005481065 podman[361202]: 2025-10-11 09:08:47.435929997 +0000 UTC m=+0.093884841 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:08:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 11 05:08:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 11 05:08:47 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 11 05:08:48 np0005481065 nova_compute[260935]: 2025-10-11 09:08:48.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 407 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 40 KiB/s wr, 292 op/s
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 11 05:08:48 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 11 05:08:49 np0005481065 kernel: tapa611854c-0a (unregistering): left promiscuous mode
Oct 11 05:08:49 np0005481065 NetworkManager[44960]: <info>  [1760173729.1776] device (tapa611854c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:08:49Z|00907|binding|INFO|Releasing lport a611854c-0a61-41b8-91ce-0c0f893aa54c from this chassis (sb_readonly=0)
Oct 11 05:08:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:08:49Z|00908|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c down in Southbound
Oct 11 05:08:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:08:49Z|00909|binding|INFO|Removing iface tapa611854c-0a ovn-installed in OVS
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.193 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.195 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.197 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e82eeb-74e2-4de3-9047-74da777fe1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e6570-a690-47d1-a88e-a5d0b3fde613]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.200 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace which is not needed anymore
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 11 05:08:49 np0005481065 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000062.scope: Consumed 20.462s CPU time.
Oct 11 05:08:49 np0005481065 systemd-machined[215705]: Machine qemu-112-instance-00000062 terminated.
Oct 11 05:08:49 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : haproxy version is 2.8.14-c23fe91
Oct 11 05:08:49 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : path to executable is /usr/sbin/haproxy
Oct 11 05:08:49 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [WARNING]  (357689) : Exiting Master process...
Oct 11 05:08:49 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [ALERT]    (357689) : Current worker (357691) exited with code 143 (Terminated)
Oct 11 05:08:49 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [WARNING]  (357689) : All workers exited. Exiting... (0)
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 systemd[1]: libpod-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748.scope: Deactivated successfully.
Oct 11 05:08:49 np0005481065 podman[361245]: 2025-10-11 09:08:49.405234534 +0000 UTC m=+0.063164545 container died 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748-userdata-shm.mount: Deactivated successfully.
Oct 11 05:08:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-046b66f7870015e0b115ddde4088213d8c930e4e0ddec3540ecd817899022603-merged.mount: Deactivated successfully.
Oct 11 05:08:49 np0005481065 podman[361245]: 2025-10-11 09:08:49.488790849 +0000 UTC m=+0.146720840 container cleanup 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:08:49 np0005481065 systemd[1]: libpod-conmon-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748.scope: Deactivated successfully.
Oct 11 05:08:49 np0005481065 podman[361285]: 2025-10-11 09:08:49.608409124 +0000 UTC m=+0.077869404 container remove 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.619 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e44a9158-cb14-48d5-b043-1741601ce9d1]: (4, ('Sat Oct 11 09:08:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748)\n80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748\nSat Oct 11 09:08:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748)\n80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.622 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0994e31-0d45-434e-8054-1e534a77e1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.623 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 kernel: tap14e82eeb-70: left promiscuous mode
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.668 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a9258649-eec0-4356-8d72-f68aabc7fcbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.703 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c43e6db-656f-47a6-b07e-77098619a562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b829d98d-2696-4433-865c-e2c1327c8e82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.736 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[488026de-1b84-4661-a920-fa3ca4dc846f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548093, 'reachable_time': 21473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361304, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:08:49 np0005481065 systemd[1]: run-netns-ovnmeta\x2d14e82eeb\x2d74e2\x2d4de3\x2d9047\x2d74da777fe1ec.mount: Deactivated successfully.
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.740 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 05:08:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.740 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d71945dd-d8f4-46c3-b59b-08cec0fff12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:08:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.823 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance shutdown successfully after 3 seconds.
Oct 11 05:08:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 11 05:08:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.833 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.834 2 DEBUG nova.objects.instance [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.856 2 DEBUG nova.compute.manager [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.857 2 DEBUG oslo_concurrency.lockutils [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.857 2 DEBUG oslo_concurrency.lockutils [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.858 2 DEBUG oslo_concurrency.lockutils [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.860 2 DEBUG nova.compute.manager [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:08:49 np0005481065 nova_compute[260935]: 2025-10-11 09:08:49.860 2 WARNING nova.compute.manager [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state shelving.
Oct 11 05:08:50 np0005481065 nova_compute[260935]: 2025-10-11 09:08:50.312 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Beginning cold snapshot process
Oct 11 05:08:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 407 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 226 KiB/s rd, 43 KiB/s wr, 316 op/s
Oct 11 05:08:50 np0005481065 nova_compute[260935]: 2025-10-11 09:08:50.513 2 DEBUG nova.virt.libvirt.imagebackend [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 05:08:50 np0005481065 nova_compute[260935]: 2025-10-11 09:08:50.994 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] creating snapshot(6ad053a64b5d4ae4adc6ab6154a5ffde) on rbd image(0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 05:08:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 11 05:08:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 11 05:08:51 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.927 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] cloning vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk@6ad053a64b5d4ae4adc6ab6154a5ffde to images/cccd490d-35ae-4410-9dbd-f711e72912ef clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.994 2 DEBUG nova.compute.manager [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.995 2 DEBUG oslo_concurrency.lockutils [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.996 2 DEBUG oslo_concurrency.lockutils [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.996 2 DEBUG oslo_concurrency.lockutils [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.997 2 DEBUG nova.compute.manager [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:08:51 np0005481065 nova_compute[260935]: 2025-10-11 09:08:51.998 2 WARNING nova.compute.manager [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state shelving_image_uploading.
Oct 11 05:08:52 np0005481065 nova_compute[260935]: 2025-10-11 09:08:52.085 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] flattening images/cccd490d-35ae-4410-9dbd-f711e72912ef flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 05:08:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 420 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.9 MiB/s wr, 184 op/s
Oct 11 05:08:52 np0005481065 nova_compute[260935]: 2025-10-11 09:08:52.495 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] removing snapshot(6ad053a64b5d4ae4adc6ab6154a5ffde) on rbd image(0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 05:08:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 11 05:08:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 11 05:08:52 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 11 05:08:52 np0005481065 nova_compute[260935]: 2025-10-11 09:08:52.897 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] creating snapshot(snap) on rbd image(cccd490d-35ae-4410-9dbd-f711e72912ef) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:08:53 np0005481065 nova_compute[260935]: 2025-10-11 09:08:53.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 11 05:08:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 11 05:08:53 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 444 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.5 MiB/s wr, 248 op/s
Oct 11 05:08:54 np0005481065 nova_compute[260935]: 2025-10-11 09:08:54.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:54 np0005481065 podman[361448]: 2025-10-11 09:08:54.769715854 +0000 UTC m=+0.067757216 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:08:54
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log']
Oct 11 05:08:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:08:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.239 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Snapshot image upload complete#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.240 2 DEBUG nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.249 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.249 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.300 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.311 2 INFO nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Shelve offloading#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.323 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.324 2 DEBUG nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.328 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.328 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.329 2 DEBUG nova.network.neutron [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:08:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 444 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.398 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.399 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.411 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.411 2 INFO nova.compute.claims [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.499 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173721.4977875, df634e60-d790-4a39-adcc-c6345a12e9df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.500 2 INFO nova.compute.manager [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.520 2 DEBUG nova.compute.manager [None req-8f22c9a1-c474-4201-b756-5d4aec567cf0 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:08:56 np0005481065 nova_compute[260935]: 2025-10-11 09:08:56.616 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:08:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150851700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.136 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.142 2 DEBUG nova.compute.provider_tree [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.160 2 DEBUG nova.scheduler.client.report [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.183 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.183 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.239 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.263 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.289 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.421 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.423 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.424 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating image(s)#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.459 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.494 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.523 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.528 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.622 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.623 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.624 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.624 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.655 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.659 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:57 np0005481065 nova_compute[260935]: 2025-10-11 09:08:57.972 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.047 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] resizing rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.130 2 DEBUG nova.network.neutron [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.137 2 DEBUG nova.objects.instance [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'migration_context' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.154 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.155 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Ensure instance console log exists: /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.155 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.155 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.156 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.157 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.159 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.165 2 WARNING nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.169 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.170 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.173 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.174 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.174 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.174 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.177 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.180 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 486 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 263 op/s
Oct 11 05:08:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116364323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.711 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.750 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:58 np0005481065 nova_compute[260935]: 2025-10-11 09:08:58.762 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:08:58 np0005481065 podman[361676]: 2025-10-11 09:08:58.813202882 +0000 UTC m=+0.112090870 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:08:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:08:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 11 05:08:58 np0005481065 podman[361679]: 2025-10-11 09:08:58.826297256 +0000 UTC m=+0.124367941 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:08:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 11 05:08:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 11 05:08:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:08:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842180243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.207 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.211 2 DEBUG nova.objects.instance [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.238 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <uuid>b3697ef4-2e67-4d7f-ad14-ae6ab4055454</uuid>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <name>instance-00000066</name>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV254Test-server-186124796</nova:name>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:08:58</nova:creationTime>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:user uuid="18ec7b6a713d43e9880a338abc3182b1">tempest-ServerShowV254Test-827203150-project-member</nova:user>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <nova:project uuid="31a5173de7e7470ab2566a226b8ba071">tempest-ServerShowV254Test-827203150</nova:project>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <entry name="serial">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <entry name="uuid">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log" append="off"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:08:59 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:08:59 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:08:59 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:08:59 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.334 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.335 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.335 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Using config drive#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.375 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.467 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.469 2 DEBUG nova.objects.instance [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'resources' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.498 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173724.4976034, 1a90a348-da49-4ba3-8bae-5b426e9d7424 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.498 2 INFO nova.compute.manager [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.500 2 DEBUG nova.virt.libvirt.vif [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member',shelved_at='2025-10-11T09:08:56.240286',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cccd490d-35ae-4410-9dbd-f711e72912ef'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:08:50Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.501 2 DEBUG nova.network.os_vif_util [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.502 2 DEBUG nova.network.os_vif_util [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.502 2 DEBUG os_vif [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa611854c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.509 2 INFO os_vif [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.538 2 DEBUG nova.compute.manager [None req-ad9a494b-d208-4dba-a96e-80e4511b93ca - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.691 2 DEBUG nova.compute.manager [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.691 2 DEBUG nova.compute.manager [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing instance network info cache due to event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.691 2 DEBUG oslo_concurrency.lockutils [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.692 2 DEBUG oslo_concurrency.lockutils [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.692 2 DEBUG nova.network.neutron [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.748 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating config drive at /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.759 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dlw3xny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.924 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting instance files /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.925 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deletion of /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del complete
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.929 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dlw3xny" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.957 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:08:59 np0005481065 nova_compute[260935]: 2025-10-11 09:08:59.961 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.052 2 INFO nova.scheduler.client.report [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Deleted allocations for instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.097 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.097 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.153 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.155 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting local config drive /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config because it was imported into RBD.
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.223 2 DEBUG oslo_concurrency.processutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:00 np0005481065 systemd-machined[215705]: New machine qemu-118-instance-00000066.
Oct 11 05:09:00 np0005481065 systemd[1]: Started Virtual Machine qemu-118-instance-00000066.
Oct 11 05:09:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 486 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 78 op/s
Oct 11 05:09:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091932551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.695 2 DEBUG oslo_concurrency.processutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.706 2 DEBUG nova.compute.provider_tree [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.735 2 DEBUG nova.scheduler.client.report [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.774 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:00 np0005481065 nova_compute[260935]: 2025-10-11 09:09:00.839 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.227 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173741.2266312, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.227 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Resumed (Lifecycle Event)
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.229 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.230 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.234 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance spawned successfully.
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.234 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.264 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.265 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.266 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.267 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.267 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.268 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.304 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.304 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173741.2296371, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.305 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Started (Lifecycle Event)
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.333 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.338 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.343 2 INFO nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 3.92 seconds to spawn the instance on the hypervisor.
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.344 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.356 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.416 2 INFO nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 5.05 seconds to build instance.
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.485 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.491 2 DEBUG nova.network.neutron [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updated VIF entry in instance network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.492 2 DEBUG nova.network.neutron [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": null, "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapa611854c-0a", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:09:01 np0005481065 nova_compute[260935]: 2025-10-11 09:09:01.515 2 DEBUG oslo_concurrency.lockutils [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:09:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 447 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.7 MiB/s wr, 173 op/s
Oct 11 05:09:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.434 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173729.433575, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.435 2 INFO nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Stopped (Lifecycle Event)
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.459 2 INFO nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Rebuilding instance
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.465 2 DEBUG nova.compute.manager [None req-abed385a-bb63-4264-a041-df792a746ff2 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:09:04 np0005481065 nova_compute[260935]: 2025-10-11 09:09:04.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014232236317314579 of space, bias 1.0, pg target 0.42696708951943735 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:09:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.117 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.146 2 DEBUG nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.229 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'pci_requests' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.243 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.259 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'resources' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.276 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'migration_context' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.290 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 05:09:05 np0005481065 nova_compute[260935]: 2025-10-11 09:09:05.297 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 05:09:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.527 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.528 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.529 2 INFO nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Unshelving
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.627 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.628 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.636 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.659 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.675 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.676 2 INFO nova.compute.claims [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.724 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:09:06 np0005481065 nova_compute[260935]: 2025-10-11 09:09:06.898 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109422311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:07 np0005481065 nova_compute[260935]: 2025-10-11 09:09:07.402 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:07 np0005481065 nova_compute[260935]: 2025-10-11 09:09:07.408 2 DEBUG nova.compute.provider_tree [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:09:07 np0005481065 nova_compute[260935]: 2025-10-11 09:09:07.442 2 DEBUG nova.scheduler.client.report [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:09:07 np0005481065 nova_compute[260935]: 2025-10-11 09:09:07.476 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:07 np0005481065 nova_compute[260935]: 2025-10-11 09:09:07.722 2 INFO nova.network.neutron [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating port a611854c-0a61-41b8-91ce-0c0f893aa54c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 11 05:09:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 11 05:09:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.077 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.080 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.080 2 DEBUG nova.network.neutron [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.235 2 DEBUG nova.compute.manager [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.237 2 DEBUG nova.compute.manager [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing instance network info cache due to event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.237 2 DEBUG oslo_concurrency.lockutils [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:09:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 455e5a33-e8ff-4ed4-aa65-a8d8439c4d2f does not exist
Oct 11 05:09:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f197b979-3563-4e4e-9efb-7c89cf4ebdcc does not exist
Oct 11 05:09:09 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7aa4292e-ed65-4aa6-99bb-19c91cf1dbde does not exist
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:09:09 np0005481065 nova_compute[260935]: 2025-10-11 09:09:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:09:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.366967865 +0000 UTC m=+0.067168719 container create 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:09:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 132 op/s
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.332353427 +0000 UTC m=+0.032554301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:09:10 np0005481065 systemd[1]: Started libpod-conmon-6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355.scope.
Oct 11 05:09:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.499697384 +0000 UTC m=+0.199898308 container init 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.512649284 +0000 UTC m=+0.212850148 container start 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.51705708 +0000 UTC m=+0.217258004 container attach 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:09:10 np0005481065 stupefied_cerf[362231]: 167 167
Oct 11 05:09:10 np0005481065 systemd[1]: libpod-6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355.scope: Deactivated successfully.
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.524052869 +0000 UTC m=+0.224253763 container died 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:09:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cde6f1aef68fb15bff8ae86b68b74f91061d8d4c96fbdeb877a05ba90480cb7b-merged.mount: Deactivated successfully.
Oct 11 05:09:10 np0005481065 podman[362214]: 2025-10-11 09:09:10.59169122 +0000 UTC m=+0.291892084 container remove 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:09:10 np0005481065 systemd[1]: libpod-conmon-6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355.scope: Deactivated successfully.
Oct 11 05:09:10 np0005481065 podman[362256]: 2025-10-11 09:09:10.822943062 +0000 UTC m=+0.053484638 container create 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:09:10 np0005481065 systemd[1]: Started libpod-conmon-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope.
Oct 11 05:09:10 np0005481065 podman[362256]: 2025-10-11 09:09:10.795723885 +0000 UTC m=+0.026265451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:09:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:10 np0005481065 podman[362256]: 2025-10-11 09:09:10.927391733 +0000 UTC m=+0.157933369 container init 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:09:10 np0005481065 podman[362256]: 2025-10-11 09:09:10.937957355 +0000 UTC m=+0.168498891 container start 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:09:10 np0005481065 podman[362256]: 2025-10-11 09:09:10.941921078 +0000 UTC m=+0.172462714 container attach 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.890 2 DEBUG nova.network.neutron [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.923 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.925 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.926 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating image(s)#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.964 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.979 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.980 2 DEBUG oslo_concurrency.lockutils [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:11 np0005481065 nova_compute[260935]: 2025-10-11 09:09:11.981 2 DEBUG nova.network.neutron [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.028 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.060 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.063 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "fde2d6dcc32e12f0cf33b731a0ab6c7e3dd9c214" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.064 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "fde2d6dcc32e12f0cf33b731a0ab6c7e3dd9c214" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:12 np0005481065 bold_robinson[362272]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:09:12 np0005481065 bold_robinson[362272]: --> relative data size: 1.0
Oct 11 05:09:12 np0005481065 bold_robinson[362272]: --> All data devices are unavailable
Oct 11 05:09:12 np0005481065 systemd[1]: libpod-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope: Deactivated successfully.
Oct 11 05:09:12 np0005481065 systemd[1]: libpod-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope: Consumed 1.147s CPU time.
Oct 11 05:09:12 np0005481065 podman[362355]: 2025-10-11 09:09:12.27169892 +0000 UTC m=+0.033261621 container died 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:09:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209-merged.mount: Deactivated successfully.
Oct 11 05:09:12 np0005481065 podman[362355]: 2025-10-11 09:09:12.354763311 +0000 UTC m=+0.116325952 container remove 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:09:12 np0005481065 systemd[1]: libpod-conmon-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope: Deactivated successfully.
Oct 11 05:09:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 471 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.532 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/cccd490d-35ae-4410-9dbd-f711e72912ef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/cccd490d-35ae-4410-9dbd-f711e72912ef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.639 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/cccd490d-35ae-4410-9dbd-f711e72912ef/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.640 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] cloning images/cccd490d-35ae-4410-9dbd-f711e72912ef@snap to None/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.791 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "fde2d6dcc32e12f0cf33b731a0ab6c7e3dd9c214" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.911 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:12 np0005481065 nova_compute[260935]: 2025-10-11 09:09:12.965 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] flattening vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.277997156 +0000 UTC m=+0.080433777 container create 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.306 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Image rbd:vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.307 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.307 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Ensure instance console log exists: /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.308 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.308 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.308 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.310 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start _get_guest_xml network_info=[{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:08:46Z,direct_url=<?>,disk_format='raw',id=cccd490d-35ae-4410-9dbd-f711e72912ef,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-581429551-shelved',owner='21f163e616ee4917a580701d466f7dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:08:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.318 2 WARNING nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.324 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.324 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.328 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.329 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.329 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.329 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:08:46Z,direct_url=<?>,disk_format='raw',id=cccd490d-35ae-4410-9dbd-f711e72912ef,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-581429551-shelved',owner='21f163e616ee4917a580701d466f7dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:08:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.238096037 +0000 UTC m=+0.040532698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.330 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.330 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.330 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.331 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.331 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.333 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.334 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:13 np0005481065 systemd[1]: Started libpod-conmon-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope.
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.354 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.395962423 +0000 UTC m=+0.198399054 container init 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.402363626 +0000 UTC m=+0.204800237 container start 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:09:13 np0005481065 practical_lovelace[362684]: 167 167
Oct 11 05:09:13 np0005481065 systemd[1]: libpod-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope: Deactivated successfully.
Oct 11 05:09:13 np0005481065 conmon[362684]: conmon 2be89783b3eee08660ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope/container/memory.events
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.408525542 +0000 UTC m=+0.210962173 container attach 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.409128599 +0000 UTC m=+0.211565210 container died 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:09:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-feeaf0159f00d574e79f4bde67bd9547b11c6dad3bdc9a2c0af26860dfd879ef-merged.mount: Deactivated successfully.
Oct 11 05:09:13 np0005481065 podman[362668]: 2025-10-11 09:09:13.450030506 +0000 UTC m=+0.252467117 container remove 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:09:13 np0005481065 systemd[1]: libpod-conmon-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope: Deactivated successfully.
Oct 11 05:09:13 np0005481065 podman[362728]: 2025-10-11 09:09:13.645115195 +0000 UTC m=+0.045515060 container create ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.671 2 DEBUG nova.network.neutron [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updated VIF entry in instance network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.672 2 DEBUG nova.network.neutron [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:13 np0005481065 systemd[1]: Started libpod-conmon-ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6.scope.
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.698 2 DEBUG oslo_concurrency.lockutils [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:13 np0005481065 podman[362728]: 2025-10-11 09:09:13.62565837 +0000 UTC m=+0.026058315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:09:13 np0005481065 podman[362728]: 2025-10-11 09:09:13.727043274 +0000 UTC m=+0.127443139 container init ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:09:13 np0005481065 podman[362728]: 2025-10-11 09:09:13.734504847 +0000 UTC m=+0.134904712 container start ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:09:13 np0005481065 podman[362728]: 2025-10-11 09:09:13.738051379 +0000 UTC m=+0.138451244 container attach ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336213904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.827 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.834519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753834585, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1984, "num_deletes": 257, "total_data_size": 3050823, "memory_usage": 3114016, "flush_reason": "Manual Compaction"}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753849743, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 3007356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40052, "largest_seqno": 42035, "table_properties": {"data_size": 2998141, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18191, "raw_average_key_size": 19, "raw_value_size": 2979716, "raw_average_value_size": 3193, "num_data_blocks": 253, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173568, "oldest_key_time": 1760173568, "file_creation_time": 1760173753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15255 microseconds, and 6086 cpu microseconds.
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.849778) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 3007356 bytes OK
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.849795) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851119) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851131) EVENT_LOG_v1 {"time_micros": 1760173753851127, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3042282, prev total WAL file size 3042282, number of live WAL files 2.
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851852) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2936KB)], [89(9355KB)]
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753851931, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12587837, "oldest_snapshot_seqno": -1}
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.854 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:13 np0005481065 nova_compute[260935]: 2025-10-11 09:09:13.866 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6780 keys, 11873936 bytes, temperature: kUnknown
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753909482, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11873936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11824589, "index_size": 31315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 172466, "raw_average_key_size": 25, "raw_value_size": 11699001, "raw_average_value_size": 1725, "num_data_blocks": 1253, "num_entries": 6780, "num_filter_entries": 6780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.909994) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11873936 bytes
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.912644) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.4 rd, 205.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(8.1) write-amplify(3.9) OK, records in: 7305, records dropped: 525 output_compression: NoCompression
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.912663) EVENT_LOG_v1 {"time_micros": 1760173753912654, "job": 52, "event": "compaction_finished", "compaction_time_micros": 57899, "compaction_time_cpu_micros": 23079, "output_level": 6, "num_output_files": 1, "total_output_size": 11873936, "num_input_records": 7305, "num_output_records": 6780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753913366, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753915095, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:09:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550355202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.300 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.303 2 DEBUG nova.virt.libvirt.vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='cccd490d-35ae-4410-9dbd-f711e72912ef',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',i
mage_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member',shelved_at='2025-10-11T09:08:56.240286',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cccd490d-35ae-4410-9dbd-f711e72912ef'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:06Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.304 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.306 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.310 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.338 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <uuid>0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</uuid>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <name>instance-00000062</name>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServersNegativeTestJSON-server-581429551</nova:name>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:09:13</nova:creationTime>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:user uuid="6b8d9d5ab01d48ae81a09f922875ea3e">tempest-ServersNegativeTestJSON-414353187-project-member</nova:user>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:project uuid="21f163e616ee4917a580701d466f7dc9">tempest-ServersNegativeTestJSON-414353187</nova:project>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="cccd490d-35ae-4410-9dbd-f711e72912ef"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <nova:port uuid="a611854c-0a61-41b8-91ce-0c0f893aa54c">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <entry name="serial">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <entry name="uuid">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:9d:da:b0"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <target dev="tapa611854c-0a"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log" append="off"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <input type="keyboard" bus="usb"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:09:14 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:09:14 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:09:14 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:09:14 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.339 2 DEBUG nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Preparing to wait for external event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.340 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.340 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.341 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.342 2 DEBUG nova.virt.libvirt.vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='cccd490d-35ae-4410-9dbd-f711e72912ef',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model=
'virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member',shelved_at='2025-10-11T09:08:56.240286',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cccd490d-35ae-4410-9dbd-f711e72912ef'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:06Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.342 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.343 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.344 2 DEBUG os_vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa611854c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa611854c-0a, col_values=(('external_ids', {'iface-id': 'a611854c-0a61-41b8-91ce-0c0f893aa54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:da:b0', 'vm-uuid': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:14 np0005481065 NetworkManager[44960]: <info>  [1760173754.3969] manager: (tapa611854c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.404 2 INFO os_vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')#033[00m
Oct 11 05:09:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 475 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 92 op/s
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]: {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:    "0": [
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:        {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "devices": [
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "/dev/loop3"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            ],
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_name": "ceph_lv0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_size": "21470642176",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "name": "ceph_lv0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "tags": {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cluster_name": "ceph",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.crush_device_class": "",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.encrypted": "0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osd_id": "0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.type": "block",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.vdo": "0"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            },
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "type": "block",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "vg_name": "ceph_vg0"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:        }
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:    ],
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:    "1": [
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:        {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "devices": [
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "/dev/loop4"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            ],
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_name": "ceph_lv1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_size": "21470642176",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "name": "ceph_lv1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "tags": {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cluster_name": "ceph",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.crush_device_class": "",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.encrypted": "0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osd_id": "1",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.type": "block",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.vdo": "0"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            },
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "type": "block",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "vg_name": "ceph_vg1"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:        }
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:    ],
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:    "2": [
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:        {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "devices": [
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "/dev/loop5"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            ],
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_name": "ceph_lv2",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_size": "21470642176",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "name": "ceph_lv2",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "tags": {
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.cluster_name": "ceph",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.crush_device_class": "",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.encrypted": "0",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osd_id": "2",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.type": "block",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:                "ceph.vdo": "0"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            },
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "type": "block",
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:            "vg_name": "ceph_vg2"
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:        }
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]:    ]
Oct 11 05:09:14 np0005481065 peaceful_cannon[362743]: }
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.499 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.500 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.501 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No VIF found with MAC fa:16:3e:9d:da:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.502 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Using config drive#033[00m
Oct 11 05:09:14 np0005481065 systemd[1]: libpod-ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6.scope: Deactivated successfully.
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.540 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:14 np0005481065 podman[362805]: 2025-10-11 09:09:14.569888865 +0000 UTC m=+0.037513912 container died ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.570 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d-merged.mount: Deactivated successfully.
Oct 11 05:09:14 np0005481065 nova_compute[260935]: 2025-10-11 09:09:14.638 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'keypairs' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:14 np0005481065 podman[362805]: 2025-10-11 09:09:14.642290012 +0000 UTC m=+0.109915009 container remove ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:09:14 np0005481065 systemd[1]: libpod-conmon-ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6.scope: Deactivated successfully.
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.199 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating config drive at /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.204 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfy3d3dx1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.205 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.344 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfy3d3dx1" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.370 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.374 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.400694302 +0000 UTC m=+0.051264994 container create 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.429 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 05:09:15 np0005481065 systemd[1]: Started libpod-conmon-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope.
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.374722381 +0000 UTC m=+0.025293093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:09:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.503075305 +0000 UTC m=+0.153646007 container init 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.514704537 +0000 UTC m=+0.165275239 container start 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.519418852 +0000 UTC m=+0.169989584 container attach 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:09:15 np0005481065 upbeat_panini[363008]: 167 167
Oct 11 05:09:15 np0005481065 systemd[1]: libpod-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope: Deactivated successfully.
Oct 11 05:09:15 np0005481065 conmon[363008]: conmon 49272595ba8c8913b04d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope/container/memory.events
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.526555735 +0000 UTC m=+0.177126457 container died 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:09:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4a17367d388fd32ca1ecc0e3ccac09a756fbb372a442b3296aa2a964e0b55872-merged.mount: Deactivated successfully.
Oct 11 05:09:15 np0005481065 podman[362973]: 2025-10-11 09:09:15.578133238 +0000 UTC m=+0.228703960 container remove 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:09:15 np0005481065 systemd[1]: libpod-conmon-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope: Deactivated successfully.
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.621 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.622 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting local config drive /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config because it was imported into RBD.
Oct 11 05:09:15 np0005481065 kernel: tapa611854c-0a: entered promiscuous mode
Oct 11 05:09:15 np0005481065 NetworkManager[44960]: <info>  [1760173755.6852] manager: (tapa611854c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Oct 11 05:09:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:15Z|00910|binding|INFO|Claiming lport a611854c-0a61-41b8-91ce-0c0f893aa54c for this chassis.
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:09:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:15Z|00911|binding|INFO|a611854c-0a61-41b8-91ce-0c0f893aa54c: Claiming fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.724 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.726 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.727 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:09:15 np0005481065 systemd-udevd[363056]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5853c69-27ce-4b9e-8f0f-c1f2da85facb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.744 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e82eeb-71 in ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.746 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e82eeb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.746 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24b15751-84f6-4798-a456-1505c24249eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.747 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54dfdb7b-07b4-4a82-830e-74cc9b4f8d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 systemd-machined[215705]: New machine qemu-119-instance-00000062.
Oct 11 05:09:15 np0005481065 NetworkManager[44960]: <info>  [1760173755.7690] device (tapa611854c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:09:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:15Z|00912|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c ovn-installed in OVS
Oct 11 05:09:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:15Z|00913|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c up in Southbound
Oct 11 05:09:15 np0005481065 NetworkManager[44960]: <info>  [1760173755.7714] device (tapa611854c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.758 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[04defa4e-0482-4e6a-a831-a6690d476d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 nova_compute[260935]: 2025-10-11 09:09:15.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.774 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82b071dd-98e7-4837-b8b4-827270918ccf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 systemd[1]: Started Virtual Machine qemu-119-instance-00000062.
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.810 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc1017e-c9b7-4e47-ae4c-36420dffc097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 systemd-udevd[363070]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.816 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcc40a3-8518-4a6c-995b-29c5dabfbfbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 NetworkManager[44960]: <info>  [1760173755.8195] manager: (tap14e82eeb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Oct 11 05:09:15 np0005481065 podman[363067]: 2025-10-11 09:09:15.851970955 +0000 UTC m=+0.068579229 container create 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.855 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3a628f1f-d59f-4bee-9dfd-f4557191ff79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.858 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[39b79d13-d0a4-49b6-ab5d-80c9912ed42f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 NetworkManager[44960]: <info>  [1760173755.8895] device (tap14e82eeb-70): carrier: link connected
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.897 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0af448db-f4e8-486e-8dbc-2a9f52dfadcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 systemd[1]: Started libpod-conmon-55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20.scope.
Oct 11 05:09:15 np0005481065 podman[363067]: 2025-10-11 09:09:15.825336125 +0000 UTC m=+0.041944409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1741b652-7cc6-4b2c-b77f-3824e1797fe8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560059, 'reachable_time': 15853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363113, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.942 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c05a238-958f-4f6c-8da0-d9eec5f1bc45]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:56f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560059, 'tstamp': 560059}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363117, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:09:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dffba4-bbe5-4467-8917-c623fd59020e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560059, 'reachable_time': 15853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363118, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:15 np0005481065 podman[363067]: 2025-10-11 09:09:15.966640618 +0000 UTC m=+0.183248902 container init 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:09:15 np0005481065 podman[363067]: 2025-10-11 09:09:15.976484559 +0000 UTC m=+0.193092823 container start 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:09:15 np0005481065 podman[363067]: 2025-10-11 09:09:15.986950288 +0000 UTC m=+0.203558642 container attach 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0c07c5-0355-44ab-b167-437d6b91cebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc96da74-2dc7-4bdd-b7f1-061f0caa298d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:16 np0005481065 NetworkManager[44960]: <info>  [1760173756.0941] manager: (tap14e82eeb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct 11 05:09:16 np0005481065 kernel: tap14e82eeb-70: entered promiscuous mode
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.098 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:16Z|00914|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.104 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6241346-7f55-467e-951b-fc0eef07bd3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.106 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:09:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.107 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'env', 'PROCESS_TAG=haproxy-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e82eeb-74e2-4de3-9047-74da777fe1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 475 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173756.564318, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.565 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:09:16 np0005481065 podman[363194]: 2025-10-11 09:09:16.579896455 +0000 UTC m=+0.065987205 container create 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.591 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.596 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173756.5682006, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.596 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.622 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:16 np0005481065 systemd[1]: Started libpod-conmon-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901.scope.
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.625 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:16 np0005481065 podman[363194]: 2025-10-11 09:09:16.550759073 +0000 UTC m=+0.036849843 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:09:16 np0005481065 nova_compute[260935]: 2025-10-11 09:09:16.647 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:09:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bddb7740329636355fcc11f4dc0b2142ccfefe44ada4320f628f0155b9b1ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:16 np0005481065 podman[363194]: 2025-10-11 09:09:16.694031603 +0000 UTC m=+0.180122373 container init 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:09:16 np0005481065 podman[363194]: 2025-10-11 09:09:16.699338845 +0000 UTC m=+0.185429595 container start 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:09:16 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : New worker (363222) forked
Oct 11 05:09:16 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : Loading success.
Oct 11 05:09:16 np0005481065 zealous_greider[363114]: {
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "osd_id": 2,
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "type": "bluestore"
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:    },
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "osd_id": 0,
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "type": "bluestore"
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:    },
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "osd_id": 1,
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:        "type": "bluestore"
Oct 11 05:09:16 np0005481065 zealous_greider[363114]:    }
Oct 11 05:09:16 np0005481065 zealous_greider[363114]: }
Oct 11 05:09:16 np0005481065 systemd[1]: libpod-55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20.scope: Deactivated successfully.
Oct 11 05:09:16 np0005481065 podman[363067]: 2025-10-11 09:09:16.964912405 +0000 UTC m=+1.181520729 container died 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 11 05:09:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e-merged.mount: Deactivated successfully.
Oct 11 05:09:17 np0005481065 podman[363067]: 2025-10-11 09:09:17.038427794 +0000 UTC m=+1.255036048 container remove 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 05:09:17 np0005481065 systemd[1]: libpod-conmon-55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20.scope: Deactivated successfully.
Oct 11 05:09:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:09:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:09:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:09:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:09:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 11fef533-3242-4416-80bf-147785e6f072 does not exist
Oct 11 05:09:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b7d29961-5b95-4599-8a55-080db646447b does not exist
Oct 11 05:09:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:09:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:09:17 np0005481065 nova_compute[260935]: 2025-10-11 09:09:17.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:09:17 np0005481065 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 11 05:09:17 np0005481065 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000066.scope: Consumed 12.875s CPU time.
Oct 11 05:09:17 np0005481065 systemd-machined[215705]: Machine qemu-118-instance-00000066 terminated.
Oct 11 05:09:17 np0005481065 podman[363316]: 2025-10-11 09:09:17.80125352 +0000 UTC m=+0.096149885 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:09:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 565 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.0 MiB/s wr, 164 op/s
Oct 11 05:09:18 np0005481065 nova_compute[260935]: 2025-10-11 09:09:18.449 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance shutdown successfully after 13 seconds.#033[00m
Oct 11 05:09:18 np0005481065 nova_compute[260935]: 2025-10-11 09:09:18.457 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance destroyed successfully.#033[00m
Oct 11 05:09:18 np0005481065 nova_compute[260935]: 2025-10-11 09:09:18.463 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance destroyed successfully.#033[00m
Oct 11 05:09:18 np0005481065 nova_compute[260935]: 2025-10-11 09:09:18.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:09:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:18 np0005481065 nova_compute[260935]: 2025-10-11 09:09:18.902 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting instance files /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del#033[00m
Oct 11 05:09:18 np0005481065 nova_compute[260935]: 2025-10-11 09:09:18.903 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deletion of /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del complete#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.101 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.102 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating image(s)#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.133 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.170 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.205 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.210 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.317 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.319 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.320 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.321 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.360 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.367 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.708 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.762 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] resizing rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.855 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.856 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Ensure instance console log exists: /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.857 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.858 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.858 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.861 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.866 2 WARNING nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.876 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.877 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.881 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.882 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.882 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.883 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.883 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.884 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.884 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.884 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.885 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.885 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.885 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.886 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.886 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.886 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.887 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:19 np0005481065 nova_compute[260935]: 2025-10-11 09:09:19.907 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:09:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208600249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.388 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 565 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 149 op/s
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.421 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.426 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:09:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:09:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039969589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.934 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.935 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.940 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.943 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <uuid>b3697ef4-2e67-4d7f-ad14-ae6ab4055454</uuid>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <name>instance-00000066</name>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:name>tempest-ServerShowV254Test-server-186124796</nova:name>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:09:19</nova:creationTime>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:user uuid="18ec7b6a713d43e9880a338abc3182b1">tempest-ServerShowV254Test-827203150-project-member</nova:user>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <nova:project uuid="31a5173de7e7470ab2566a226b8ba071">tempest-ServerShowV254Test-827203150</nova:project>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <entry name="serial">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <entry name="uuid">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log" append="off"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:09:20 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:09:20 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:09:20 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:09:20 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:09:20 np0005481065 nova_compute[260935]: 2025-10-11 09:09:20.961 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.034 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.034 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.035 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Using config drive
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.068 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.107 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.124 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.125 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.134 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.135 2 INFO nova.compute.claims [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.231 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.262 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.263 2 DEBUG nova.compute.provider_tree [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.286 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.308 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating config drive at /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.316 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfqtx9cos execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.376 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.485 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfqtx9cos" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.522 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.527 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.633 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.749 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:21 np0005481065 nova_compute[260935]: 2025-10-11 09:09:21.750 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting local config drive /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config because it was imported into RBD.
Oct 11 05:09:21 np0005481065 systemd-machined[215705]: New machine qemu-120-instance-00000066.
Oct 11 05:09:21 np0005481065 systemd[1]: Started Virtual Machine qemu-120-instance-00000066.
Oct 11 05:09:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046967867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.177 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.188 2 DEBUG nova.compute.provider_tree [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.212 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.249 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.251 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.334 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.334 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.364 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.400 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:09:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 540 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.0 MiB/s wr, 204 op/s
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.550 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.552 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.552 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Creating image(s)
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.589 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.627 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.664 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.670 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.729 2 DEBUG nova.policy [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.734 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.736 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for b3697ef4-2e67-4d7f-ad14-ae6ab4055454 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.736 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173762.7353868, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.737 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Resumed (Lifecycle Event)
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.739 2 DEBUG nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.740 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.747 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance spawned successfully.
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.748 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.769 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.777 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.779 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.782 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.782 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.782 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.783 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.783 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.784 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.788 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.788 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.789 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.819 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.823 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.865 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.866 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173762.7391198, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.866 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Started (Lifecycle Event)
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.905 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.908 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.931 2 DEBUG nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:09:22 np0005481065 nova_compute[260935]: 2025-10-11 09:09:22.932 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.026 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.027 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.027 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.101 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.132 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.204 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.311 2 DEBUG nova.objects.instance [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.338 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.339 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Ensure instance console log exists: /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.339 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.340 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.340 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:23.589 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:09:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:23.593 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.739 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.764 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.764 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.764 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.765 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.765 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.806 2 DEBUG nova.compute.manager [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG oslo_concurrency.lockutils [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG oslo_concurrency.lockutils [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG oslo_concurrency.lockutils [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG nova.compute.manager [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Processing event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.808 2 DEBUG nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.811 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173763.8115203, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.812 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.814 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.818 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance spawned successfully.#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:23 np0005481065 nova_compute[260935]: 2025-10-11 09:09:23.857 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:09:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583032190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.249 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.380 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.381 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.399 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.400 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.407 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 533 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 183 op/s
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.629 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.630 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2800MB free_disk=59.752681732177734GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.630 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.631 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.632 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.632 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.632 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.633 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.633 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.634 2 INFO nova.compute.manager [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Terminating instance#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.635 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "refresh_cache-b3697ef4-2e67-4d7f-ad14-ae6ab4055454" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.635 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquired lock "refresh_cache-b3697ef4-2e67-4d7f-ad14-ae6ab4055454" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.635 2 DEBUG nova.network.neutron [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.637 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Successfully created port: 38159e48-a8a0-4daa-95d3-19be7346d2cb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:09:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 11 05:09:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 11 05:09:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b3697ef4-2e67-4d7f-ad14-ae6ab4055454 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1e10d958-167c-4caf-8ff0-67f50e39ec3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:09:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.894 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:24 np0005481065 nova_compute[260935]: 2025-10-11 09:09:24.948 2 DEBUG nova.network.neutron [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.137 2 DEBUG nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.221 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239512463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.342 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.346 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.390 2 DEBUG nova.network.neutron [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.392 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.415 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Releasing lock "refresh_cache-b3697ef4-2e67-4d7f-ad14-ae6ab4055454" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.416 2 DEBUG nova.compute.manager [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.420 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.420 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:25 np0005481065 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 11 05:09:25 np0005481065 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000066.scope: Consumed 3.476s CPU time.
Oct 11 05:09:25 np0005481065 systemd-machined[215705]: Machine qemu-120-instance-00000066 terminated.
Oct 11 05:09:25 np0005481065 podman[363933]: 2025-10-11 09:09:25.542736185 +0000 UTC m=+0.065464410 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.649 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance destroyed successfully.#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.650 2 DEBUG nova.objects.instance [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'resources' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.738 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Successfully updated port: 38159e48-a8a0-4daa-95d3-19be7346d2cb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.763 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.763 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.763 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.866 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.866 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.867 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.867 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.867 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.868 2 WARNING nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state None.#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.868 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.869 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing instance network info cache due to event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:09:25 np0005481065 nova_compute[260935]: 2025-10-11 09:09:25.869 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.010 2 INFO nova.virt.libvirt.driver [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting instance files /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.011 2 INFO nova.virt.libvirt.driver [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deletion of /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del complete#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.219 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.232 2 INFO nova.compute.manager [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.233 2 DEBUG oslo.service.loopingcall [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.233 2 DEBUG nova.compute.manager [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.234 2 DEBUG nova.network.neutron [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:09:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 533 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.9 MiB/s wr, 210 op/s
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.433 2 DEBUG nova.network.neutron [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.456 2 DEBUG nova.network.neutron [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.476 2 INFO nova.compute.manager [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 0.24 seconds to deallocate network for instance.#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.534 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.535 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:09:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/99429675' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:09:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:09:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/99429675' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:09:26 np0005481065 nova_compute[260935]: 2025-10-11 09:09:26.713 2 DEBUG oslo_concurrency.processutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612448378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.175 2 DEBUG oslo_concurrency.processutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.182 2 DEBUG nova.compute.provider_tree [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.205 2 DEBUG nova.scheduler.client.report [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.231 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.272 2 INFO nova.scheduler.client.report [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Deleted allocations for instance b3697ef4-2e67-4d7f-ad14-ae6ab4055454#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.343 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.384 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.384 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.567 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.596 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.596 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance network_info: |[{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.597 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.597 2 DEBUG nova.network.neutron [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.605 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start _get_guest_xml network_info=[{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.607 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.607 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.608 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.614 2 WARNING nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.635 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.636 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.641 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.642 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.642 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.643 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.644 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.644 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.644 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.645 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.645 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.645 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.646 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.646 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.646 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.647 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:09:27 np0005481065 nova_compute[260935]: 2025-10-11 09:09:27.651 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118813889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.151 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.179 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.185 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 453 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 323 op/s
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1556775564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.640 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.643 2 DEBUG nova.virt.libvirt.vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:09:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1369305797',display_name='tempest-TestNetworkAdvancedServerOps-server-1369305797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1369305797',id=103,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3xSdxOkU3R8spG6Ar0C2b6V4kJp9/fLj3qIKqVjHr+W8RWdFE2Y04thwsEaHG/Pjnq/p0OokkgP0kv89kAu1w3IytsdPugHW10i6+zWfhjEg7DZ4e+0dQC7E2fiV/TCA==',key_name='tempest-TestNetworkAdvancedServerOps-1897339183',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-0mc689ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:22Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=1e10d958-167c-4caf-8ff0-67f50e39ec3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.644 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.646 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.648 2 DEBUG nova.objects.instance [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.674 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <uuid>1e10d958-167c-4caf-8ff0-67f50e39ec3c</uuid>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <name>instance-00000067</name>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1369305797</nova:name>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:09:27</nova:creationTime>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <nova:port uuid="38159e48-a8a0-4daa-95d3-19be7346d2cb">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <entry name="serial">1e10d958-167c-4caf-8ff0-67f50e39ec3c</entry>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <entry name="uuid">1e10d958-167c-4caf-8ff0-67f50e39ec3c</entry>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:7c:70:56"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <target dev="tap38159e48-a8"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/console.log" append="off"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:09:28 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:09:28 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:09:28 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:09:28 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.676 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Preparing to wait for external event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.677 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.678 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.678 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.679 2 DEBUG nova.virt.libvirt.vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:09:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1369305797',display_name='tempest-TestNetworkAdvancedServerOps-server-1369305797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1369305797',id=103,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3xSdxOkU3R8spG6Ar0C2b6V4kJp9/fLj3qIKqVjHr+W8RWdFE2Y04thwsEaHG/Pjnq/p0OokkgP0kv89kAu1w3IytsdPugHW10i6+zWfhjEg7DZ4e+0dQC7E2fiV/TCA==',key_name='tempest-TestNetworkAdvancedServerOps-1897339183',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-0mc689ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:22Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=1e10d958-167c-4caf-8ff0-67f50e39ec3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.680 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.681 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.682 2 DEBUG os_vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38159e48-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap38159e48-a8, col_values=(('external_ids', {'iface-id': '38159e48-a8a0-4daa-95d3-19be7346d2cb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:70:56', 'vm-uuid': '1e10d958-167c-4caf-8ff0-67f50e39ec3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:28 np0005481065 NetworkManager[44960]: <info>  [1760173768.6962] manager: (tap38159e48-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.706 2 INFO os_vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8')#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.769 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.769 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.770 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:7c:70:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.770 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Using config drive#033[00m
Oct 11 05:09:28 np0005481065 nova_compute[260935]: 2025-10-11 09:09:28.798 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 11 05:09:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.385 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Creating config drive at /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.393 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd_3esig execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.457 2 DEBUG nova.network.neutron [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updated VIF entry in instance network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.460 2 DEBUG nova.network.neutron [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.489 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.513 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.540 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.540 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.558 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd_3esig" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.631 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.638 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.745167) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769745201, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 454, "num_deletes": 252, "total_data_size": 329063, "memory_usage": 337464, "flush_reason": "Manual Compaction"}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769749893, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 325692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42036, "largest_seqno": 42489, "table_properties": {"data_size": 323047, "index_size": 681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6718, "raw_average_key_size": 19, "raw_value_size": 317592, "raw_average_value_size": 917, "num_data_blocks": 29, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173754, "oldest_key_time": 1760173754, "file_creation_time": 1760173769, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4754 microseconds, and 1656 cpu microseconds.
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.749920) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 325692 bytes OK
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.749937) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.751700) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.751716) EVENT_LOG_v1 {"time_micros": 1760173769751711, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.751734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 326281, prev total WAL file size 326281, number of live WAL files 2.
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.753312) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(318KB)], [92(11MB)]
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769753357, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12199628, "oldest_snapshot_seqno": -1}
Oct 11 05:09:29 np0005481065 podman[364101]: 2025-10-11 09:09:29.781616192 +0000 UTC m=+0.076062513 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6608 keys, 10509838 bytes, temperature: kUnknown
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769820710, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10509838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10462900, "index_size": 29290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169630, "raw_average_key_size": 25, "raw_value_size": 10341598, "raw_average_value_size": 1565, "num_data_blocks": 1160, "num_entries": 6608, "num_filter_entries": 6608, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173769, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.820956) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10509838 bytes
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.822178) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 155.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(69.7) write-amplify(32.3) OK, records in: 7126, records dropped: 518 output_compression: NoCompression
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.822201) EVENT_LOG_v1 {"time_micros": 1760173769822191, "job": 54, "event": "compaction_finished", "compaction_time_micros": 67415, "compaction_time_cpu_micros": 30092, "output_level": 6, "num_output_files": 1, "total_output_size": 10509838, "num_input_records": 7126, "num_output_records": 6608, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769822391, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769825017, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.753255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:29 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:09:29 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:29 np0005481065 podman[364102]: 2025-10-11 09:09:29.83443058 +0000 UTC m=+0.128730856 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.908 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.909 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deleting local config drive /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config because it was imported into RBD.#033[00m
Oct 11 05:09:29 np0005481065 kernel: tap38159e48-a8: entered promiscuous mode
Oct 11 05:09:29 np0005481065 NetworkManager[44960]: <info>  [1760173769.9775] manager: (tap38159e48-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct 11 05:09:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:29Z|00915|binding|INFO|Claiming lport 38159e48-a8a0-4daa-95d3-19be7346d2cb for this chassis.
Oct 11 05:09:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:29Z|00916|binding|INFO|38159e48-a8a0-4daa-95d3-19be7346d2cb: Claiming fa:16:3e:7c:70:56 10.100.0.11
Oct 11 05:09:29 np0005481065 nova_compute[260935]: 2025-10-11 09:09:29.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:29.994 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:70:56 10.100.0.11'], port_security=['fa:16:3e:7c:70:56 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1e10d958-167c-4caf-8ff0-67f50e39ec3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fe33852-f012-4e6c-8c5b-0eacc3c5c3e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3216995-d47a-4bef-959f-e3536ac62432, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=38159e48-a8a0-4daa-95d3-19be7346d2cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:29.996 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 38159e48-a8a0-4daa-95d3-19be7346d2cb in datapath 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e bound to our chassis#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:29.998 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.020 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b03c8bd-8858-42d3-84c8-0e487fd99536]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.022 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e150cf5-f1 in ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.024 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e150cf5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4370182-81d8-4c22-b747-898b3adaa540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[823b9b21-12e4-475b-a89c-c0cccaf18175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 systemd-udevd[364179]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.044 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a66719db-111b-4d9b-81c5-468e8f0a0d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 NetworkManager[44960]: <info>  [1760173770.0538] device (tap38159e48-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:09:30 np0005481065 NetworkManager[44960]: <info>  [1760173770.0562] device (tap38159e48-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:09:30 np0005481065 systemd-machined[215705]: New machine qemu-121-instance-00000067.
Oct 11 05:09:30 np0005481065 systemd[1]: Started Virtual Machine qemu-121-instance-00000067.
Oct 11 05:09:30 np0005481065 nova_compute[260935]: 2025-10-11 09:09:30.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.077 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[133dfbb3-1cd9-4ecf-9fd4-cd39a018691f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:30Z|00917|binding|INFO|Setting lport 38159e48-a8a0-4daa-95d3-19be7346d2cb ovn-installed in OVS
Oct 11 05:09:30 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:30Z|00918|binding|INFO|Setting lport 38159e48-a8a0-4daa-95d3-19be7346d2cb up in Southbound
Oct 11 05:09:30 np0005481065 nova_compute[260935]: 2025-10-11 09:09:30.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.119 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f175af7-f1ee-404e-9c0f-b831e42023dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 systemd-udevd[364184]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:09:30 np0005481065 NetworkManager[44960]: <info>  [1760173770.1283] manager: (tap3e150cf5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.127 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0adc8786-eff7-446b-9456-39f77a535990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.181 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdd44ec-1b69-4be0-bc2d-c22c7a9b1dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.183 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2575c78a-e142-49c5-9d79-e27d82f8637f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 NetworkManager[44960]: <info>  [1760173770.2144] device (tap3e150cf5-f0): carrier: link connected
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.221 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7e59cd65-0a7e-44bd-8485-0252c26e5c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bc6c46-5db8-4a2b-91d3-6a1bb6b1b429]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e150cf5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:ac:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561491, 'reachable_time': 20501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364212, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f09c1306-2f39-4671-87b4-21aba4c0405e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:ac30'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561491, 'tstamp': 561491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364213, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.282 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a53b751b-3187-407b-9469-b280392acd96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e150cf5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:ac:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561491, 'reachable_time': 20501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364214, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.320 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd16a658-e57b-418f-9107-777a31a30943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e5f857-5105-4851-b9a9-c564c493304d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e150cf5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.397 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e150cf5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:30 np0005481065 nova_compute[260935]: 2025-10-11 09:09:30.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 NetworkManager[44960]: <info>  [1760173770.4007] manager: (tap3e150cf5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct 11 05:09:30 np0005481065 kernel: tap3e150cf5-f0: entered promiscuous mode
Oct 11 05:09:30 np0005481065 nova_compute[260935]: 2025-10-11 09:09:30.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.407 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e150cf5-f0, col_values=(('external_ids', {'iface-id': 'd058d94f-2c43-4ada-876d-4d89d5169811'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:30 np0005481065 nova_compute[260935]: 2025-10-11 09:09:30.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:30Z|00919|binding|INFO|Releasing lport d058d94f-2c43-4ada-876d-4d89d5169811 from this chassis (sb_readonly=0)
Oct 11 05:09:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 453 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 320 op/s
Oct 11 05:09:30 np0005481065 nova_compute[260935]: 2025-10-11 09:09:30.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.451 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.452 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a676ecd8-96b4-4a20-86f9-a9b44f15571e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.453 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.pid.haproxy
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:09:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.454 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'env', 'PROCESS_TAG=haproxy-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:09:30 np0005481065 podman[364288]: 2025-10-11 09:09:30.942154762 +0000 UTC m=+0.082104585 container create 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:09:30 np0005481065 podman[364288]: 2025-10-11 09:09:30.893017269 +0000 UTC m=+0.032967122 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:09:31 np0005481065 systemd[1]: Started libpod-conmon-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6.scope.
Oct 11 05:09:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6608af26482039c5816a02e8765c53d58b01bc30687269e83d429b9f655fb4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:31 np0005481065 podman[364288]: 2025-10-11 09:09:31.078976948 +0000 UTC m=+0.218926831 container init 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.088 2 DEBUG nova.compute.manager [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:31 np0005481065 podman[364288]: 2025-10-11 09:09:31.090410584 +0000 UTC m=+0.230360407 container start 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.090 2 DEBUG oslo_concurrency.lockutils [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.091 2 DEBUG oslo_concurrency.lockutils [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.092 2 DEBUG oslo_concurrency.lockutils [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.092 2 DEBUG nova.compute.manager [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Processing event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:09:31 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : New worker (364309) forked
Oct 11 05:09:31 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : Loading success.
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.210 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.211 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.2108932, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.211 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.219 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.223 2 INFO nova.virt.libvirt.driver [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance spawned successfully.#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.224 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.240 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.251 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.257 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.258 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.259 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.260 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.260 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.261 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.275 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.275 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.2154548, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.276 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.325 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.330 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.2175663, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.330 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.341 2 INFO nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 8.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.342 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.352 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.356 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.388 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.424 2 INFO nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 10.35 seconds to build instance.#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.442 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:31.595 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.822 2 DEBUG nova.objects.instance [None req-d17933ce-c67d-44bc-b91c-be59c84776d2 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.865 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.8650405, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.868 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.896 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:31 np0005481065 nova_compute[260935]: 2025-10-11 09:09:31.945 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 11 05:09:32 np0005481065 kernel: tapa611854c-0a (unregistering): left promiscuous mode
Oct 11 05:09:32 np0005481065 NetworkManager[44960]: <info>  [1760173772.2763] device (tapa611854c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:32Z|00920|binding|INFO|Releasing lport a611854c-0a61-41b8-91ce-0c0f893aa54c from this chassis (sb_readonly=0)
Oct 11 05:09:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:32Z|00921|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c down in Southbound
Oct 11 05:09:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:32Z|00922|binding|INFO|Removing iface tapa611854c-0a ovn-installed in OVS
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.327 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.329 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.333 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e82eeb-74e2-4de3-9047-74da777fe1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.334 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1ebc11-bb90-4a9d-a16c-9e4481c38b94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.335 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace which is not needed anymore#033[00m
Oct 11 05:09:32 np0005481065 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 11 05:09:32 np0005481065 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000062.scope: Consumed 9.168s CPU time.
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 systemd-machined[215705]: Machine qemu-119-instance-00000062 terminated.
Oct 11 05:09:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 348 op/s
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.474 2 DEBUG nova.compute.manager [None req-d17933ce-c67d-44bc-b91c-be59c84776d2 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:32 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : haproxy version is 2.8.14-c23fe91
Oct 11 05:09:32 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : path to executable is /usr/sbin/haproxy
Oct 11 05:09:32 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [WARNING]  (363218) : Exiting Master process...
Oct 11 05:09:32 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [WARNING]  (363218) : Exiting Master process...
Oct 11 05:09:32 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [ALERT]    (363218) : Current worker (363222) exited with code 143 (Terminated)
Oct 11 05:09:32 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [WARNING]  (363218) : All workers exited. Exiting... (0)
Oct 11 05:09:32 np0005481065 systemd[1]: libpod-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901.scope: Deactivated successfully.
Oct 11 05:09:32 np0005481065 podman[364351]: 2025-10-11 09:09:32.560641254 +0000 UTC m=+0.062694101 container died 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:09:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901-userdata-shm.mount: Deactivated successfully.
Oct 11 05:09:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-56bddb7740329636355fcc11f4dc0b2142ccfefe44ada4320f628f0155b9b1ac-merged.mount: Deactivated successfully.
Oct 11 05:09:32 np0005481065 podman[364351]: 2025-10-11 09:09:32.621033008 +0000 UTC m=+0.123085845 container cleanup 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:09:32 np0005481065 systemd[1]: libpod-conmon-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901.scope: Deactivated successfully.
Oct 11 05:09:32 np0005481065 podman[364380]: 2025-10-11 09:09:32.716349159 +0000 UTC m=+0.061028033 container remove 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c98660e0-b8ed-40dd-bcb1-a903bbcb46e2]: (4, ('Sat Oct 11 09:09:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901)\n4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901\nSat Oct 11 09:09:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901)\n4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.729 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e58284f-ab38-4daa-afa0-3f61e349aada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.730 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:32 np0005481065 kernel: tap14e82eeb-70: left promiscuous mode
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.740 2 DEBUG nova.compute.manager [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.740 2 DEBUG oslo_concurrency.lockutils [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.741 2 DEBUG oslo_concurrency.lockutils [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.741 2 DEBUG oslo_concurrency.lockutils [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.741 2 DEBUG nova.compute.manager [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.742 2 WARNING nova.compute.manager [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:09:32 np0005481065 nova_compute[260935]: 2025-10-11 09:09:32.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.765 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9679889-e34a-4dbb-ac8d-45d6d12dbbde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.799 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f092f369-8f49-4338-83f9-ceaa609b9369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80287d0d-a34e-4649-9052-472e9b37d2ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31f18c00-3a9f-4249-9150-ba4067ad4d9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560050, 'reachable_time': 40627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364397, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:32 np0005481065 systemd[1]: run-netns-ovnmeta\x2d14e82eeb\x2d74e2\x2d4de3\x2d9047\x2d74da777fe1ec.mount: Deactivated successfully.
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.830 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:09:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.830 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c170e288-5ffb-4202-9b76-66056c5e3683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.268 2 DEBUG nova.compute.manager [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.268 2 DEBUG oslo_concurrency.lockutils [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.270 2 DEBUG oslo_concurrency.lockutils [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.270 2 DEBUG oslo_concurrency.lockutils [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.271 2 DEBUG nova.compute.manager [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] No waiting events found dispatching network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.272 2 WARNING nova.compute.manager [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received unexpected event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb for instance with vm_state active and task_state None.#033[00m
Oct 11 05:09:33 np0005481065 nova_compute[260935]: 2025-10-11 09:09:33.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 306 op/s
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.879 2 DEBUG nova.compute.manager [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.879 2 DEBUG oslo_concurrency.lockutils [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.880 2 DEBUG oslo_concurrency.lockutils [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.880 2 DEBUG oslo_concurrency.lockutils [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.881 2 DEBUG nova.compute.manager [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:34 np0005481065 nova_compute[260935]: 2025-10-11 09:09:34.881 2 WARNING nova.compute.manager [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.198 2 INFO nova.compute.manager [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Resuming#033[00m
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.200 2 DEBUG nova.objects.instance [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'flavor' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.246 2 DEBUG oslo_concurrency.lockutils [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.247 2 DEBUG oslo_concurrency.lockutils [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.249 2 DEBUG nova.network.neutron [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:09:35 np0005481065 NetworkManager[44960]: <info>  [1760173775.2702] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 11 05:09:35 np0005481065 NetworkManager[44960]: <info>  [1760173775.2713] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Oct 11 05:09:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:35Z|00923|binding|INFO|Releasing lport d058d94f-2c43-4ada-876d-4d89d5169811 from this chassis (sb_readonly=0)
Oct 11 05:09:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:35Z|00924|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:09:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:35Z|00925|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:35Z|00926|binding|INFO|Releasing lport d058d94f-2c43-4ada-876d-4d89d5169811 from this chassis (sb_readonly=0)
Oct 11 05:09:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:35Z|00927|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:09:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:35Z|00928|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:09:35 np0005481065 nova_compute[260935]: 2025-10-11 09:09:35.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.1 MiB/s wr, 298 op/s
Oct 11 05:09:37 np0005481065 nova_compute[260935]: 2025-10-11 09:09:37.008 2 DEBUG nova.compute.manager [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:37 np0005481065 nova_compute[260935]: 2025-10-11 09:09:37.008 2 DEBUG nova.compute.manager [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing instance network info cache due to event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:09:37 np0005481065 nova_compute[260935]: 2025-10-11 09:09:37.009 2 DEBUG oslo_concurrency.lockutils [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:37 np0005481065 nova_compute[260935]: 2025-10-11 09:09:37.009 2 DEBUG oslo_concurrency.lockutils [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:37 np0005481065 nova_compute[260935]: 2025-10-11 09:09:37.009 2 DEBUG nova.network.neutron [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.308 2 DEBUG nova.network.neutron [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.331 2 DEBUG oslo_concurrency.lockutils [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.338 2 DEBUG nova.virt.libvirt.vif [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:09:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:09:32Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.338 2 DEBUG nova.network.os_vif_util [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.339 2 DEBUG nova.network.os_vif_util [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.340 2 DEBUG os_vif [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa611854c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa611854c-0a, col_values=(('external_ids', {'iface-id': 'a611854c-0a61-41b8-91ce-0c0f893aa54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:da:b0', 'vm-uuid': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.347 2 INFO os_vif [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.380 2 DEBUG nova.objects.instance [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 11 05:09:38 np0005481065 NetworkManager[44960]: <info>  [1760173778.4727] manager: (tapa611854c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct 11 05:09:38 np0005481065 kernel: tapa611854c-0a: entered promiscuous mode
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:38Z|00929|binding|INFO|Claiming lport a611854c-0a61-41b8-91ce-0c0f893aa54c for this chassis.
Oct 11 05:09:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:38Z|00930|binding|INFO|a611854c-0a61-41b8-91ce-0c0f893aa54c: Claiming fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 05:09:38 np0005481065 systemd-udevd[364411]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.525 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.526 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.528 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec#033[00m
Oct 11 05:09:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:38Z|00931|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c ovn-installed in OVS
Oct 11 05:09:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:38Z|00932|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c up in Southbound
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.545 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00349b22-8700-474a-a0ae-949e8a8d00e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.545 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e82eeb-71 in ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:09:38 np0005481065 NetworkManager[44960]: <info>  [1760173778.5475] device (tapa611854c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.548 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e82eeb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.548 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad478bb-574e-4c9b-9152-f28e7b9d2ab3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.549 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4eb88f-ab8b-4be5-8a74-ae0b08938165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 NetworkManager[44960]: <info>  [1760173778.5505] device (tapa611854c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:09:38 np0005481065 systemd-machined[215705]: New machine qemu-122-instance-00000062.
Oct 11 05:09:38 np0005481065 systemd[1]: Started Virtual Machine qemu-122-instance-00000062.
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.574 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[be09fb2f-645b-4bc5-9679-c09dd6449fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.606 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[db481803-cc16-4b48-86eb-efeed8dd4390]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.665 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f9862c8f-d330-4afb-91b4-f24f862408c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 NetworkManager[44960]: <info>  [1760173778.6727] manager: (tap14e82eeb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Oct 11 05:09:38 np0005481065 systemd-udevd[364418]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.670 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36040262-2c0e-4d8d-b8d4-5ccba852d843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.723 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[230149dd-dc63-416a-b9b4-740fd2bffeb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.726 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[37c0a4ef-cd04-415f-adf8-f335cbc9e86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 NetworkManager[44960]: <info>  [1760173778.7634] device (tap14e82eeb-70): carrier: link connected
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.773 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ee603908-d14a-48e9-ac81-5166aa9a84eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.808 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61bbf444-9f7d-43a6-abe6-3262f05445ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562346, 'reachable_time': 21818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364447, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b187d892-3b92-460b-bab5-70df9ca539b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:56f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562346, 'tstamp': 562346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364448, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.852 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1001a8-7de8-40a4-9052-5eee6d93f68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562346, 'reachable_time': 21818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364449, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2f0919-5fb3-43bd-bb6a-0dc60abd0835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd4a2da-d881-4494-bbd0-e6e2c58bcc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.990 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.990 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.991 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 NetworkManager[44960]: <info>  [1760173778.9946] manager: (tap14e82eeb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct 11 05:09:38 np0005481065 kernel: tap14e82eeb-70: entered promiscuous mode
Oct 11 05:09:38 np0005481065 nova_compute[260935]: 2025-10-11 09:09:38.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.999 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:39Z|00933|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.032 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc8415b-365d-493e-b2a7-b6338c613a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.035 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:09:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.037 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'env', 'PROCESS_TAG=haproxy-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e82eeb-74e2-4de3-9047-74da777fe1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.297 2 DEBUG nova.compute.manager [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.298 2 DEBUG oslo_concurrency.lockutils [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.298 2 DEBUG oslo_concurrency.lockutils [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.299 2 DEBUG oslo_concurrency.lockutils [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.300 2 DEBUG nova.compute.manager [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.300 2 WARNING nova.compute.manager [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state suspended and task_state resuming.#033[00m
Oct 11 05:09:39 np0005481065 podman[364521]: 2025-10-11 09:09:39.470327704 +0000 UTC m=+0.069353041 container create 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:09:39 np0005481065 systemd[1]: Started libpod-conmon-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope.
Oct 11 05:09:39 np0005481065 podman[364521]: 2025-10-11 09:09:39.424686271 +0000 UTC m=+0.023711618 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:09:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:09:39 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fd75aade0337d682b14081c1eccf706df4989bd7a9205a59aaf2a4e8230fe7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.576 2 DEBUG nova.network.neutron [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updated VIF entry in instance network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.577 2 DEBUG nova.network.neutron [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:39 np0005481065 podman[364521]: 2025-10-11 09:09:39.582717402 +0000 UTC m=+0.181742769 container init 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:09:39 np0005481065 podman[364521]: 2025-10-11 09:09:39.59033812 +0000 UTC m=+0.189363457 container start 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.602 2 DEBUG oslo_concurrency.lockutils [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:39 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : New worker (364542) forked
Oct 11 05:09:39 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : Loading success.
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.846 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.846 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173779.8458936, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.846 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.880 2 DEBUG nova.compute.manager [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.881 2 DEBUG nova.objects.instance [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.892 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.897 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.901 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance running successfully.#033[00m
Oct 11 05:09:39 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.904 2 DEBUG nova.virt.libvirt.guest [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.904 2 DEBUG nova.compute.manager [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.916 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.917 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173779.853199, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.918 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.955 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:39 np0005481065 nova_compute[260935]: 2025-10-11 09:09:39.958 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 11 05:09:40 np0005481065 nova_compute[260935]: 2025-10-11 09:09:40.646 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173765.644571, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:40 np0005481065 nova_compute[260935]: 2025-10-11 09:09:40.648 2 INFO nova.compute.manager [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:09:40 np0005481065 nova_compute[260935]: 2025-10-11 09:09:40.684 2 DEBUG nova.compute.manager [None req-b17f70c3-9df2-4665-b1ce-1cd415db256b - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:41 np0005481065 nova_compute[260935]: 2025-10-11 09:09:41.611 2 DEBUG nova.compute.manager [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:41 np0005481065 nova_compute[260935]: 2025-10-11 09:09:41.612 2 DEBUG oslo_concurrency.lockutils [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:41 np0005481065 nova_compute[260935]: 2025-10-11 09:09:41.612 2 DEBUG oslo_concurrency.lockutils [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:41 np0005481065 nova_compute[260935]: 2025-10-11 09:09:41.613 2 DEBUG oslo_concurrency.lockutils [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:41 np0005481065 nova_compute[260935]: 2025-10-11 09:09:41.613 2 DEBUG nova.compute.manager [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:41 np0005481065 nova_compute[260935]: 2025-10-11 09:09:41.614 2 WARNING nova.compute.manager [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state None.#033[00m
Oct 11 05:09:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 466 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 93 op/s
Oct 11 05:09:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:42Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:70:56 10.100.0.11
Oct 11 05:09:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:42Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:70:56 10.100.0.11
Oct 11 05:09:43 np0005481065 nova_compute[260935]: 2025-10-11 09:09:43.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:44Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 05:09:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 475 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 11 05:09:44 np0005481065 nova_compute[260935]: 2025-10-11 09:09:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 475 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 70 op/s
Oct 11 05:09:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 486 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 05:09:48 np0005481065 nova_compute[260935]: 2025-10-11 09:09:48.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:48 np0005481065 podman[364551]: 2025-10-11 09:09:48.832740522 +0000 UTC m=+0.121128439 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 05:09:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:48 np0005481065 nova_compute[260935]: 2025-10-11 09:09:48.947 2 INFO nova.compute.manager [None req-6d729b98-7574-4a3e-a10a-615181061c68 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Get console output#033[00m
Oct 11 05:09:48 np0005481065 nova_compute[260935]: 2025-10-11 09:09:48.953 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.398 2 INFO nova.compute.manager [None req-3831b164-722a-4dc1-8161-39e57de17105 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Pausing#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.399 2 DEBUG nova.objects.instance [None req-3831b164-722a-4dc1-8161-39e57de17105 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.444 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173789.4443357, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.445 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.447 2 DEBUG nova.compute.manager [None req-3831b164-722a-4dc1-8161-39e57de17105 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.486 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.490 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.523 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 11 05:09:49 np0005481065 nova_compute[260935]: 2025-10-11 09:09:49.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 486 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 857 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 05:09:51 np0005481065 nova_compute[260935]: 2025-10-11 09:09:51.806 2 INFO nova.compute.manager [None req-89ed1b6e-9584-48cb-8eb0-3924a268bc62 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Get console output#033[00m
Oct 11 05:09:51 np0005481065 nova_compute[260935]: 2025-10-11 09:09:51.814 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:09:51 np0005481065 nova_compute[260935]: 2025-10-11 09:09:51.977 2 INFO nova.compute.manager [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Unpausing#033[00m
Oct 11 05:09:51 np0005481065 nova_compute[260935]: 2025-10-11 09:09:51.979 2 DEBUG nova.objects.instance [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.018 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173792.0181463, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.019 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:09:52 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.023 2 DEBUG nova.virt.libvirt.guest [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.024 2 DEBUG nova.compute.manager [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.056 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.063 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:09:52 np0005481065 nova_compute[260935]: 2025-10-11 09:09:52.112 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct 11 05:09:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 857 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Oct 11 05:09:53 np0005481065 nova_compute[260935]: 2025-10-11 09:09:53.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.852735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793852772, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 425, "num_deletes": 250, "total_data_size": 351427, "memory_usage": 360144, "flush_reason": "Manual Compaction"}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793858500, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 269199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42490, "largest_seqno": 42914, "table_properties": {"data_size": 266814, "index_size": 485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6353, "raw_average_key_size": 20, "raw_value_size": 262099, "raw_average_value_size": 834, "num_data_blocks": 22, "num_entries": 314, "num_filter_entries": 314, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173770, "oldest_key_time": 1760173770, "file_creation_time": 1760173793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5800 microseconds, and 1668 cpu microseconds.
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.858536) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 269199 bytes OK
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.858551) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861058) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861073) EVENT_LOG_v1 {"time_micros": 1760173793861068, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 348802, prev total WAL file size 348802, number of live WAL files 2.
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(262KB)], [95(10MB)]
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793861713, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10779037, "oldest_snapshot_seqno": -1}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6420 keys, 7568892 bytes, temperature: kUnknown
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793916994, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7568892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7527814, "index_size": 23944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 165894, "raw_average_key_size": 25, "raw_value_size": 7414319, "raw_average_value_size": 1154, "num_data_blocks": 937, "num_entries": 6420, "num_filter_entries": 6420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.917311) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7568892 bytes
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.918808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.8 rd, 136.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(68.2) write-amplify(28.1) OK, records in: 6922, records dropped: 502 output_compression: NoCompression
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.918866) EVENT_LOG_v1 {"time_micros": 1760173793918852, "job": 56, "event": "compaction_finished", "compaction_time_micros": 55345, "compaction_time_cpu_micros": 37485, "output_level": 6, "num_output_files": 1, "total_output_size": 7568892, "num_input_records": 6922, "num_output_records": 6420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793919169, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793922636, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:53 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 916 KiB/s wr, 91 op/s
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:09:54 np0005481065 nova_compute[260935]: 2025-10-11 09:09:54.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:09:54
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control']
Oct 11 05:09:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:09:55 np0005481065 nova_compute[260935]: 2025-10-11 09:09:55.072 2 INFO nova.compute.manager [None req-b1212c07-58c0-4534-942f-b1eca044470f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Get console output#033[00m
Oct 11 05:09:55 np0005481065 nova_compute[260935]: 2025-10-11 09:09:55.078 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:09:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:09:55 np0005481065 podman[364571]: 2025-10-11 09:09:55.798329447 +0000 UTC m=+0.096254069 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.145 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.146 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.146 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.147 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.147 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.149 2 INFO nova.compute.manager [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Terminating instance#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.150 2 DEBUG nova.compute.manager [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:09:56 np0005481065 kernel: tapa611854c-0a (unregistering): left promiscuous mode
Oct 11 05:09:56 np0005481065 NetworkManager[44960]: <info>  [1760173796.2183] device (tapa611854c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:09:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:56Z|00934|binding|INFO|Releasing lport a611854c-0a61-41b8-91ce-0c0f893aa54c from this chassis (sb_readonly=0)
Oct 11 05:09:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:56Z|00935|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c down in Southbound
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:56Z|00936|binding|INFO|Removing iface tapa611854c-0a ovn-installed in OVS
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.245 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.248 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.252 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e82eeb-74e2-4de3-9047-74da777fe1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bad36584-7c0a-4230-ad57-7ba66a3ced90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.255 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace which is not needed anymore#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 11 05:09:56 np0005481065 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Consumed 5.647s CPU time.
Oct 11 05:09:56 np0005481065 systemd-machined[215705]: Machine qemu-122-instance-00000062 terminated.
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.388 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.393 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.394 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.394 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.395 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.397 2 INFO nova.compute.manager [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Terminating instance#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.399 2 DEBUG nova.compute.manager [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.409 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.411 2 DEBUG nova.objects.instance [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'resources' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 653 KiB/s rd, 129 KiB/s wr, 70 op/s
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.432 2 DEBUG nova.virt.libvirt.vif [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:09:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:09:39Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.433 2 DEBUG nova.network.os_vif_util [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.434 2 DEBUG nova.network.os_vif_util [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.435 2 DEBUG os_vif [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa611854c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.449 2 INFO os_vif [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')#033[00m
Oct 11 05:09:56 np0005481065 kernel: tap38159e48-a8 (unregistering): left promiscuous mode
Oct 11 05:09:56 np0005481065 NetworkManager[44960]: <info>  [1760173796.4658] device (tap38159e48-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : haproxy version is 2.8.14-c23fe91
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : path to executable is /usr/sbin/haproxy
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [WARNING]  (364540) : Exiting Master process...
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [ALERT]    (364540) : Current worker (364542) exited with code 143 (Terminated)
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [WARNING]  (364540) : All workers exited. Exiting... (0)
Oct 11 05:09:56 np0005481065 systemd[1]: libpod-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope: Deactivated successfully.
Oct 11 05:09:56 np0005481065 conmon[364536]: conmon 064e59958f6ed1df696e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope/container/memory.events
Oct 11 05:09:56 np0005481065 podman[364620]: 2025-10-11 09:09:56.482800666 +0000 UTC m=+0.078884883 container died 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:09:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:56Z|00937|binding|INFO|Releasing lport 38159e48-a8a0-4daa-95d3-19be7346d2cb from this chassis (sb_readonly=0)
Oct 11 05:09:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:56Z|00938|binding|INFO|Setting lport 38159e48-a8a0-4daa-95d3-19be7346d2cb down in Southbound
Oct 11 05:09:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:09:56Z|00939|binding|INFO|Removing iface tap38159e48-a8 ovn-installed in OVS
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.487 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:70:56 10.100.0.11'], port_security=['fa:16:3e:7c:70:56 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1e10d958-167c-4caf-8ff0-67f50e39ec3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fe33852-f012-4e6c-8c5b-0eacc3c5c3e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3216995-d47a-4bef-959f-e3536ac62432, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=38159e48-a8a0-4daa-95d3-19be7346d2cb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b-userdata-shm.mount: Deactivated successfully.
Oct 11 05:09:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-45fd75aade0337d682b14081c1eccf706df4989bd7a9205a59aaf2a4e8230fe7-merged.mount: Deactivated successfully.
Oct 11 05:09:56 np0005481065 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 11 05:09:56 np0005481065 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000067.scope: Consumed 13.333s CPU time.
Oct 11 05:09:56 np0005481065 systemd-machined[215705]: Machine qemu-121-instance-00000067 terminated.
Oct 11 05:09:56 np0005481065 podman[364620]: 2025-10-11 09:09:56.533217765 +0000 UTC m=+0.129301982 container cleanup 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:09:56 np0005481065 systemd[1]: libpod-conmon-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope: Deactivated successfully.
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.610 2 DEBUG nova.compute.manager [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.612 2 DEBUG nova.compute.manager [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing instance network info cache due to event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.612 2 DEBUG oslo_concurrency.lockutils [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.613 2 DEBUG oslo_concurrency.lockutils [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.614 2 DEBUG nova.network.neutron [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:09:56 np0005481065 podman[364677]: 2025-10-11 09:09:56.624091379 +0000 UTC m=+0.062499425 container remove 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.636 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e56e18cf-80ed-4b3d-b4f8-d1532b6a1a65]: (4, ('Sat Oct 11 09:09:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b)\n064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b\nSat Oct 11 09:09:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b)\n064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9404f6-ea08-42de-bea1-4bf05b5cbaaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.641 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.654 2 INFO nova.virt.libvirt.driver [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance destroyed successfully.#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.656 2 DEBUG nova.objects.instance [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 kernel: tap14e82eeb-70: left promiscuous mode
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f6386a-e674-4a1b-a6a6-456b4e1debfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.680 2 DEBUG nova.virt.libvirt.vif [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:09:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1369305797',display_name='tempest-TestNetworkAdvancedServerOps-server-1369305797',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1369305797',id=103,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3xSdxOkU3R8spG6Ar0C2b6V4kJp9/fLj3qIKqVjHr+W8RWdFE2Y04thwsEaHG/Pjnq/p0OokkgP0kv89kAu1w3IytsdPugHW10i6+zWfhjEg7DZ4e+0dQC7E2fiV/TCA==',key_name='tempest-TestNetworkAdvancedServerOps-1897339183',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:09:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-0mc689ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:09:52Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=1e10d958-167c-4caf-8ff0-67f50e39ec3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.682 2 DEBUG nova.network.os_vif_util [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.683 2 DEBUG nova.network.os_vif_util [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.684 2 DEBUG os_vif [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38159e48-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.693 2 INFO os_vif [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8')#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.699 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[006ee0d5-2c60-4b40-bb82-fc8ffdf3dff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.700 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b6545ce8-7af6-4f3c-9781-911f919be545]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.718 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9ef5ab-833a-4e00-b2ae-9b93e7b39dd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562335, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364717, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 systemd[1]: run-netns-ovnmeta\x2d14e82eeb\x2d74e2\x2d4de3\x2d9047\x2d74da777fe1ec.mount: Deactivated successfully.
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.723 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.723 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b5885cb2-761f-4706-9a43-caa40316cbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.724 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 38159e48-a8a0-4daa-95d3-19be7346d2cb in datapath 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e unbound from our chassis#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.725 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11c255ef-45ee-4aa3-8b26-7d489c1b121d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.726 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e namespace which is not needed anymore#033[00m
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : haproxy version is 2.8.14-c23fe91
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : path to executable is /usr/sbin/haproxy
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [WARNING]  (364307) : Exiting Master process...
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [WARNING]  (364307) : Exiting Master process...
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [ALERT]    (364307) : Current worker (364309) exited with code 143 (Terminated)
Oct 11 05:09:56 np0005481065 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [WARNING]  (364307) : All workers exited. Exiting... (0)
Oct 11 05:09:56 np0005481065 systemd[1]: libpod-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6.scope: Deactivated successfully.
Oct 11 05:09:56 np0005481065 podman[364743]: 2025-10-11 09:09:56.933288536 +0000 UTC m=+0.067405265 container died 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.939 2 INFO nova.virt.libvirt.driver [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting instance files /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del#033[00m
Oct 11 05:09:56 np0005481065 nova_compute[260935]: 2025-10-11 09:09:56.940 2 INFO nova.virt.libvirt.driver [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deletion of /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del complete#033[00m
Oct 11 05:09:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6-userdata-shm.mount: Deactivated successfully.
Oct 11 05:09:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d6608af26482039c5816a02e8765c53d58b01bc30687269e83d429b9f655fb4c-merged.mount: Deactivated successfully.
Oct 11 05:09:56 np0005481065 podman[364743]: 2025-10-11 09:09:56.976201651 +0000 UTC m=+0.110318370 container cleanup 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:09:56 np0005481065 systemd[1]: libpod-conmon-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6.scope: Deactivated successfully.
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.021 2 INFO nova.compute.manager [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.022 2 DEBUG oslo.service.loopingcall [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.023 2 DEBUG nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.023 2 DEBUG nova.network.neutron [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:09:57 np0005481065 podman[364775]: 2025-10-11 09:09:57.061139356 +0000 UTC m=+0.058867622 container remove 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[915b4db4-8ecd-4e49-bcc3-f3ad6bf5cad8]: (4, ('Sat Oct 11 09:09:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e (6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6)\n6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6\nSat Oct 11 09:09:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e (6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6)\n6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.070 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5b632b-675c-48d3-9a58-c02326ee8aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.070 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e150cf5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:09:57 np0005481065 kernel: tap3e150cf5-f0: left promiscuous mode
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8c9228-1622-419b-809a-2a346f13c5ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.109 2 INFO nova.virt.libvirt.driver [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deleting instance files /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c_del#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.110 2 INFO nova.virt.libvirt.driver [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deletion of /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c_del complete#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.110 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[696e7050-980c-42da-9b8d-b7155c568cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c342ca9a-25bd-4c2b-8ca2-6ffc3e465596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.129 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df5422b7-0c50-4674-9e73-89d7552873ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561481, 'reachable_time': 38751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364789, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.131 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:09:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.131 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b0548319-947f-4b55-9ba8-e4a6bb804877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.176 2 INFO nova.compute.manager [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.176 2 DEBUG oslo.service.loopingcall [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.177 2 DEBUG nova.compute.manager [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.177 2 DEBUG nova.network.neutron [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:09:57 np0005481065 systemd[1]: run-netns-ovnmeta\x2d3e150cf5\x2df6e5\x2d4cd4\x2dbd30\x2d1cc73cc8f72e.mount: Deactivated successfully.
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.575 2 DEBUG nova.compute.manager [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-unplugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.575 2 DEBUG oslo_concurrency.lockutils [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG oslo_concurrency.lockutils [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG oslo_concurrency.lockutils [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG nova.compute.manager [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] No waiting events found dispatching network-vif-unplugged-38159e48-a8a0-4daa-95d3-19be7346d2cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:57 np0005481065 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG nova.compute.manager [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-unplugged-38159e48-a8a0-4daa-95d3-19be7346d2cb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.049 2 DEBUG nova.network.neutron [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.074 2 INFO nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 1.05 seconds to deallocate network for instance.#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.124 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.124 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.196 2 DEBUG nova.network.neutron [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.225 2 INFO nova.compute.manager [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 1.05 seconds to deallocate network for instance.#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.259 2 DEBUG oslo_concurrency.processutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.308 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 691 KiB/s rd, 131 KiB/s wr, 126 op/s
Oct 11 05:09:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714491320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.719 2 DEBUG oslo_concurrency.processutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.726 2 DEBUG nova.compute.provider_tree [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.743 2 DEBUG nova.scheduler.client.report [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.763 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.765 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.780 2 DEBUG nova.network.neutron [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updated VIF entry in instance network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.780 2 DEBUG nova.network.neutron [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.807 2 DEBUG oslo_concurrency.lockutils [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.810 2 INFO nova.scheduler.client.report [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Deleted allocations for instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.820 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.820 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.821 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.821 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.821 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.821 2 WARNING nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.823 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.823 2 WARNING nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.823 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-deleted-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.881 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:58 np0005481065 nova_compute[260935]: 2025-10-11 09:09:58.885 2 DEBUG oslo_concurrency.processutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:09:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:09:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613070824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.360 2 DEBUG oslo_concurrency.processutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.369 2 DEBUG nova.compute.provider_tree [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.404 2 DEBUG nova.scheduler.client.report [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.457 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.486 2 INFO nova.scheduler.client.report [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 1e10d958-167c-4caf-8ff0-67f50e39ec3c#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.566 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.782 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.783 2 DEBUG oslo_concurrency.lockutils [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.784 2 DEBUG oslo_concurrency.lockutils [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.785 2 DEBUG oslo_concurrency.lockutils [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.785 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] No waiting events found dispatching network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.786 2 WARNING nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received unexpected event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.786 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-deleted-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.787 2 INFO nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Neutron deleted interface 38159e48-a8a0-4daa-95d3-19be7346d2cb; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.787 2 DEBUG nova.network.neutron [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.790 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Detach interface failed, port_id=38159e48-a8a0-4daa-95d3-19be7346d2cb, reason: Instance 1e10d958-167c-4caf-8ff0-67f50e39ec3c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:09:59 np0005481065 nova_compute[260935]: 2025-10-11 09:09:59.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 57 op/s
Oct 11 05:10:00 np0005481065 podman[364834]: 2025-10-11 09:10:00.824255651 +0000 UTC m=+0.108663313 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Oct 11 05:10:00 np0005481065 podman[364835]: 2025-10-11 09:10:00.872181789 +0000 UTC m=+0.153803991 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 05:10:01 np0005481065 nova_compute[260935]: 2025-10-11 09:10:01.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 57 op/s
Oct 11 05:10:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:03Z|00940|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:10:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:03Z|00941|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:10:03 np0005481065 nova_compute[260935]: 2025-10-11 09:10:03.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:04Z|00942|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:10:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:04Z|00943|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:10:04 np0005481065 nova_compute[260935]: 2025-10-11 09:10:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 56 op/s
Oct 11 05:10:04 np0005481065 nova_compute[260935]: 2025-10-11 09:10:04.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:10:04 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:10:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Oct 11 05:10:06 np0005481065 nova_compute[260935]: 2025-10-11 09:10:06.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:10:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9375 writes, 42K keys, 9375 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 9375 writes, 9375 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1474 writes, 7155 keys, 1474 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s#012Interval WAL: 1474 writes, 1474 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.2      0.79              0.20        28    0.028       0      0       0.0       0.0#012  L6      1/0    7.22 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    160.1    132.7      1.59              0.88        27    0.059    149K    15K       0.0       0.0#012 Sum      1/0    7.22 MB   0.0      0.2     0.0      0.2       0.3      0.1       0.0   5.1    106.9    109.9      2.38              1.08        55    0.043    149K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0     88.2     85.7      0.82              0.28        14    0.059     48K   3558       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    160.1    132.7      1.59              0.88        27    0.059    149K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.5      0.79              0.20        27    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 2.4 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 28.69 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000236 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1901,27.57 MB,9.06923%) FilterBlock(56,419.61 KB,0.134794%) IndexBlock(56,722.28 KB,0.232024%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 05:10:07 np0005481065 nova_compute[260935]: 2025-10-11 09:10:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Oct 11 05:10:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:09 np0005481065 nova_compute[260935]: 2025-10-11 09:10:09.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.403 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173796.3939128, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.404 2 INFO nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.447 2 DEBUG nova.compute.manager [None req-6af58e5d-f4d9-4382-8911-1633837d27dc - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.643 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173796.6409194, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.644 2 INFO nova.compute.manager [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.692 2 DEBUG nova.compute.manager [None req-88d40329-589f-4e52-b075-e4781ff797b8 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:11 np0005481065 nova_compute[260935]: 2025-10-11 09:10:11.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:14 np0005481065 nova_compute[260935]: 2025-10-11 09:10:14.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:15.206 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:15.206 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:15.207 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:16 np0005481065 nova_compute[260935]: 2025-10-11 09:10:16.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:10:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 14d56d60-e999-4767-8e9b-6015fc59f0cc does not exist
Oct 11 05:10:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4c6f9512-0d00-4228-836a-4ca6cbe640a1 does not exist
Oct 11 05:10:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 66e99c38-ef81-4040-b02e-14d65fb218cb does not exist
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:10:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:10:18 np0005481065 nova_compute[260935]: 2025-10-11 09:10:18.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.862278) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818862339, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 449, "num_deletes": 255, "total_data_size": 356565, "memory_usage": 365272, "flush_reason": "Manual Compaction"}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818867780, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 353519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42915, "largest_seqno": 43363, "table_properties": {"data_size": 350895, "index_size": 660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6143, "raw_average_key_size": 18, "raw_value_size": 345688, "raw_average_value_size": 1025, "num_data_blocks": 30, "num_entries": 337, "num_filter_entries": 337, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173794, "oldest_key_time": 1760173794, "file_creation_time": 1760173818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 5580 microseconds, and 2639 cpu microseconds.
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.867858) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 353519 bytes OK
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.867884) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.869841) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.869866) EVENT_LOG_v1 {"time_micros": 1760173818869858, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.869890) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 353815, prev total WAL file size 353815, number of live WAL files 2.
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.870452) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(345KB)], [98(7391KB)]
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818870512, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 7922411, "oldest_snapshot_seqno": -1}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6238 keys, 7798204 bytes, temperature: kUnknown
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818910832, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7798204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7757522, "index_size": 23996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 163059, "raw_average_key_size": 26, "raw_value_size": 7646403, "raw_average_value_size": 1225, "num_data_blocks": 936, "num_entries": 6238, "num_filter_entries": 6238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.911015) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7798204 bytes
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.912243) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.2 rd, 193.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 7.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(44.5) write-amplify(22.1) OK, records in: 6757, records dropped: 519 output_compression: NoCompression
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.912258) EVENT_LOG_v1 {"time_micros": 1760173818912251, "job": 58, "event": "compaction_finished", "compaction_time_micros": 40383, "compaction_time_cpu_micros": 16782, "output_level": 6, "num_output_files": 1, "total_output_size": 7798204, "num_input_records": 6757, "num_output_records": 6238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818912400, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818913563, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.870338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:10:18 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.202174311 +0000 UTC m=+0.071017208 container create ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:10:19 np0005481065 systemd[1]: Started libpod-conmon-ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad.scope.
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.171034432 +0000 UTC m=+0.039877359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:10:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.314569489 +0000 UTC m=+0.183412416 container init ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.324631447 +0000 UTC m=+0.193474344 container start ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.328906049 +0000 UTC m=+0.197748996 container attach ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:10:19 np0005481065 gallant_davinci[365172]: 167 167
Oct 11 05:10:19 np0005481065 systemd[1]: libpod-ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad.scope: Deactivated successfully.
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.331688418 +0000 UTC m=+0.200531305 container died ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 05:10:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bbc8914c3f8483a9766a63e34e4dca6aee7e860ee2d8883254ffddedd143dd42-merged.mount: Deactivated successfully.
Oct 11 05:10:19 np0005481065 podman[365155]: 2025-10-11 09:10:19.383219969 +0000 UTC m=+0.252062856 container remove ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:10:19 np0005481065 podman[365169]: 2025-10-11 09:10:19.395393657 +0000 UTC m=+0.139900355 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:10:19 np0005481065 systemd[1]: libpod-conmon-ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad.scope: Deactivated successfully.
Oct 11 05:10:19 np0005481065 podman[365213]: 2025-10-11 09:10:19.682795361 +0000 UTC m=+0.074239190 container create bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:10:19 np0005481065 nova_compute[260935]: 2025-10-11 09:10:19.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:19 np0005481065 systemd[1]: Started libpod-conmon-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope.
Oct 11 05:10:19 np0005481065 podman[365213]: 2025-10-11 09:10:19.653578867 +0000 UTC m=+0.045022706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:10:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:19 np0005481065 podman[365213]: 2025-10-11 09:10:19.821668786 +0000 UTC m=+0.213112665 container init bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 05:10:19 np0005481065 podman[365213]: 2025-10-11 09:10:19.838203428 +0000 UTC m=+0.229647257 container start bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:10:19 np0005481065 podman[365213]: 2025-10-11 09:10:19.842335186 +0000 UTC m=+0.233779075 container attach bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:10:19 np0005481065 nova_compute[260935]: 2025-10-11 09:10:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:21 np0005481065 agitated_moser[365230]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:10:21 np0005481065 agitated_moser[365230]: --> relative data size: 1.0
Oct 11 05:10:21 np0005481065 agitated_moser[365230]: --> All data devices are unavailable
Oct 11 05:10:21 np0005481065 systemd[1]: libpod-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope: Deactivated successfully.
Oct 11 05:10:21 np0005481065 podman[365213]: 2025-10-11 09:10:21.117023444 +0000 UTC m=+1.508467263 container died bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:10:21 np0005481065 systemd[1]: libpod-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope: Consumed 1.208s CPU time.
Oct 11 05:10:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09-merged.mount: Deactivated successfully.
Oct 11 05:10:21 np0005481065 podman[365213]: 2025-10-11 09:10:21.181535096 +0000 UTC m=+1.572978935 container remove bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:10:21 np0005481065 systemd[1]: libpod-conmon-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope: Deactivated successfully.
Oct 11 05:10:21 np0005481065 nova_compute[260935]: 2025-10-11 09:10:21.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:21 np0005481065 nova_compute[260935]: 2025-10-11 09:10:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:21 np0005481065 nova_compute[260935]: 2025-10-11 09:10:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:21 np0005481065 podman[365411]: 2025-10-11 09:10:21.967305586 +0000 UTC m=+0.068366302 container create dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:10:22 np0005481065 systemd[1]: Started libpod-conmon-dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02.scope.
Oct 11 05:10:22 np0005481065 podman[365411]: 2025-10-11 09:10:21.93801667 +0000 UTC m=+0.039077426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:10:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:22 np0005481065 podman[365411]: 2025-10-11 09:10:22.08162619 +0000 UTC m=+0.182686916 container init dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:10:22 np0005481065 podman[365411]: 2025-10-11 09:10:22.094129197 +0000 UTC m=+0.195189883 container start dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:10:22 np0005481065 podman[365411]: 2025-10-11 09:10:22.098413959 +0000 UTC m=+0.199474735 container attach dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:10:22 np0005481065 zealous_morse[365427]: 167 167
Oct 11 05:10:22 np0005481065 systemd[1]: libpod-dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02.scope: Deactivated successfully.
Oct 11 05:10:22 np0005481065 podman[365411]: 2025-10-11 09:10:22.104292097 +0000 UTC m=+0.205352813 container died dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:10:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c51acee5a03c69c0199a888fac0644f757d7d8aaddf4f3ef5a7c30e48de00ebd-merged.mount: Deactivated successfully.
Oct 11 05:10:22 np0005481065 podman[365411]: 2025-10-11 09:10:22.163637791 +0000 UTC m=+0.264698497 container remove dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:10:22 np0005481065 systemd[1]: libpod-conmon-dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02.scope: Deactivated successfully.
Oct 11 05:10:22 np0005481065 podman[365450]: 2025-10-11 09:10:22.425331692 +0000 UTC m=+0.047296532 container create 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:10:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:22 np0005481065 systemd[1]: Started libpod-conmon-8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d.scope.
Oct 11 05:10:22 np0005481065 podman[365450]: 2025-10-11 09:10:22.404131576 +0000 UTC m=+0.026096396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:10:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:22 np0005481065 podman[365450]: 2025-10-11 09:10:22.54402275 +0000 UTC m=+0.165987640 container init 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:10:22 np0005481065 podman[365450]: 2025-10-11 09:10:22.557160985 +0000 UTC m=+0.179125805 container start 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:10:22 np0005481065 podman[365450]: 2025-10-11 09:10:22.561122458 +0000 UTC m=+0.183087348 container attach 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:10:22 np0005481065 nova_compute[260935]: 2025-10-11 09:10:22.889 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:22 np0005481065 nova_compute[260935]: 2025-10-11 09:10:22.896 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:22 np0005481065 nova_compute[260935]: 2025-10-11 09:10:22.926 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.035 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.036 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.048 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.049 2 INFO nova.compute.claims [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.271 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]: {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:    "0": [
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:        {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "devices": [
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "/dev/loop3"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            ],
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_name": "ceph_lv0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_size": "21470642176",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "name": "ceph_lv0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "tags": {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cluster_name": "ceph",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.crush_device_class": "",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.encrypted": "0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osd_id": "0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.type": "block",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.vdo": "0"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            },
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "type": "block",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "vg_name": "ceph_vg0"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:        }
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:    ],
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:    "1": [
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:        {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "devices": [
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "/dev/loop4"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            ],
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_name": "ceph_lv1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_size": "21470642176",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "name": "ceph_lv1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "tags": {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cluster_name": "ceph",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.crush_device_class": "",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.encrypted": "0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osd_id": "1",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.type": "block",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.vdo": "0"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            },
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "type": "block",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "vg_name": "ceph_vg1"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:        }
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:    ],
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:    "2": [
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:        {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "devices": [
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "/dev/loop5"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            ],
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_name": "ceph_lv2",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_size": "21470642176",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "name": "ceph_lv2",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "tags": {
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.cluster_name": "ceph",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.crush_device_class": "",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.encrypted": "0",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osd_id": "2",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.type": "block",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:                "ceph.vdo": "0"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            },
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "type": "block",
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:            "vg_name": "ceph_vg2"
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:        }
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]:    ]
Oct 11 05:10:23 np0005481065 infallible_banzai[365467]: }
Oct 11 05:10:23 np0005481065 systemd[1]: libpod-8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d.scope: Deactivated successfully.
Oct 11 05:10:23 np0005481065 podman[365450]: 2025-10-11 09:10:23.369400332 +0000 UTC m=+0.991365192 container died 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:10:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9-merged.mount: Deactivated successfully.
Oct 11 05:10:23 np0005481065 podman[365450]: 2025-10-11 09:10:23.44780572 +0000 UTC m=+1.069770560 container remove 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:10:23 np0005481065 systemd[1]: libpod-conmon-8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d.scope: Deactivated successfully.
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:10:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:10:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168893775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.808 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.815 2 DEBUG nova.compute.provider_tree [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.832 2 DEBUG nova.scheduler.client.report [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.855 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.856 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:10:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.898 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.898 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.917 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:10:23 np0005481065 nova_compute[260935]: 2025-10-11 09:10:23.933 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.026 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.027 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.027 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Creating image(s)#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.053 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.093 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.123 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.129 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.231 2 DEBUG nova.policy [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.233 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.234 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.234 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.235 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.255 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.258 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.381661838 +0000 UTC m=+0.065271055 container create 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 05:10:24 np0005481065 systemd[1]: Started libpod-conmon-1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f.scope.
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.355313586 +0000 UTC m=+0.038922903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:10:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.513536193 +0000 UTC m=+0.197145410 container init 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.520691367 +0000 UTC m=+0.204300584 container start 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:10:24 np0005481065 xenodochial_bardeen[365762]: 167 167
Oct 11 05:10:24 np0005481065 systemd[1]: libpod-1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f.scope: Deactivated successfully.
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.53306023 +0000 UTC m=+0.216669447 container attach 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.53339488 +0000 UTC m=+0.217004107 container died 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:10:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-76367e33aed37aaf8092eb3e4550fa26061beefdd2f13263c60abc0ee97e3683-merged.mount: Deactivated successfully.
Oct 11 05:10:24 np0005481065 podman[365728]: 2025-10-11 09:10:24.569164941 +0000 UTC m=+0.252774158 container remove 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.572 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:24 np0005481065 systemd[1]: libpod-conmon-1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f.scope: Deactivated successfully.
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.645 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:24.716 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:10:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:24.717 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.742 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.742 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.812 2 DEBUG nova.objects.instance [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b3da34e-478e-44fc-a1ec-2601998d2b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:10:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.836 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.837 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Ensure instance console log exists: /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.837 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.838 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.838 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:24 np0005481065 podman[365856]: 2025-10-11 09:10:24.840139176 +0000 UTC m=+0.054561958 container create dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:10:24 np0005481065 systemd[1]: Started libpod-conmon-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope.
Oct 11 05:10:24 np0005481065 nova_compute[260935]: 2025-10-11 09:10:24.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:24 np0005481065 podman[365856]: 2025-10-11 09:10:24.825310463 +0000 UTC m=+0.039733275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:10:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:24 np0005481065 podman[365856]: 2025-10-11 09:10:24.936415374 +0000 UTC m=+0.150838176 container init dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:10:24 np0005481065 podman[365856]: 2025-10-11 09:10:24.943747363 +0000 UTC m=+0.158170185 container start dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:10:24 np0005481065 podman[365856]: 2025-10-11 09:10:24.948479749 +0000 UTC m=+0.162902551 container attach dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:10:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:10:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710635512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.240 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.363 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.363 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.624 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.625 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3005MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.625 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.625 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0b3da34e-478e-44fc-a1ec-2601998d2b0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.721 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.841 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:25 np0005481065 nova_compute[260935]: 2025-10-11 09:10:25.886 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Successfully created port: 673c8285-440e-4cb6-b81b-1bd2ddb7375a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]: {
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "osd_id": 2,
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "type": "bluestore"
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:    },
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "osd_id": 0,
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "type": "bluestore"
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:    },
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "osd_id": 1,
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:        "type": "bluestore"
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]:    }
Oct 11 05:10:26 np0005481065 stupefied_sutherland[365872]: }
Oct 11 05:10:26 np0005481065 systemd[1]: libpod-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope: Deactivated successfully.
Oct 11 05:10:26 np0005481065 systemd[1]: libpod-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope: Consumed 1.165s CPU time.
Oct 11 05:10:26 np0005481065 podman[365856]: 2025-10-11 09:10:26.150223855 +0000 UTC m=+1.364646667 container died dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:10:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5-merged.mount: Deactivated successfully.
Oct 11 05:10:26 np0005481065 podman[365856]: 2025-10-11 09:10:26.227997755 +0000 UTC m=+1.442420547 container remove dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:10:26 np0005481065 systemd[1]: libpod-conmon-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope: Deactivated successfully.
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:10:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7850faf4-3b34-4fdc-a23e-46776fb0a3be does not exist
Oct 11 05:10:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ba43fd61-a9cf-498f-93a2-8de44e9f3180 does not exist
Oct 11 05:10:26 np0005481065 podman[365947]: 2025-10-11 09:10:26.294647908 +0000 UTC m=+0.109098356 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid)
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975616066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:10:26 np0005481065 nova_compute[260935]: 2025-10-11 09:10:26.358 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:26 np0005481065 nova_compute[260935]: 2025-10-11 09:10:26.365 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:10:26 np0005481065 nova_compute[260935]: 2025-10-11 09:10:26.386 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:10:26 np0005481065 nova_compute[260935]: 2025-10-11 09:10:26.419 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:10:26 np0005481065 nova_compute[260935]: 2025-10-11 09:10:26.420 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384408300' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:10:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384408300' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:10:26 np0005481065 nova_compute[260935]: 2025-10-11 09:10:26.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:10:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.369 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Successfully updated port: 673c8285-440e-4cb6-b81b-1bd2ddb7375a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.393 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.394 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.394 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.631 2 DEBUG nova.compute.manager [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.631 2 DEBUG nova.compute.manager [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing instance network info cache due to event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.632 2 DEBUG oslo_concurrency.lockutils [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:10:27 np0005481065 nova_compute[260935]: 2025-10-11 09:10:27.854 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:10:28 np0005481065 nova_compute[260935]: 2025-10-11 09:10:28.421 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:10:28 np0005481065 nova_compute[260935]: 2025-10-11 09:10:28.421 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:10:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:10:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:28 np0005481065 nova_compute[260935]: 2025-10-11 09:10:28.917 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:10:28 np0005481065 nova_compute[260935]: 2025-10-11 09:10:28.918 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:10:28 np0005481065 nova_compute[260935]: 2025-10-11 09:10:28.918 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:10:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:29.719 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:29 np0005481065 nova_compute[260935]: 2025-10-11 09:10:29.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.484 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.516 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.517 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance network_info: |[{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.518 2 DEBUG oslo_concurrency.lockutils [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.518 2 DEBUG nova.network.neutron [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.523 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start _get_guest_xml network_info=[{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.531 2 WARNING nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.547 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.548 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.554 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.555 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.556 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.556 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.557 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.558 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.558 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.559 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.559 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.560 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.561 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.561 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.562 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.563 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:10:30 np0005481065 nova_compute[260935]: 2025-10-11 09:10:30.568 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:10:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119043267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.082 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.121 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.128 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:31 np0005481065 podman[366092]: 2025-10-11 09:10:31.558314767 +0000 UTC m=+0.102694242 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:10:31 np0005481065 podman[366093]: 2025-10-11 09:10:31.60639944 +0000 UTC m=+0.145308849 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 11 05:10:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:10:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812675974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.638 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.641 2 DEBUG nova.virt.libvirt.vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2087957653',display_name='tempest-TestNetworkAdvancedServerOps-server-2087957653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2087957653',id=104,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElmuTcJVI/ZkWAz+5kv9/dAzHDbPqJ/RLoEbpcbzY+bQ/5rlnwJAiwfIH5I2AByz1siErLhAl04MipJZMi/HRmixiLKf1bIaWO7+9o5POQWUQay494cNjZNh9+hKmeEUA==',key_name='tempest-TestNetworkAdvancedServerOps-746953215',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-ng6vrpfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:10:23Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0b3da34e-478e-44fc-a1ec-2601998d2b0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.642 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.643 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.646 2 DEBUG nova.objects.instance [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b3da34e-478e-44fc-a1ec-2601998d2b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.716 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <uuid>0b3da34e-478e-44fc-a1ec-2601998d2b0d</uuid>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <name>instance-00000068</name>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2087957653</nova:name>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:10:30</nova:creationTime>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <nova:port uuid="673c8285-440e-4cb6-b81b-1bd2ddb7375a">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <entry name="serial">0b3da34e-478e-44fc-a1ec-2601998d2b0d</entry>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <entry name="uuid">0b3da34e-478e-44fc-a1ec-2601998d2b0d</entry>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ac:22:4a"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <target dev="tap673c8285-44"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/console.log" append="off"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:10:31 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:10:31 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:10:31 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:10:31 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.720 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Preparing to wait for external event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.721 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.721 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.722 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.723 2 DEBUG nova.virt.libvirt.vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2087957653',display_name='tempest-TestNetworkAdvancedServerOps-server-2087957653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2087957653',id=104,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElmuTcJVI/ZkWAz+5kv9/dAzHDbPqJ/RLoEbpcbzY+bQ/5rlnwJAiwfIH5I2AByz1siErLhAl04MipJZMi/HRmixiLKf1bIaWO7+9o5POQWUQay494cNjZNh9+hKmeEUA==',key_name='tempest-TestNetworkAdvancedServerOps-746953215',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-ng6vrpfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:10:23Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0b3da34e-478e-44fc-a1ec-2601998d2b0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.724 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.725 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.726 2 DEBUG os_vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673c8285-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap673c8285-44, col_values=(('external_ids', {'iface-id': '673c8285-440e-4cb6-b81b-1bd2ddb7375a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:22:4a', 'vm-uuid': '0b3da34e-478e-44fc-a1ec-2601998d2b0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:31 np0005481065 NetworkManager[44960]: <info>  [1760173831.7396] manager: (tap673c8285-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.751 2 INFO os_vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44')#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.848 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.848 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.849 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:ac:22:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.849 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Using config drive#033[00m
Oct 11 05:10:31 np0005481065 nova_compute[260935]: 2025-10-11 09:10:31.886 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.636 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Creating config drive at /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config#033[00m
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.644 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6bxkdrr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.804 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6bxkdrr" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.844 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.849 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:10:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.918 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.948 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:10:33 np0005481065 nova_compute[260935]: 2025-10-11 09:10:33.949 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.024 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.025 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deleting local config drive /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config because it was imported into RBD.#033[00m
Oct 11 05:10:34 np0005481065 kernel: tap673c8285-44: entered promiscuous mode
Oct 11 05:10:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:34Z|00944|binding|INFO|Claiming lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a for this chassis.
Oct 11 05:10:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:34Z|00945|binding|INFO|673c8285-440e-4cb6-b81b-1bd2ddb7375a: Claiming fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 05:10:34 np0005481065 NetworkManager[44960]: <info>  [1760173834.0892] manager: (tap673c8285-44): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.111 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.114 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f bound to our chassis#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.117 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.142 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1e57ef-d006-433a-ac3f-d5df8640b227]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.143 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2de6a2d5-41 in ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.145 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2de6a2d5-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.145 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[759da2c9-568c-4560-ae13-8138638fb2e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 systemd-machined[215705]: New machine qemu-123-instance-00000068.
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.146 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8327c2cb-8490-409a-88f3-fca9be2577fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.166 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e969d3-4bc7-4c6f-ac89-f1206ed651f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 systemd[1]: Started Virtual Machine qemu-123-instance-00000068.
Oct 11 05:10:34 np0005481065 systemd-udevd[366215]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.198 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc81d62e-c89a-4f5e-ab8f-858eb687da8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 NetworkManager[44960]: <info>  [1760173834.2101] device (tap673c8285-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:10:34 np0005481065 NetworkManager[44960]: <info>  [1760173834.2114] device (tap673c8285-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:34Z|00946|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a ovn-installed in OVS
Oct 11 05:10:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:34Z|00947|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a up in Southbound
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.243 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b35078dd-7280-477f-811b-33032c1f1993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 NetworkManager[44960]: <info>  [1760173834.2539] manager: (tap2de6a2d5-40): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Oct 11 05:10:34 np0005481065 systemd-udevd[366218]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.253 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e25d09e-02ba-418e-bbb1-34f3609cbd5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.304 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3151bccb-1c7c-4bfc-bc8d-13059c22bad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.309 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5ea99d-0d28-4390-9a40-11750511a3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 NetworkManager[44960]: <info>  [1760173834.3503] device (tap2de6a2d5-40): carrier: link connected
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.359 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[59e8c91f-9245-42b7-88a4-ea31a98405f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.389 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[715e244a-0c23-436c-ba14-9797404146db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567905, 'reachable_time': 34835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366245, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.408 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5413fba-d3eb-4f09-816d-d670ed250173]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:6d2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 567905, 'tstamp': 567905}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366246, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.436 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3727ee4-7750-49b2-bdf8-56a917ff0037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567905, 'reachable_time': 34835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366247, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.490 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[acd5efa2-8812-4f9e-931c-306811ded906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.600 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2913251-577b-4006-8b40-c02e377de212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.603 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.603 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2de6a2d5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 kernel: tap2de6a2d5-40: entered promiscuous mode
Oct 11 05:10:34 np0005481065 NetworkManager[44960]: <info>  [1760173834.6396] manager: (tap2de6a2d5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.656 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2de6a2d5-40, col_values=(('external_ids', {'iface-id': 'f34fe4de-2979-4f48-9b40-4531a5420e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:34Z|00948|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.663 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.664 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5d3890-94eb-4464-9717-a7c289b8844b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.666 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:10:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.667 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'env', 'PROCESS_TAG=haproxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.740 2 DEBUG nova.compute.manager [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.741 2 DEBUG oslo_concurrency.lockutils [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.741 2 DEBUG oslo_concurrency.lockutils [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.742 2 DEBUG oslo_concurrency.lockutils [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.742 2 DEBUG nova.compute.manager [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Processing event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.898 2 DEBUG nova.network.neutron [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updated VIF entry in instance network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.898 2 DEBUG nova.network.neutron [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.927 2 DEBUG oslo_concurrency.lockutils [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:10:34 np0005481065 nova_compute[260935]: 2025-10-11 09:10:34.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:35 np0005481065 podman[366279]: 2025-10-11 09:10:35.19172526 +0000 UTC m=+0.088034164 container create c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:10:35 np0005481065 podman[366279]: 2025-10-11 09:10:35.146852079 +0000 UTC m=+0.043161023 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:10:35 np0005481065 systemd[1]: Started libpod-conmon-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f.scope.
Oct 11 05:10:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:10:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d2d7c4ba9cfd39626d89e94aa92614f628d67bec419a0105d23bb5bb0122f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:10:35 np0005481065 podman[366279]: 2025-10-11 09:10:35.313594009 +0000 UTC m=+0.209902973 container init c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:10:35 np0005481065 podman[366279]: 2025-10-11 09:10:35.319634062 +0000 UTC m=+0.215942966 container start c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 11 05:10:35 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : New worker (366343) forked
Oct 11 05:10:35 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : Loading success.
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.823 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173835.8224127, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.823 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Started (Lifecycle Event)#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.825 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.830 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.835 2 INFO nova.virt.libvirt.driver [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance spawned successfully.#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.835 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.865 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.870 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.885 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.885 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.886 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.887 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.888 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.888 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.895 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.895 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173835.8226926, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.896 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.933 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.943 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173835.8288982, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.944 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.964 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.968 2 INFO nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 11.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.969 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:35 np0005481065 nova_compute[260935]: 2025-10-11 09:10:35.971 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.010 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.044 2 INFO nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 13.04 seconds to build instance.#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.071 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.989 2 DEBUG nova.compute.manager [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.990 2 DEBUG oslo_concurrency.lockutils [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.990 2 DEBUG oslo_concurrency.lockutils [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.991 2 DEBUG oslo_concurrency.lockutils [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.991 2 DEBUG nova.compute.manager [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:10:36 np0005481065 nova_compute[260935]: 2025-10-11 09:10:36.991 2 WARNING nova.compute.manager [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.#033[00m
Oct 11 05:10:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:10:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:39 np0005481065 nova_compute[260935]: 2025-10-11 09:10:39.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:10:41 np0005481065 nova_compute[260935]: 2025-10-11 09:10:41.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:41Z|00949|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 05:10:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:41Z|00950|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:10:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:41Z|00951|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:10:41 np0005481065 nova_compute[260935]: 2025-10-11 09:10:41.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:41 np0005481065 NetworkManager[44960]: <info>  [1760173841.9429] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Oct 11 05:10:41 np0005481065 NetworkManager[44960]: <info>  [1760173841.9438] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct 11 05:10:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:41Z|00952|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 05:10:41 np0005481065 nova_compute[260935]: 2025-10-11 09:10:41.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:41Z|00953|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:10:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:41Z|00954|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:10:41 np0005481065 nova_compute[260935]: 2025-10-11 09:10:41.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:10:42 np0005481065 nova_compute[260935]: 2025-10-11 09:10:42.655 2 DEBUG nova.compute.manager [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:10:42 np0005481065 nova_compute[260935]: 2025-10-11 09:10:42.656 2 DEBUG nova.compute.manager [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing instance network info cache due to event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:10:42 np0005481065 nova_compute[260935]: 2025-10-11 09:10:42.656 2 DEBUG oslo_concurrency.lockutils [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:10:42 np0005481065 nova_compute[260935]: 2025-10-11 09:10:42.657 2 DEBUG oslo_concurrency.lockutils [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:10:42 np0005481065 nova_compute[260935]: 2025-10-11 09:10:42.657 2 DEBUG nova.network.neutron [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:10:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:10:44 np0005481065 nova_compute[260935]: 2025-10-11 09:10:44.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:45 np0005481065 nova_compute[260935]: 2025-10-11 09:10:45.896 2 DEBUG nova.network.neutron [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updated VIF entry in instance network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:10:45 np0005481065 nova_compute[260935]: 2025-10-11 09:10:45.897 2 DEBUG nova.network.neutron [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:10:45 np0005481065 nova_compute[260935]: 2025-10-11 09:10:45.915 2 DEBUG oslo_concurrency.lockutils [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:10:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:10:46 np0005481065 nova_compute[260935]: 2025-10-11 09:10:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:48Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 05:10:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:10:48Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 05:10:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 395 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 119 op/s
Oct 11 05:10:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:49 np0005481065 podman[366358]: 2025-10-11 09:10:49.811781177 +0000 UTC m=+0.095627287 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:10:49 np0005481065 nova_compute[260935]: 2025-10-11 09:10:49.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 395 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Oct 11 05:10:51 np0005481065 nova_compute[260935]: 2025-10-11 09:10:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 404 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 11 05:10:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:54 np0005481065 nova_compute[260935]: 2025-10-11 09:10:54.442 2 INFO nova.compute.manager [None req-16fdadf7-ac51-4a4e-a01b-bd586a212c12 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Get console output#033[00m
Oct 11 05:10:54 np0005481065 nova_compute[260935]: 2025-10-11 09:10:54.449 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:10:54
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'images', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Oct 11 05:10:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:10:54 np0005481065 nova_compute[260935]: 2025-10-11 09:10:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:10:55 np0005481065 nova_compute[260935]: 2025-10-11 09:10:55.600 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:10:55 np0005481065 nova_compute[260935]: 2025-10-11 09:10:55.600 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:10:55 np0005481065 nova_compute[260935]: 2025-10-11 09:10:55.601 2 INFO nova.compute.manager [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Rebooting instance#033[00m
Oct 11 05:10:55 np0005481065 nova_compute[260935]: 2025-10-11 09:10:55.622 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:10:55 np0005481065 nova_compute[260935]: 2025-10-11 09:10:55.623 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:10:55 np0005481065 nova_compute[260935]: 2025-10-11 09:10:55.624 2 DEBUG nova.network.neutron [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:10:55 np0005481065 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 05:10:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:10:56 np0005481065 nova_compute[260935]: 2025-10-11 09:10:56.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:10:56 np0005481065 podman[366378]: 2025-10-11 09:10:56.829791939 +0000 UTC m=+0.083611663 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:10:58 np0005481065 nova_compute[260935]: 2025-10-11 09:10:58.435 2 DEBUG nova.network.neutron [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:10:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:10:58 np0005481065 nova_compute[260935]: 2025-10-11 09:10:58.457 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:10:58 np0005481065 nova_compute[260935]: 2025-10-11 09:10:58.459 2 DEBUG nova.compute.manager [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:10:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 11 05:10:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 11 05:10:58 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 11 05:10:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:10:59 np0005481065 nova_compute[260935]: 2025-10-11 09:10:59.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 130 KiB/s wr, 21 op/s
Oct 11 05:11:00 np0005481065 kernel: tap673c8285-44 (unregistering): left promiscuous mode
Oct 11 05:11:00 np0005481065 NetworkManager[44960]: <info>  [1760173860.9421] device (tap673c8285-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:11:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:00Z|00955|binding|INFO|Releasing lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a from this chassis (sb_readonly=0)
Oct 11 05:11:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:00Z|00956|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a down in Southbound
Oct 11 05:11:00 np0005481065 nova_compute[260935]: 2025-10-11 09:11:00.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:00Z|00957|binding|INFO|Removing iface tap673c8285-44 ovn-installed in OVS
Oct 11 05:11:00 np0005481065 nova_compute[260935]: 2025-10-11 09:11:00.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.963 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:11:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.966 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f unbound from our chassis#033[00m
Oct 11 05:11:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.969 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:11:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.971 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0577ff14-774b-4202-8536-375b593f85e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.975 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace which is not needed anymore#033[00m
Oct 11 05:11:00 np0005481065 nova_compute[260935]: 2025-10-11 09:11:00.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 11 05:11:01 np0005481065 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000068.scope: Consumed 14.431s CPU time.
Oct 11 05:11:01 np0005481065 systemd-machined[215705]: Machine qemu-123-instance-00000068 terminated.
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : haproxy version is 2.8.14-c23fe91
Oct 11 05:11:01 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : path to executable is /usr/sbin/haproxy
Oct 11 05:11:01 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [ALERT]    (366341) : Current worker (366343) exited with code 143 (Terminated)
Oct 11 05:11:01 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [WARNING]  (366341) : All workers exited. Exiting... (0)
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 systemd[1]: libpod-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f.scope: Deactivated successfully.
Oct 11 05:11:01 np0005481065 podman[366421]: 2025-10-11 09:11:01.209109086 +0000 UTC m=+0.079455611 container died c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:11:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f-userdata-shm.mount: Deactivated successfully.
Oct 11 05:11:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e8d2d7c4ba9cfd39626d89e94aa92614f628d67bec419a0105d23bb5bb0122f8-merged.mount: Deactivated successfully.
Oct 11 05:11:01 np0005481065 podman[366421]: 2025-10-11 09:11:01.27285874 +0000 UTC m=+0.143205265 container cleanup c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:11:01 np0005481065 systemd[1]: libpod-conmon-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f.scope: Deactivated successfully.
Oct 11 05:11:01 np0005481065 podman[366459]: 2025-10-11 09:11:01.376142539 +0000 UTC m=+0.060758913 container remove c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f884a26f-b666-4d04-86a4-5c5669fb331e]: (4, ('Sat Oct 11 09:11:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f)\nc053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f\nSat Oct 11 09:11:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f)\nc053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[49555e5b-0211-439b-9aba-b722fbc500a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.389 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 kernel: tap2de6a2d5-40: left promiscuous mode
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.401 2 DEBUG nova.compute.manager [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.402 2 DEBUG oslo_concurrency.lockutils [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.402 2 DEBUG oslo_concurrency.lockutils [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.403 2 DEBUG oslo_concurrency.lockutils [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.403 2 DEBUG nova.compute.manager [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.403 2 WARNING nova.compute.manager [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state reboot_started.#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.418 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83cd76e8-c11c-4877-8eb9-58acfc08b6fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.455 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2945ca4b-8c22-40ae-a387-9772b791f583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.457 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5694c5c0-3044-47d4-84c3-6ac041ebddd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.485 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cdab48dc-153e-4df6-a723-49bb30e551cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567893, 'reachable_time': 15283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366477, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.488 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.489 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4c33669c-3f99-4bd5-8a20-286979d5f2b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 systemd[1]: run-netns-ovnmeta\x2d2de6a2d5\x2d4c4a\x2d403f\x2d9eb5\x2d27dc7562315f.mount: Deactivated successfully.
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.644 2 INFO nova.virt.libvirt.driver [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance shutdown successfully.#033[00m
Oct 11 05:11:01 np0005481065 kernel: tap673c8285-44: entered promiscuous mode
Oct 11 05:11:01 np0005481065 systemd-udevd[366402]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:11:01 np0005481065 NetworkManager[44960]: <info>  [1760173861.7360] manager: (tap673c8285-44): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Oct 11 05:11:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:01Z|00958|binding|INFO|Claiming lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a for this chassis.
Oct 11 05:11:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:01Z|00959|binding|INFO|673c8285-440e-4cb6-b81b-1bd2ddb7375a: Claiming fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.748 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.750 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f bound to our chassis#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.754 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f#033[00m
Oct 11 05:11:01 np0005481065 NetworkManager[44960]: <info>  [1760173861.7552] device (tap673c8285-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:11:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:01Z|00960|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a ovn-installed in OVS
Oct 11 05:11:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:01Z|00961|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a up in Southbound
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.770 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4066c7c0-1f11-45c2-857a-b5338612fb58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.771 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2de6a2d5-41 in ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:11:01 np0005481065 NetworkManager[44960]: <info>  [1760173861.7739] device (tap673c8285-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.774 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2de6a2d5-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.774 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7837f526-211f-4c88-b9f1-ae5c480a4ba9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6a6000-71cb-48d0-a2c6-6609e67ba7aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 nova_compute[260935]: 2025-10-11 09:11:01.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.795 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bd973dcc-cbcd-4fc9-966c-12ea7fb57adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 systemd-machined[215705]: New machine qemu-124-instance-00000068.
Oct 11 05:11:01 np0005481065 podman[366479]: 2025-10-11 09:11:01.826845864 +0000 UTC m=+0.130727369 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e5adc779-e21c-4a64-a235-7fd258704c76]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 systemd[1]: Started Virtual Machine qemu-124-instance-00000068.
Oct 11 05:11:01 np0005481065 podman[366481]: 2025-10-11 09:11:01.862446489 +0000 UTC m=+0.164019031 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.864 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc07a3b-cc8b-4c60-88c0-150497905f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b5b046-0878-4661-81d9-a48afba45caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 NetworkManager[44960]: <info>  [1760173861.8733] manager: (tap2de6a2d5-40): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.912 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6ba076-7b5c-4a51-8964-8072e43e7954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.915 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8da6a919-5b2b-487a-872a-5b6a899017e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 NetworkManager[44960]: <info>  [1760173861.9392] device (tap2de6a2d5-40): carrier: link connected
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.946 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[685f3b40-487b-449e-b0c8-56c7720427a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.966 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61741135-5998-4da4-8fec-20f00c63e390]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570664, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366566, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[721d7e6f-1c8c-44b1-a68b-61d236cbbec3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:6d2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570664, 'tstamp': 570664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366567, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.006 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e5964ccc-f313-406c-963e-e2be3a2fd730]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570664, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366568, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36d44ed0-bba9-4be2-b5e2-3691d3e02851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0860bcf-2bcc-4045-988e-c1c9035cc8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.138 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.138 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.139 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2de6a2d5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:02 np0005481065 kernel: tap2de6a2d5-40: entered promiscuous mode
Oct 11 05:11:02 np0005481065 NetworkManager[44960]: <info>  [1760173862.1419] manager: (tap2de6a2d5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2de6a2d5-40, col_values=(('external_ids', {'iface-id': 'f34fe4de-2979-4f48-9b40-4531a5420e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:02 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:02Z|00962|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.180 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.181 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2125fcc7-f19f-4c0d-95a2-5c9f5c308257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.182 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:11:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.183 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'env', 'PROCESS_TAG=haproxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:11:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 62 KiB/s wr, 30 op/s
Oct 11 05:11:02 np0005481065 podman[366642]: 2025-10-11 09:11:02.668311615 +0000 UTC m=+0.056331890 container create 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 05:11:02 np0005481065 systemd[1]: Started libpod-conmon-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45.scope.
Oct 11 05:11:02 np0005481065 podman[366642]: 2025-10-11 09:11:02.636930747 +0000 UTC m=+0.024951022 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:11:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0faee35d9e2350e7eab68b1a8768c75dd4c537a8039875f94be2481264f64f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:02 np0005481065 podman[366642]: 2025-10-11 09:11:02.773856087 +0000 UTC m=+0.161876352 container init 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:11:02 np0005481065 podman[366642]: 2025-10-11 09:11:02.781078187 +0000 UTC m=+0.169098422 container start 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.803 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 0b3da34e-478e-44fc-a1ec-2601998d2b0d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.804 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173862.8030243, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.804 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:11:02 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : New worker (366663) forked
Oct 11 05:11:02 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : Loading success.
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.813 2 INFO nova.virt.libvirt.driver [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance running successfully.#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.813 2 INFO nova.virt.libvirt.driver [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance soft rebooted successfully.#033[00m
Oct 11 05:11:02 np0005481065 nova_compute[260935]: 2025-10-11 09:11:02.814 2 DEBUG nova.compute.manager [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.268 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.318 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.319 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173862.8045547, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.319 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Started (Lifecycle Event)#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.330 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.345 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.349 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.802 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.803 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.804 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.804 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.805 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.806 2 WARNING nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.806 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.807 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.808 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.808 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.809 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:11:03 np0005481065 nova_compute[260935]: 2025-10-11 09:11:03.810 2 WARNING nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.#033[00m
Oct 11 05:11:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 415 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 858 KiB/s wr, 27 op/s
Oct 11 05:11:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 11 05:11:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 11 05:11:04 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 11 05:11:04 np0005481065 nova_compute[260935]: 2025-10-11 09:11:04.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033876677777235136 of space, bias 1.0, pg target 1.0163003333170542 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007967915689913394 of space, bias 1.0, pg target 0.2382406791284105 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:11:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.222 2 DEBUG nova.compute.manager [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.223 2 DEBUG oslo_concurrency.lockutils [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.224 2 DEBUG oslo_concurrency.lockutils [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.225 2 DEBUG oslo_concurrency.lockutils [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.225 2 DEBUG nova.compute.manager [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.226 2 WARNING nova.compute.manager [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.#033[00m
Oct 11 05:11:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 415 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Oct 11 05:11:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 11 05:11:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 11 05:11:06 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 11 05:11:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 11 05:11:06 np0005481065 nova_compute[260935]: 2025-10-11 09:11:06.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 11 05:11:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 11 05:11:07 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 11 05:11:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 407 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 19 MiB/s wr, 233 op/s
Oct 11 05:11:08 np0005481065 nova_compute[260935]: 2025-10-11 09:11:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 11 05:11:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 11 05:11:08 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 11 05:11:09 np0005481065 nova_compute[260935]: 2025-10-11 09:11:09.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 407 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 18 MiB/s wr, 233 op/s
Oct 11 05:11:11 np0005481065 nova_compute[260935]: 2025-10-11 09:11:11.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 407 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 17 MiB/s wr, 267 op/s
Oct 11 05:11:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:13Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 05:11:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 13 MiB/s wr, 210 op/s
Oct 11 05:11:14 np0005481065 nova_compute[260935]: 2025-10-11 09:11:14.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:15.206 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:15.207 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Oct 11 05:11:16 np0005481065 nova_compute[260935]: 2025-10-11 09:11:16.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 16 KiB/s wr, 77 op/s
Oct 11 05:11:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 11 05:11:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 11 05:11:18 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 11 05:11:19 np0005481065 nova_compute[260935]: 2025-10-11 09:11:19.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:19 np0005481065 nova_compute[260935]: 2025-10-11 09:11:19.757 2 INFO nova.compute.manager [None req-f5ca0e01-6f59-4bdb-a392-298635601835 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Get console output#033[00m
Oct 11 05:11:19 np0005481065 nova_compute[260935]: 2025-10-11 09:11:19.765 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:11:19 np0005481065 nova_compute[260935]: 2025-10-11 09:11:19.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 16 KiB/s wr, 77 op/s
Oct 11 05:11:20 np0005481065 nova_compute[260935]: 2025-10-11 09:11:20.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:20 np0005481065 podman[366676]: 2025-10-11 09:11:20.809887346 +0000 UTC m=+0.094314731 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.514 2 DEBUG nova.compute.manager [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.515 2 DEBUG nova.compute.manager [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing instance network info cache due to event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.515 2 DEBUG oslo_concurrency.lockutils [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.516 2 DEBUG oslo_concurrency.lockutils [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.516 2 DEBUG nova.network.neutron [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.543 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.543 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.544 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.544 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.544 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.545 2 INFO nova.compute.manager [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Terminating instance#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.546 2 DEBUG nova.compute.manager [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:11:21 np0005481065 kernel: tap673c8285-44 (unregistering): left promiscuous mode
Oct 11 05:11:21 np0005481065 NetworkManager[44960]: <info>  [1760173881.6128] device (tap673c8285-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:21Z|00963|binding|INFO|Releasing lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a from this chassis (sb_readonly=0)
Oct 11 05:11:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:21Z|00964|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a down in Southbound
Oct 11 05:11:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:21Z|00965|binding|INFO|Removing iface tap673c8285-44 ovn-installed in OVS
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.643 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:11:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.647 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f unbound from our chassis#033[00m
Oct 11 05:11:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.652 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:11:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.654 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37fa09b8-f404-4126-b373-02cdaa404a68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.655 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace which is not needed anymore#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:21 np0005481065 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 11 05:11:21 np0005481065 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000068.scope: Consumed 12.247s CPU time.
Oct 11 05:11:21 np0005481065 systemd-machined[215705]: Machine qemu-124-instance-00000068 terminated.
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.824 2 INFO nova.virt.libvirt.driver [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance destroyed successfully.#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.824 2 DEBUG nova.objects.instance [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 0b3da34e-478e-44fc-a1ec-2601998d2b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.843 2 DEBUG nova.virt.libvirt.vif [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2087957653',display_name='tempest-TestNetworkAdvancedServerOps-server-2087957653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2087957653',id=104,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElmuTcJVI/ZkWAz+5kv9/dAzHDbPqJ/RLoEbpcbzY+bQ/5rlnwJAiwfIH5I2AByz1siErLhAl04MipJZMi/HRmixiLKf1bIaWO7+9o5POQWUQay494cNjZNh9+hKmeEUA==',key_name='tempest-TestNetworkAdvancedServerOps-746953215',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-ng6vrpfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:11:03Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0b3da34e-478e-44fc-a1ec-2601998d2b0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.843 2 DEBUG nova.network.os_vif_util [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.844 2 DEBUG nova.network.os_vif_util [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.844 2 DEBUG os_vif [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673c8285-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:11:21 np0005481065 nova_compute[260935]: 2025-10-11 09:11:21.855 2 INFO os_vif [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44')#033[00m
Oct 11 05:11:21 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : haproxy version is 2.8.14-c23fe91
Oct 11 05:11:21 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : path to executable is /usr/sbin/haproxy
Oct 11 05:11:21 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [WARNING]  (366661) : Exiting Master process...
Oct 11 05:11:21 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [WARNING]  (366661) : Exiting Master process...
Oct 11 05:11:21 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [ALERT]    (366661) : Current worker (366663) exited with code 143 (Terminated)
Oct 11 05:11:21 np0005481065 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [WARNING]  (366661) : All workers exited. Exiting... (0)
Oct 11 05:11:21 np0005481065 systemd[1]: libpod-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45.scope: Deactivated successfully.
Oct 11 05:11:21 np0005481065 podman[366728]: 2025-10-11 09:11:21.911306503 +0000 UTC m=+0.066303747 container died 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:11:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45-userdata-shm.mount: Deactivated successfully.
Oct 11 05:11:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8e0faee35d9e2350e7eab68b1a8768c75dd4c537a8039875f94be2481264f64f-merged.mount: Deactivated successfully.
Oct 11 05:11:21 np0005481065 podman[366728]: 2025-10-11 09:11:21.963447116 +0000 UTC m=+0.118444360 container cleanup 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:11:21 np0005481065 systemd[1]: libpod-conmon-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45.scope: Deactivated successfully.
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.031 2 DEBUG nova.compute.manager [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.032 2 DEBUG oslo_concurrency.lockutils [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.032 2 DEBUG oslo_concurrency.lockutils [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.032 2 DEBUG oslo_concurrency.lockutils [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.033 2 DEBUG nova.compute.manager [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.033 2 DEBUG nova.compute.manager [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:11:22 np0005481065 podman[366779]: 2025-10-11 09:11:22.044428357 +0000 UTC m=+0.052107103 container remove 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.053 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63a18d1c-5535-4498-b7c8-e16496d0d675]: (4, ('Sat Oct 11 09:11:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45)\n390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45\nSat Oct 11 09:11:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45)\n390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.056 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abdd2684-a3bc-4ff5-b2d5-c3476200935b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.058 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:22 np0005481065 kernel: tap2de6a2d5-40: left promiscuous mode
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a85c9f0-d2aa-4b38-bf8b-c27b8cfd9e93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.113 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e08cc8-b8e3-4515-bbc2-86b221c577e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.114 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02fc31ea-c7e3-4850-863b-92ba96a298e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e666e34e-2233-4d07-9c89-3ba6e63c52e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570656, 'reachable_time': 43356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366795, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 systemd[1]: run-netns-ovnmeta\x2d2de6a2d5\x2d4c4a\x2d403f\x2d9eb5\x2d27dc7562315f.mount: Deactivated successfully.
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.144 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:11:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.144 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b560725f-60ef-4ba1-9fd3-53d5e82748e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.345 2 INFO nova.virt.libvirt.driver [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deleting instance files /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d_del#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.347 2 INFO nova.virt.libvirt.driver [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deletion of /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d_del complete#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.398 2 INFO nova.compute.manager [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.399 2 DEBUG oslo.service.loopingcall [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.399 2 DEBUG nova.compute.manager [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:11:22 np0005481065 nova_compute[260935]: 2025-10-11 09:11:22.400 2 DEBUG nova.network.neutron [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:11:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 356 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 27 KiB/s wr, 76 op/s
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.357 2 DEBUG nova.network.neutron [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.392 2 INFO nova.compute.manager [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 0.99 seconds to deallocate network for instance.#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.414 2 DEBUG nova.network.neutron [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updated VIF entry in instance network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.415 2 DEBUG nova.network.neutron [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.446 2 DEBUG oslo_concurrency.lockutils [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.452 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.453 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.616 2 DEBUG oslo_concurrency.processutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.675 2 DEBUG nova.compute.manager [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-deleted-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.676 2 INFO nova.compute.manager [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Neutron deleted interface 673c8285-440e-4cb6-b81b-1bd2ddb7375a; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.677 2 DEBUG nova.network.neutron [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:11:23 np0005481065 nova_compute[260935]: 2025-10-11 09:11:23.702 2 DEBUG nova.compute.manager [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Detach interface failed, port_id=673c8285-440e-4cb6-b81b-1bd2ddb7375a, reason: Instance 0b3da34e-478e-44fc-a1ec-2601998d2b0d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:11:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:11:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961666844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.106 2 DEBUG oslo_concurrency.processutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.115 2 DEBUG nova.compute.provider_tree [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.135 2 DEBUG nova.scheduler.client.report [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.160 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.200 2 INFO nova.scheduler.client.report [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 0b3da34e-478e-44fc-a1ec-2601998d2b0d#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.277 2 DEBUG nova.compute.manager [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.278 2 DEBUG oslo_concurrency.lockutils [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.278 2 DEBUG oslo_concurrency.lockutils [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.279 2 DEBUG oslo_concurrency.lockutils [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.279 2 DEBUG nova.compute.manager [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.280 2 WARNING nova.compute.manager [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.284 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 28 KiB/s wr, 77 op/s
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:11:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:11:24 np0005481065 nova_compute[260935]: 2025-10-11 09:11:24.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:25.486 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:11:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:25.488 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.733 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:11:25 np0005481065 nova_compute[260935]: 2025-10-11 09:11:25.734 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:11:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761223916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.257 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.378 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.392 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.392 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:11:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 28 KiB/s wr, 77 op/s
Oct 11 05:11:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:11:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266745744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:11:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:11:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266745744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.744 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.748 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3106MB free_disk=59.830528259277344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.873 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.873 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:11:26 np0005481065 podman[366940]: 2025-10-11 09:11:26.963206663 +0000 UTC m=+0.085062516 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:11:26 np0005481065 nova_compute[260935]: 2025-10-11 09:11:26.980 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:11:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666519937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:11:27 np0005481065 nova_compute[260935]: 2025-10-11 09:11:27.445 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:27 np0005481065 nova_compute[260935]: 2025-10-11 09:11:27.454 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:11:27 np0005481065 nova_compute[260935]: 2025-10-11 09:11:27.481 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:11:27 np0005481065 nova_compute[260935]: 2025-10-11 09:11:27.515 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:11:27 np0005481065 nova_compute[260935]: 2025-10-11 09:11:27.516 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 846c8ff3-2ccb-47e3-826a-7a0e7ccbb9db does not exist
Oct 11 05:11:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c5434dcd-861b-4075-bd16-62120523cf83 does not exist
Oct 11 05:11:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3cb2f74c-db94-494f-8993-c2883eb7ea06 does not exist
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:11:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:28Z|00966|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:11:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:28Z|00967|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:11:28 np0005481065 nova_compute[260935]: 2025-10-11 09:11:28.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:28Z|00968|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:11:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:28Z|00969|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:11:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 11 05:11:28 np0005481065 nova_compute[260935]: 2025-10-11 09:11:28.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:28 np0005481065 nova_compute[260935]: 2025-10-11 09:11:28.512 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:28 np0005481065 nova_compute[260935]: 2025-10-11 09:11:28.543 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:11:28 np0005481065 nova_compute[260935]: 2025-10-11 09:11:28.544 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:11:28 np0005481065 nova_compute[260935]: 2025-10-11 09:11:28.611 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 05:11:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:28 np0005481065 podman[367272]: 2025-10-11 09:11:28.952515445 +0000 UTC m=+0.063981162 container create 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:11:28 np0005481065 systemd[1]: Started libpod-conmon-1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1.scope.
Oct 11 05:11:29 np0005481065 podman[367272]: 2025-10-11 09:11:28.926850725 +0000 UTC m=+0.038316472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:11:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:29 np0005481065 podman[367272]: 2025-10-11 09:11:29.042593369 +0000 UTC m=+0.154059116 container init 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:11:29 np0005481065 podman[367272]: 2025-10-11 09:11:29.059409854 +0000 UTC m=+0.170875571 container start 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:11:29 np0005481065 podman[367272]: 2025-10-11 09:11:29.062739306 +0000 UTC m=+0.174205063 container attach 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:11:29 np0005481065 exciting_mestorf[367289]: 167 167
Oct 11 05:11:29 np0005481065 systemd[1]: libpod-1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1.scope: Deactivated successfully.
Oct 11 05:11:29 np0005481065 podman[367272]: 2025-10-11 09:11:29.066707576 +0000 UTC m=+0.178173323 container died 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:11:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e6a2775652ca2131ba9553fcdbbc8621f0aef780ccbca67dc59bef8d3bef9f7c-merged.mount: Deactivated successfully.
Oct 11 05:11:29 np0005481065 podman[367272]: 2025-10-11 09:11:29.130172973 +0000 UTC m=+0.241638710 container remove 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:11:29 np0005481065 systemd[1]: libpod-conmon-1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1.scope: Deactivated successfully.
Oct 11 05:11:29 np0005481065 podman[367315]: 2025-10-11 09:11:29.396514204 +0000 UTC m=+0.066568883 container create c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:11:29 np0005481065 systemd[1]: Started libpod-conmon-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope.
Oct 11 05:11:29 np0005481065 podman[367315]: 2025-10-11 09:11:29.373021954 +0000 UTC m=+0.043076633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:11:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:29 np0005481065 podman[367315]: 2025-10-11 09:11:29.513829831 +0000 UTC m=+0.183884530 container init c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:11:29 np0005481065 podman[367315]: 2025-10-11 09:11:29.531740267 +0000 UTC m=+0.201794946 container start c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:11:29 np0005481065 podman[367315]: 2025-10-11 09:11:29.536493848 +0000 UTC m=+0.206548497 container attach c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:11:29 np0005481065 nova_compute[260935]: 2025-10-11 09:11:29.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 30 op/s
Oct 11 05:11:30 np0005481065 peaceful_mahavira[367331]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:11:30 np0005481065 peaceful_mahavira[367331]: --> relative data size: 1.0
Oct 11 05:11:30 np0005481065 peaceful_mahavira[367331]: --> All data devices are unavailable
Oct 11 05:11:30 np0005481065 systemd[1]: libpod-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope: Deactivated successfully.
Oct 11 05:11:30 np0005481065 systemd[1]: libpod-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope: Consumed 1.063s CPU time.
Oct 11 05:11:30 np0005481065 podman[367315]: 2025-10-11 09:11:30.676627047 +0000 UTC m=+1.346681766 container died c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:11:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde-merged.mount: Deactivated successfully.
Oct 11 05:11:30 np0005481065 podman[367315]: 2025-10-11 09:11:30.761594648 +0000 UTC m=+1.431649337 container remove c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 05:11:30 np0005481065 systemd[1]: libpod-conmon-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope: Deactivated successfully.
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.728329087 +0000 UTC m=+0.064242639 container create 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 05:11:31 np0005481065 systemd[1]: Started libpod-conmon-0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17.scope.
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.703304474 +0000 UTC m=+0.039218066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:11:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.833374244 +0000 UTC m=+0.169287836 container init 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.84333658 +0000 UTC m=+0.179250122 container start 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.847884476 +0000 UTC m=+0.183798048 container attach 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:11:31 np0005481065 cool_curie[367532]: 167 167
Oct 11 05:11:31 np0005481065 systemd[1]: libpod-0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17.scope: Deactivated successfully.
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.850364965 +0000 UTC m=+0.186278507 container died 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:11:31 np0005481065 nova_compute[260935]: 2025-10-11 09:11:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9baec829c3182bd4a312656579301074f89f2cfe95ee5134ae29bbd1397ef7c3-merged.mount: Deactivated successfully.
Oct 11 05:11:31 np0005481065 podman[367515]: 2025-10-11 09:11:31.909394669 +0000 UTC m=+0.245308221 container remove 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:11:31 np0005481065 systemd[1]: libpod-conmon-0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17.scope: Deactivated successfully.
Oct 11 05:11:32 np0005481065 podman[367538]: 2025-10-11 09:11:32.01892437 +0000 UTC m=+0.119634692 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Oct 11 05:11:32 np0005481065 podman[367546]: 2025-10-11 09:11:32.050459883 +0000 UTC m=+0.140831649 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:11:32 np0005481065 podman[367600]: 2025-10-11 09:11:32.150887283 +0000 UTC m=+0.054505840 container create 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:11:32 np0005481065 systemd[1]: Started libpod-conmon-61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8.scope.
Oct 11 05:11:32 np0005481065 podman[367600]: 2025-10-11 09:11:32.130254532 +0000 UTC m=+0.033873059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:11:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:32 np0005481065 podman[367600]: 2025-10-11 09:11:32.283984827 +0000 UTC m=+0.187603414 container init 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:11:32 np0005481065 podman[367600]: 2025-10-11 09:11:32.296200065 +0000 UTC m=+0.199818572 container start 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:11:32 np0005481065 podman[367600]: 2025-10-11 09:11:32.299949249 +0000 UTC m=+0.203567806 container attach 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:11:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]: {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:    "0": [
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:        {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "devices": [
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "/dev/loop3"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            ],
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_name": "ceph_lv0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_size": "21470642176",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "name": "ceph_lv0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "tags": {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cluster_name": "ceph",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.crush_device_class": "",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.encrypted": "0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osd_id": "0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.type": "block",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.vdo": "0"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            },
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "type": "block",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "vg_name": "ceph_vg0"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:        }
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:    ],
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:    "1": [
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:        {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "devices": [
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "/dev/loop4"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            ],
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_name": "ceph_lv1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_size": "21470642176",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "name": "ceph_lv1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "tags": {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cluster_name": "ceph",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.crush_device_class": "",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.encrypted": "0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osd_id": "1",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.type": "block",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.vdo": "0"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            },
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "type": "block",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "vg_name": "ceph_vg1"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:        }
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:    ],
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:    "2": [
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:        {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "devices": [
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "/dev/loop5"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            ],
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_name": "ceph_lv2",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_size": "21470642176",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "name": "ceph_lv2",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "tags": {
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.cluster_name": "ceph",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.crush_device_class": "",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.encrypted": "0",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osd_id": "2",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.type": "block",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:                "ceph.vdo": "0"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            },
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "type": "block",
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:            "vg_name": "ceph_vg2"
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:        }
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]:    ]
Oct 11 05:11:33 np0005481065 elastic_lamarr[367616]: }
Oct 11 05:11:33 np0005481065 systemd[1]: libpod-61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8.scope: Deactivated successfully.
Oct 11 05:11:33 np0005481065 podman[367600]: 2025-10-11 09:11:33.129991493 +0000 UTC m=+1.033610030 container died 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:11:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99-merged.mount: Deactivated successfully.
Oct 11 05:11:33 np0005481065 podman[367600]: 2025-10-11 09:11:33.206028597 +0000 UTC m=+1.109647154 container remove 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:11:33 np0005481065 systemd[1]: libpod-conmon-61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8.scope: Deactivated successfully.
Oct 11 05:11:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.055677145 +0000 UTC m=+0.046189380 container create af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:11:34 np0005481065 systemd[1]: Started libpod-conmon-af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202.scope.
Oct 11 05:11:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.036466183 +0000 UTC m=+0.026978448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.151683122 +0000 UTC m=+0.142195447 container init af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.161577316 +0000 UTC m=+0.152089571 container start af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.166279686 +0000 UTC m=+0.156791951 container attach af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:11:34 np0005481065 relaxed_bouman[367796]: 167 167
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.169894986 +0000 UTC m=+0.160407251 container died af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:11:34 np0005481065 systemd[1]: libpod-af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202.scope: Deactivated successfully.
Oct 11 05:11:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bc53acc7d5985cc7a43820902b781d3a6135ccd70fa205ae659107aba912596e-merged.mount: Deactivated successfully.
Oct 11 05:11:34 np0005481065 podman[367779]: 2025-10-11 09:11:34.224977401 +0000 UTC m=+0.215489676 container remove af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:11:34 np0005481065 systemd[1]: libpod-conmon-af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202.scope: Deactivated successfully.
Oct 11 05:11:34 np0005481065 podman[367820]: 2025-10-11 09:11:34.476352619 +0000 UTC m=+0.072830927 container create 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:11:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 682 B/s wr, 9 op/s
Oct 11 05:11:34 np0005481065 systemd[1]: Started libpod-conmon-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope.
Oct 11 05:11:34 np0005481065 podman[367820]: 2025-10-11 09:11:34.446707368 +0000 UTC m=+0.043185746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:11:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:34 np0005481065 podman[367820]: 2025-10-11 09:11:34.599422065 +0000 UTC m=+0.195900403 container init 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:11:34 np0005481065 podman[367820]: 2025-10-11 09:11:34.616462847 +0000 UTC m=+0.212941175 container start 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:11:34 np0005481065 podman[367820]: 2025-10-11 09:11:34.620765316 +0000 UTC m=+0.217243654 container attach 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:11:34 np0005481065 nova_compute[260935]: 2025-10-11 09:11:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:35.491 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:35 np0005481065 great_snyder[367837]: {
Oct 11 05:11:35 np0005481065 great_snyder[367837]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "osd_id": 2,
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "type": "bluestore"
Oct 11 05:11:35 np0005481065 great_snyder[367837]:    },
Oct 11 05:11:35 np0005481065 great_snyder[367837]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "osd_id": 0,
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "type": "bluestore"
Oct 11 05:11:35 np0005481065 great_snyder[367837]:    },
Oct 11 05:11:35 np0005481065 great_snyder[367837]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "osd_id": 1,
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:11:35 np0005481065 great_snyder[367837]:        "type": "bluestore"
Oct 11 05:11:35 np0005481065 great_snyder[367837]:    }
Oct 11 05:11:35 np0005481065 great_snyder[367837]: }
Oct 11 05:11:35 np0005481065 systemd[1]: libpod-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope: Deactivated successfully.
Oct 11 05:11:35 np0005481065 podman[367820]: 2025-10-11 09:11:35.713532043 +0000 UTC m=+1.310010381 container died 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:11:35 np0005481065 systemd[1]: libpod-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope: Consumed 1.105s CPU time.
Oct 11 05:11:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c-merged.mount: Deactivated successfully.
Oct 11 05:11:35 np0005481065 podman[367820]: 2025-10-11 09:11:35.785283299 +0000 UTC m=+1.381761607 container remove 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:11:35 np0005481065 systemd[1]: libpod-conmon-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope: Deactivated successfully.
Oct 11 05:11:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:11:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:11:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1dd096a8-c3b3-4dd6-824c-1f8779428b53 does not exist
Oct 11 05:11:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 92cec2f9-607b-4756-845c-13ebc86319de does not exist
Oct 11 05:11:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 05:11:36 np0005481065 nova_compute[260935]: 2025-10-11 09:11:36.792 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173881.7909877, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:11:36 np0005481065 nova_compute[260935]: 2025-10-11 09:11:36.793 2 INFO nova.compute.manager [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:11:36 np0005481065 nova_compute[260935]: 2025-10-11 09:11:36.820 2 DEBUG nova.compute.manager [None req-2a0d7db5-a1fc-4746-b921-0f93b8001a07 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:11:36 np0005481065 nova_compute[260935]: 2025-10-11 09:11:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 05:11:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:39 np0005481065 nova_compute[260935]: 2025-10-11 09:11:39.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:11:41 np0005481065 nova_compute[260935]: 2025-10-11 09:11:41.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:11:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:11:44 np0005481065 nova_compute[260935]: 2025-10-11 09:11:44.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:11:46 np0005481065 nova_compute[260935]: 2025-10-11 09:11:46.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.139 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.140 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.161 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.266 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.267 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.279 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.280 2 INFO nova.compute.claims [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:11:47 np0005481065 nova_compute[260935]: 2025-10-11 09:11:47.524 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:11:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240100118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.027 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.035 2 DEBUG nova.compute.provider_tree [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.056 2 DEBUG nova.scheduler.client.report [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.092 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.093 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.192 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.193 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.219 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.259 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.361 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.364 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.365 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating image(s)#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.403 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.447 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.487 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.493 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.598 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.599 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.600 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.600 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.625 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.629 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.781 2 DEBUG nova.policy [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:11:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:48 np0005481065 nova_compute[260935]: 2025-10-11 09:11:48.918 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.011 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.143 2 DEBUG nova.objects.instance [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.161 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.162 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Ensure instance console log exists: /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.162 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.163 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.164 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.819 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Successfully created port: 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:11:49 np0005481065 nova_compute[260935]: 2025-10-11 09:11:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.185 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Successfully updated port: 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.206 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.206 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.206 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.475 2 DEBUG nova.compute.manager [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.476 2 DEBUG nova.compute.manager [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing instance network info cache due to event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.476 2 DEBUG oslo_concurrency.lockutils [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.574 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:11:51 np0005481065 podman[368123]: 2025-10-11 09:11:51.804260504 +0000 UTC m=+0.099392612 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:11:51 np0005481065 nova_compute[260935]: 2025-10-11 09:11:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 370 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 24 op/s
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.931 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.964 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.965 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance network_info: |[{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.966 2 DEBUG oslo_concurrency.lockutils [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.966 2 DEBUG nova.network.neutron [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.970 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start _get_guest_xml network_info=[{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.974 2 WARNING nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.982 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.983 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.989 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.989 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.990 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.991 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.992 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.992 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.992 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.993 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.993 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.994 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.994 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.995 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.995 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:11:52 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.996 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:11:53 np0005481065 nova_compute[260935]: 2025-10-11 09:11:52.999 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:11:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727017568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:11:53 np0005481065 nova_compute[260935]: 2025-10-11 09:11:53.512 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:53 np0005481065 nova_compute[260935]: 2025-10-11 09:11:53.548 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:53 np0005481065 nova_compute[260935]: 2025-10-11 09:11:53.551 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:11:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712901151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.006 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.008 2 DEBUG nova.virt.libvirt.vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:11:48Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.008 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.009 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.010 2 DEBUG nova.objects.instance [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.026 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <uuid>0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</uuid>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <name>instance-00000069</name>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1973542017</nova:name>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:11:52</nova:creationTime>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <nova:port uuid="0ae3e094-fe06-40c1-8840-cf16ba40f7fb">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <entry name="serial">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <entry name="uuid">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:36:24:98"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <target dev="tap0ae3e094-fe"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log" append="off"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:11:54 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:11:54 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:11:54 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:11:54 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.027 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Preparing to wait for external event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.027 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.028 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.028 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.028 2 DEBUG nova.virt.libvirt.vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:11:48Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.029 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.029 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.030 2 DEBUG os_vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ae3e094-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ae3e094-fe, col_values=(('external_ids', {'iface-id': '0ae3e094-fe06-40c1-8840-cf16ba40f7fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:24:98', 'vm-uuid': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:54 np0005481065 NetworkManager[44960]: <info>  [1760173914.0363] manager: (tap0ae3e094-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.047 2 INFO os_vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.101 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.101 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.102 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:36:24:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.103 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Using config drive#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.136 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.739 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating config drive at /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.749 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd_4ys_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.900 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd_4ys_n" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:11:54
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr']
Oct 11 05:11:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.936 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:11:54 np0005481065 nova_compute[260935]: 2025-10-11 09:11:54.942 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:11:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 32K writes, 126K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.78 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5554 writes, 19K keys, 5554 commit groups, 1.0 writes per commit group, ingest: 17.50 MB, 0.03 MB/s#012Interval WAL: 5554 writes, 2320 syncs, 2.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.177 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.178 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting local config drive /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config because it was imported into RBD.#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.275 2 DEBUG nova.network.neutron [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updated VIF entry in instance network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.276 2 DEBUG nova.network.neutron [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:11:55 np0005481065 kernel: tap0ae3e094-fe: entered promiscuous mode
Oct 11 05:11:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:55Z|00970|binding|INFO|Claiming lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb for this chassis.
Oct 11 05:11:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:55Z|00971|binding|INFO|0ae3e094-fe06-40c1-8840-cf16ba40f7fb: Claiming fa:16:3e:36:24:98 10.100.0.9
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 NetworkManager[44960]: <info>  [1760173915.2928] manager: (tap0ae3e094-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.296 2 DEBUG oslo_concurrency.lockutils [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:11:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.322 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.324 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 bound to our chassis#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.327 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba72350-6e1a-4542-9fe5-a0774a411936#033[00m
Oct 11 05:11:55 np0005481065 systemd-machined[215705]: New machine qemu-125-instance-00000069.
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.355 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9f272e-38ba-4ae2-b5c5-47f1478a7fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.356 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfba72350-61 in ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.359 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfba72350-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.359 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[59b93252-2d4c-4612-856c-4112837b583b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6695ffd0-fbcc-48a9-aad8-4e72d045ccff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.383 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d675da1e-e7e6-4043-8898-092cfb36249b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 systemd[1]: Started Virtual Machine qemu-125-instance-00000069.
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:55Z|00972|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb ovn-installed in OVS
Oct 11 05:11:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:55Z|00973|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb up in Southbound
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.412 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6e57e5-0c4d-42a0-b7ad-e9f5e9288727]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 systemd-udevd[368281]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:11:55 np0005481065 NetworkManager[44960]: <info>  [1760173915.4493] device (tap0ae3e094-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:11:55 np0005481065 NetworkManager[44960]: <info>  [1760173915.4509] device (tap0ae3e094-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.460 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b758fe1c-2f38-4dfe-910c-71393363755b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.466 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb80155-49ae-4ea4-8124-1f2d2e1ade2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 NetworkManager[44960]: <info>  [1760173915.4707] manager: (tapfba72350-60): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.522 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a20f9e56-8af2-42fa-a3f2-034877195d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.526 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8add052-d652-437b-8935-a442cc37bacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 NetworkManager[44960]: <info>  [1760173915.5715] device (tapfba72350-60): carrier: link connected
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.581 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e67b717a-111d-4c5c-aecb-1e99681d10bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[037cb446-7205-4d2e-98e5-d7d74e268373]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576027, 'reachable_time': 31590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368311, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a54d5fa-6b17-4e25-96c5-526f937810ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:dcbd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576027, 'tstamp': 576027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368312, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f89d375-25e0-40a4-a131-18a31d94eab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576027, 'reachable_time': 31590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368313, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.701 2 DEBUG nova.compute.manager [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.701 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c85fc7a-f3e1-4a6b-809f-fb40fedd044c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.702 2 DEBUG oslo_concurrency.lockutils [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.702 2 DEBUG oslo_concurrency.lockutils [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.703 2 DEBUG oslo_concurrency.lockutils [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.703 2 DEBUG nova.compute.manager [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Processing event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.793 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bbde9201-a284-4b3f-8741-bf4c1861dafc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.794 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.795 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.795 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba72350-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 NetworkManager[44960]: <info>  [1760173915.7985] manager: (tapfba72350-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 11 05:11:55 np0005481065 kernel: tapfba72350-60: entered promiscuous mode
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.803 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba72350-60, col_values=(('external_ids', {'iface-id': '2e151f7b-2e1d-434e-9750-cdf44a5aa034'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:11:55Z|00974|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 05:11:55 np0005481065 nova_compute[260935]: 2025-10-11 09:11:55.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.835 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.838 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5c62f5-c303-4695-9a6e-ebfd72bbf14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.839 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:11:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.840 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'env', 'PROCESS_TAG=haproxy-fba72350-6e1a-4542-9fe5-a0774a411936', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fba72350-6e1a-4542-9fe5-a0774a411936.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:11:56 np0005481065 podman[368382]: 2025-10-11 09:11:56.285098259 +0000 UTC m=+0.063719805 container create ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:11:56 np0005481065 systemd[1]: Started libpod-conmon-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0.scope.
Oct 11 05:11:56 np0005481065 podman[368382]: 2025-10-11 09:11:56.250905982 +0000 UTC m=+0.029527598 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:11:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:11:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b21d042abfb89eb21b26b040500e0f55cb793382691f819975fafe3db3b783/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:11:56 np0005481065 podman[368382]: 2025-10-11 09:11:56.391953647 +0000 UTC m=+0.170575203 container init ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 05:11:56 np0005481065 podman[368382]: 2025-10-11 09:11:56.401317736 +0000 UTC m=+0.179939282 container start ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:11:56 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : New worker (368408) forked
Oct 11 05:11:56 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : Loading success.
Oct 11 05:11:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.882 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.883 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173916.8814278, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.883 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Started (Lifecycle Event)#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.887 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.892 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance spawned successfully.#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.893 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.908 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.917 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.923 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.923 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.924 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.925 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.926 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.926 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.936 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173916.8869085, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.959 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.963 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173916.8874564, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.964 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.993 2 INFO nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.994 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:56 np0005481065 nova_compute[260935]: 2025-10-11 09:11:56.995 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.003 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.043 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.071 2 INFO nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 9.86 seconds to build instance.#033[00m
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.092 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:11:57 np0005481065 podman[368419]: 2025-10-11 09:11:57.801055739 +0000 UTC m=+0.104596266 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.862 2 DEBUG nova.compute.manager [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.862 2 DEBUG oslo_concurrency.lockutils [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.863 2 DEBUG oslo_concurrency.lockutils [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.863 2 DEBUG oslo_concurrency.lockutils [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.863 2 DEBUG nova.compute.manager [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:11:57 np0005481065 nova_compute[260935]: 2025-10-11 09:11:57.863 2 WARNING nova.compute.manager [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state None.
Oct 11 05:11:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:11:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:11:59 np0005481065 nova_compute[260935]: 2025-10-11 09:11:59.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:00 np0005481065 nova_compute[260935]: 2025-10-11 09:12:00.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:12:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.1 total, 600.0 interval
    Cumulative writes: 34K writes, 126K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
    Cumulative WAL: 34K writes, 12K syncs, 2.74 writes per sync, written: 0.12 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 5812 writes, 20K keys, 5812 commit groups, 1.0 writes per commit group, ingest: 19.71 MB, 0.03 MB/s
    Interval WAL: 5812 writes, 2422 syncs, 2.40 writes per sync, written: 0.02 GB, 0.03 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:12:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:12:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:01Z|00975|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 05:12:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:01Z|00976|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:12:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:01Z|00977|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:12:01 np0005481065 NetworkManager[44960]: <info>  [1760173921.7488] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Oct 11 05:12:01 np0005481065 nova_compute[260935]: 2025-10-11 09:12:01.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:01 np0005481065 NetworkManager[44960]: <info>  [1760173921.7504] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct 11 05:12:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:01Z|00978|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 05:12:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:01Z|00979|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:12:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:01Z|00980|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:12:01 np0005481065 nova_compute[260935]: 2025-10-11 09:12:01.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:01 np0005481065 nova_compute[260935]: 2025-10-11 09:12:01.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:02 np0005481065 nova_compute[260935]: 2025-10-11 09:12:02.294 2 DEBUG nova.compute.manager [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:12:02 np0005481065 nova_compute[260935]: 2025-10-11 09:12:02.295 2 DEBUG nova.compute.manager [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing instance network info cache due to event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:12:02 np0005481065 nova_compute[260935]: 2025-10-11 09:12:02.296 2 DEBUG oslo_concurrency.lockutils [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:12:02 np0005481065 nova_compute[260935]: 2025-10-11 09:12:02.296 2 DEBUG oslo_concurrency.lockutils [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:12:02 np0005481065 nova_compute[260935]: 2025-10-11 09:12:02.297 2 DEBUG nova.network.neutron [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:12:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 11 05:12:02 np0005481065 podman[368445]: 2025-10-11 09:12:02.793611398 +0000 UTC m=+0.093396047 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:12:02 np0005481065 podman[368446]: 2025-10-11 09:12:02.807644266 +0000 UTC m=+0.110855719 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:12:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:04 np0005481065 nova_compute[260935]: 2025-10-11 09:12:04.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:04 np0005481065 nova_compute[260935]: 2025-10-11 09:12:04.109 2 DEBUG nova.network.neutron [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updated VIF entry in instance network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:12:04 np0005481065 nova_compute[260935]: 2025-10-11 09:12:04.110 2 DEBUG nova.network.neutron [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:12:04 np0005481065 nova_compute[260935]: 2025-10-11 09:12:04.137 2 DEBUG oslo_concurrency.lockutils [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:12:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 268 KiB/s wr, 76 op/s
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:12:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:12:05 np0005481065 nova_compute[260935]: 2025-10-11 09:12:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:12:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.3 total, 600.0 interval
    Cumulative writes: 26K writes, 105K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
    Cumulative WAL: 26K writes, 9284 syncs, 2.85 writes per sync, written: 0.10 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 4253 writes, 16K keys, 4253 commit groups, 1.0 writes per commit group, ingest: 15.90 MB, 0.03 MB/s
    Interval WAL: 4253 writes, 1727 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:12:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:12:07 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 11 05:12:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 05:12:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 100 KiB/s wr, 81 op/s
Oct 11 05:12:08 np0005481065 nova_compute[260935]: 2025-10-11 09:12:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:12:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:09 np0005481065 nova_compute[260935]: 2025-10-11 09:12:09.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:09Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:24:98 10.100.0.9
Oct 11 05:12:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:09Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:24:98 10.100.0.9
Oct 11 05:12:10 np0005481065 nova_compute[260935]: 2025-10-11 09:12:10.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 88 KiB/s wr, 71 op/s
Oct 11 05:12:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 400 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 05:12:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:14 np0005481065 nova_compute[260935]: 2025-10-11 09:12:14.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 05:12:15 np0005481065 nova_compute[260935]: 2025-10-11 09:12:15.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:15.207 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:12:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:12:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:12:16 np0005481065 nova_compute[260935]: 2025-10-11 09:12:16.351 2 INFO nova.compute.manager [None req-35f7fdbf-915f-4e01-9594-8f08e657df71 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Get console output
Oct 11 05:12:16 np0005481065 nova_compute[260935]: 2025-10-11 09:12:16.357 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 05:12:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:12:17 np0005481065 nova_compute[260935]: 2025-10-11 09:12:17.665 2 INFO nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Rebuilding instance
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.008 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.023 2 DEBUG nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.092 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.111 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.128 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.148 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.162 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 05:12:18 np0005481065 nova_compute[260935]: 2025-10-11 09:12:18.166 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 05:12:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:12:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:19 np0005481065 nova_compute[260935]: 2025-10-11 09:12:19.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:20 np0005481065 kernel: tap0ae3e094-fe (unregistering): left promiscuous mode
Oct 11 05:12:20 np0005481065 NetworkManager[44960]: <info>  [1760173940.4649] device (tap0ae3e094-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:20Z|00981|binding|INFO|Releasing lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb from this chassis (sb_readonly=0)
Oct 11 05:12:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:20Z|00982|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb down in Southbound
Oct 11 05:12:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:20Z|00983|binding|INFO|Removing iface tap0ae3e094-fe ovn-installed in OVS
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.492 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.495 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 unbound from our chassis#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.499 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fba72350-6e1a-4542-9fe5-a0774a411936, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:12:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.501 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3acafd0f-2703-4ec0-a0fc-43943e370025]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.503 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace which is not needed anymore#033[00m
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:20 np0005481065 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 11 05:12:20 np0005481065 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000069.scope: Consumed 13.530s CPU time.
Oct 11 05:12:20 np0005481065 systemd-machined[215705]: Machine qemu-125-instance-00000069 terminated.
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:20 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : haproxy version is 2.8.14-c23fe91
Oct 11 05:12:20 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : path to executable is /usr/sbin/haproxy
Oct 11 05:12:20 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [WARNING]  (368406) : Exiting Master process...
Oct 11 05:12:20 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [WARNING]  (368406) : Exiting Master process...
Oct 11 05:12:20 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [ALERT]    (368406) : Current worker (368408) exited with code 143 (Terminated)
Oct 11 05:12:20 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [WARNING]  (368406) : All workers exited. Exiting... (0)
Oct 11 05:12:20 np0005481065 systemd[1]: libpod-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0.scope: Deactivated successfully.
Oct 11 05:12:20 np0005481065 podman[368517]: 2025-10-11 09:12:20.728645715 +0000 UTC m=+0.077300710 container died ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:12:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0-userdata-shm.mount: Deactivated successfully.
Oct 11 05:12:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-79b21d042abfb89eb21b26b040500e0f55cb793382691f819975fafe3db3b783-merged.mount: Deactivated successfully.
Oct 11 05:12:20 np0005481065 podman[368517]: 2025-10-11 09:12:20.795244549 +0000 UTC m=+0.143899554 container cleanup ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:12:20 np0005481065 systemd[1]: libpod-conmon-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0.scope: Deactivated successfully.
Oct 11 05:12:20 np0005481065 podman[368559]: 2025-10-11 09:12:20.900717718 +0000 UTC m=+0.067236142 container remove ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.911 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[512e3ac5-2a73-4c04-b204-e81e818d501e]: (4, ('Sat Oct 11 09:12:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0)\nec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0\nSat Oct 11 09:12:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0)\nec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.913 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a49772b3-333a-457a-9707-2a988d08abce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.914 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:20 np0005481065 kernel: tapfba72350-60: left promiscuous mode
Oct 11 05:12:20 np0005481065 nova_compute[260935]: 2025-10-11 09:12:20.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[848b16b2-08f6-4172-a6be-5ce305871953]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.975 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[64ca7b3e-78e6-4fb7-b5f2-8e6d9d330b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.979 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b53c71e1-6290-48cf-b7c2-b05c13de0db7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:21.006 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24c9f837-6b6f-4b09-9654-97c785b0d54f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576015, 'reachable_time': 15086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368577, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:21 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfba72350\x2d6e1a\x2d4542\x2d9fe5\x2da0774a411936.mount: Deactivated successfully.
Oct 11 05:12:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:21.010 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:12:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:21.010 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[354a2bb7-1357-435f-bbe8-1634fe125dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.070 2 DEBUG nova.compute.manager [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.070 2 DEBUG oslo_concurrency.lockutils [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.071 2 DEBUG oslo_concurrency.lockutils [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.071 2 DEBUG oslo_concurrency.lockutils [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.072 2 DEBUG nova.compute.manager [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.072 2 WARNING nova.compute.manager [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state rebuilding.#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.189 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance shutdown successfully after 3 seconds.#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.197 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance destroyed successfully.#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.204 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance destroyed successfully.#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.206 2 DEBUG nova.virt.libvirt.vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:12:17Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.206 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.208 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.208 2 DEBUG os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ae3e094-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.259 2 INFO os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.665 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting instance files /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.666 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deletion of /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del complete#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.849 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.850 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating image(s)#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.884 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.921 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.957 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:21 np0005481065 nova_compute[260935]: 2025-10-11 09:12:21.963 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.076 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.077 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.078 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.078 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.104 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.108 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.376 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.429 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:12:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 351 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.9 MiB/s wr, 83 op/s
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.530 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.530 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Ensure instance console log exists: /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.531 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.532 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.532 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.535 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start _get_guest_xml network_info=[{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.538 2 WARNING nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.548 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.548 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.552 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.552 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.553 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.553 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.553 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.556 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:12:22 np0005481065 nova_compute[260935]: 2025-10-11 09:12:22.575 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:22 np0005481065 podman[368774]: 2025-10-11 09:12:22.816320121 +0000 UTC m=+0.067895281 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:12:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:12:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2288654728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.033 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.072 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.080 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.177 2 DEBUG nova.compute.manager [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.179 2 DEBUG oslo_concurrency.lockutils [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.179 2 DEBUG oslo_concurrency.lockutils [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.180 2 DEBUG oslo_concurrency.lockutils [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.180 2 DEBUG nova.compute.manager [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.181 2 WARNING nova.compute.manager [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct 11 05:12:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:12:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435080860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.582 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.583 2 DEBUG nova.virt.libvirt.vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:12:21Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.584 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.585 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.588 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <uuid>0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</uuid>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <name>instance-00000069</name>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1973542017</nova:name>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:12:22</nova:creationTime>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <nova:port uuid="0ae3e094-fe06-40c1-8840-cf16ba40f7fb">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <entry name="serial">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <entry name="uuid">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:36:24:98"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <target dev="tap0ae3e094-fe"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log" append="off"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:12:23 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:12:23 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:12:23 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:12:23 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.591 2 DEBUG nova.virt.libvirt.vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:12:21Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.591 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.592 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.593 2 DEBUG os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ae3e094-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ae3e094-fe, col_values=(('external_ids', {'iface-id': '0ae3e094-fe06-40c1-8840-cf16ba40f7fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:24:98', 'vm-uuid': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:23 np0005481065 NetworkManager[44960]: <info>  [1760173943.6055] manager: (tap0ae3e094-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.611 2 INFO os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.691 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.692 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.692 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:36:24:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.693 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Using config drive#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.730 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.742 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.744 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.768 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:12:23 np0005481065 nova_compute[260935]: 2025-10-11 09:12:23.810 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'keypairs' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:12:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 346 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.1 MiB/s wr, 46 op/s
Oct 11 05:12:24 np0005481065 nova_compute[260935]: 2025-10-11 09:12:24.506 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating config drive at /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config#033[00m
Oct 11 05:12:24 np0005481065 nova_compute[260935]: 2025-10-11 09:12:24.519 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb8w5wtef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:24 np0005481065 nova_compute[260935]: 2025-10-11 09:12:24.695 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb8w5wtef" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:24 np0005481065 nova_compute[260935]: 2025-10-11 09:12:24.743 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:12:24 np0005481065 nova_compute[260935]: 2025-10-11 09:12:24.750 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:12:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.004 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.006 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting local config drive /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config because it was imported into RBD.#033[00m
Oct 11 05:12:25 np0005481065 kernel: tap0ae3e094-fe: entered promiscuous mode
Oct 11 05:12:25 np0005481065 NetworkManager[44960]: <info>  [1760173945.0842] manager: (tap0ae3e094-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Oct 11 05:12:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:25Z|00984|binding|INFO|Claiming lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb for this chassis.
Oct 11 05:12:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:25Z|00985|binding|INFO|0ae3e094-fe06-40c1-8840-cf16ba40f7fb: Claiming fa:16:3e:36:24:98 10.100.0.9
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.134 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.135 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 bound to our chassis#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.138 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba72350-6e1a-4542-9fe5-a0774a411936#033[00m
Oct 11 05:12:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:25Z|00986|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb ovn-installed in OVS
Oct 11 05:12:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:25Z|00987|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb up in Southbound
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.158 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be67eadc-084e-4f53-aa39-29e648ef9c1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.159 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfba72350-61 in ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:12:25 np0005481065 systemd-machined[215705]: New machine qemu-126-instance-00000069.
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.161 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfba72350-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.162 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83c0d177-bd1b-4afb-b03d-1b960dfeb4eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.163 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bdc562-2b5a-460d-95f5-b43e60b3fa43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 systemd[1]: Started Virtual Machine qemu-126-instance-00000069.
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.184 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[72ffbe3d-d63b-462d-b42c-8a37032f593e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 systemd-udevd[368920]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:12:25 np0005481065 NetworkManager[44960]: <info>  [1760173945.2027] device (tap0ae3e094-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:12:25 np0005481065 NetworkManager[44960]: <info>  [1760173945.2046] device (tap0ae3e094-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.218 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e932bfb5-da0e-4e8b-8c01-3f003dea5350]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.267 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[beca9cc4-ca7b-4b4b-8119-fb6057a156fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a326c4b-c757-4365-b9fe-3f7fedf80489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 NetworkManager[44960]: <info>  [1760173945.2782] manager: (tapfba72350-60): new Veth device (/org/freedesktop/NetworkManager/Devices/411)
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.321 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2c3890-22bc-4476-8267-aa5c8899d7c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.325 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bed1eb83-7d11-4e9c-88f4-aa2b5b951daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 NetworkManager[44960]: <info>  [1760173945.3563] device (tapfba72350-60): carrier: link connected
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.371 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b27411a0-5f8a-463e-b9bd-b16c6c30c883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd253e41-43b0-492b-882b-f6abb3a238f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579005, 'reachable_time': 39809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368951, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.413 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd31cde5-44e8-4d1d-8818-6bc0aed98a76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:dcbd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579005, 'tstamp': 579005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368952, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.437 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[254cba19-fc77-4a0a-85df-640c66fb83ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579005, 'reachable_time': 39809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368953, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbd078f-3ce8-45f7-bbae-3fef1b93d67b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.589 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[467133c9-5bf9-4adc-b233-86e49055d19f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.591 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.591 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.592 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba72350-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 NetworkManager[44960]: <info>  [1760173945.5961] manager: (tapfba72350-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct 11 05:12:25 np0005481065 kernel: tapfba72350-60: entered promiscuous mode
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.601 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba72350-60, col_values=(('external_ids', {'iface-id': '2e151f7b-2e1d-434e-9750-cdf44a5aa034'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:25Z|00988|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.634 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.638 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2743efe1-7994-47a4-abfe-319ff7ed7695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.639 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:12:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.639 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'env', 'PROCESS_TAG=haproxy-fba72350-6e1a-4542-9fe5-a0774a411936', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fba72350-6e1a-4542-9fe5-a0774a411936.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.946 2 DEBUG nova.compute.manager [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.946 2 DEBUG oslo_concurrency.lockutils [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.946 2 DEBUG oslo_concurrency.lockutils [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.947 2 DEBUG oslo_concurrency.lockutils [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.947 2 DEBUG nova.compute.manager [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:12:25 np0005481065 nova_compute[260935]: 2025-10-11 09:12:25.947 2 WARNING nova.compute.manager [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct 11 05:12:26 np0005481065 podman[369025]: 2025-10-11 09:12:26.075478741 +0000 UTC m=+0.054508470 container create cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:12:26 np0005481065 systemd[1]: Started libpod-conmon-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1.scope.
Oct 11 05:12:26 np0005481065 podman[369025]: 2025-10-11 09:12:26.045866352 +0000 UTC m=+0.024896171 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:12:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f85520362d24f3ac3e0a81a81c4d2407edd9ee81684410d35e390bcf954224/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.165 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.166 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173946.1648345, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.166 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:12:26 np0005481065 podman[369025]: 2025-10-11 09:12:26.167331854 +0000 UTC m=+0.146361603 container init cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.169 2 DEBUG nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.169 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.173 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance spawned successfully.#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.174 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:12:26 np0005481065 podman[369025]: 2025-10-11 09:12:26.174833621 +0000 UTC m=+0.153863350 container start cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.190 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.194 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:12:26 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : New worker (369046) forked
Oct 11 05:12:26 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : Loading success.
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.217 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.217 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.217 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.218 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.218 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.219 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.223 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173946.1676824, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Started (Lifecycle Event)#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.267 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.286 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.295 2 DEBUG nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.360 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.360 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.361 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.413 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:26.448 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:12:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:26.450 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:12:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 346 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Oct 11 05:12:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:12:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378638996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:12:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:12:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378638996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.740 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:12:26 np0005481065 nova_compute[260935]: 2025-10-11 09:12:26.740 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:12:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911322854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.353 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.437 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.437 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.437 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.441 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.441 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.445 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.445 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.449 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.450 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.638 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.639 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2962MB free_disk=59.818275451660156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.639 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.640 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.828 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:12:27 np0005481065 nova_compute[260935]: 2025-10-11 09:12:27.971 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.105 2 DEBUG nova.compute.manager [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.107 2 DEBUG oslo_concurrency.lockutils [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.108 2 DEBUG oslo_concurrency.lockutils [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.108 2 DEBUG oslo_concurrency.lockutils [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.109 2 DEBUG nova.compute.manager [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.110 2 WARNING nova.compute.manager [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state None.#033[00m
Oct 11 05:12:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:12:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527898686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.485 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.493 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:12:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.516 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.555 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.555 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:28 np0005481065 nova_compute[260935]: 2025-10-11 09:12:28.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:28 np0005481065 podman[369100]: 2025-10-11 09:12:28.771100883 +0000 UTC m=+0.071057868 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 05:12:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.555 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.556 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.557 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.776 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.777 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.778 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 05:12:30 np0005481065 nova_compute[260935]: 2025-10-11 09:12:30.778 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:12:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:31.453 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:12:31 np0005481065 nova_compute[260935]: 2025-10-11 09:12:31.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 05:12:33 np0005481065 nova_compute[260935]: 2025-10-11 09:12:33.314 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:12:33 np0005481065 nova_compute[260935]: 2025-10-11 09:12:33.341 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:12:33 np0005481065 nova_compute[260935]: 2025-10-11 09:12:33.342 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 05:12:33 np0005481065 nova_compute[260935]: 2025-10-11 09:12:33.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:33 np0005481065 podman[369120]: 2025-10-11 09:12:33.774903553 +0000 UTC m=+0.070459131 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:12:33 np0005481065 podman[369121]: 2025-10-11 09:12:33.869203882 +0000 UTC m=+0.166280682 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 05:12:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 104 op/s
Oct 11 05:12:35 np0005481065 nova_compute[260935]: 2025-10-11 09:12:35.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:35 np0005481065 nova_compute[260935]: 2025-10-11 09:12:35.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 743 KiB/s wr, 86 op/s
Oct 11 05:12:36 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 11 05:12:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:36Z|00989|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 05:12:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:36Z|00990|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:12:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:36Z|00991|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:12:36 np0005481065 nova_compute[260935]: 2025-10-11 09:12:36.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:37Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:24:98 10.100.0.9
Oct 11 05:12:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:37Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:24:98 10.100.0.9
Oct 11 05:12:37 np0005481065 podman[369438]: 2025-10-11 09:12:37.900176106 +0000 UTC m=+0.067644524 container create bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:12:37 np0005481065 podman[369438]: 2025-10-11 09:12:37.868711455 +0000 UTC m=+0.036179943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:37 np0005481065 systemd[1]: Started libpod-conmon-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope.
Oct 11 05:12:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:38 np0005481065 podman[369438]: 2025-10-11 09:12:38.017528104 +0000 UTC m=+0.184996612 container init bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:12:38 np0005481065 podman[369438]: 2025-10-11 09:12:38.025871035 +0000 UTC m=+0.193339473 container start bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:12:38 np0005481065 podman[369438]: 2025-10-11 09:12:38.029370832 +0000 UTC m=+0.196839330 container attach bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:12:38 np0005481065 brave_kilby[369454]: 167 167
Oct 11 05:12:38 np0005481065 systemd[1]: libpod-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope: Deactivated successfully.
Oct 11 05:12:38 np0005481065 conmon[369454]: conmon bdf9399e238f6e578a37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope/container/memory.events
Oct 11 05:12:38 np0005481065 podman[369438]: 2025-10-11 09:12:38.037282921 +0000 UTC m=+0.204751369 container died bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:12:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay-21c62a690f6168543425e8c60489777c09a6ffd06c39245337545ce0f121a193-merged.mount: Deactivated successfully.
Oct 11 05:12:38 np0005481065 podman[369438]: 2025-10-11 09:12:38.089156257 +0000 UTC m=+0.256624705 container remove bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:12:38 np0005481065 systemd[1]: libpod-conmon-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope: Deactivated successfully.
Oct 11 05:12:38 np0005481065 podman[369477]: 2025-10-11 09:12:38.38660796 +0000 UTC m=+0.087020080 container create 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:12:38 np0005481065 systemd[1]: Started libpod-conmon-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope.
Oct 11 05:12:38 np0005481065 podman[369477]: 2025-10-11 09:12:38.357055112 +0000 UTC m=+0.057467312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:38 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:38 np0005481065 podman[369477]: 2025-10-11 09:12:38.502986171 +0000 UTC m=+0.203398361 container init 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:12:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 395 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 129 op/s
Oct 11 05:12:38 np0005481065 podman[369477]: 2025-10-11 09:12:38.509040059 +0000 UTC m=+0.209452179 container start 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:12:38 np0005481065 podman[369477]: 2025-10-11 09:12:38.513566174 +0000 UTC m=+0.213978294 container attach 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:12:38 np0005481065 nova_compute[260935]: 2025-10-11 09:12:38.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:38Z|00992|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 05:12:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:38Z|00993|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:12:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:38Z|00994|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:12:38 np0005481065 nova_compute[260935]: 2025-10-11 09:12:38.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:40 np0005481065 nova_compute[260935]: 2025-10-11 09:12:40.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]: [
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:    {
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "available": false,
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "ceph_device": false,
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "lsm_data": {},
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "lvs": [],
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "path": "/dev/sr0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "rejected_reasons": [
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "Insufficient space (<5GB)",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "Has a FileSystem"
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        ],
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        "sys_api": {
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "actuators": null,
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "device_nodes": "sr0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "devname": "sr0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "human_readable_size": "482.00 KB",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "id_bus": "ata",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "model": "QEMU DVD-ROM",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "nr_requests": "2",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "parent": "/dev/sr0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "partitions": {},
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "path": "/dev/sr0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "removable": "1",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "rev": "2.5+",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "ro": "0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "rotational": "0",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "sas_address": "",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "sas_device_handle": "",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "scheduler_mode": "mq-deadline",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "sectors": 0,
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "sectorsize": "2048",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "size": 493568.0,
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "support_discard": "2048",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "type": "disk",
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:            "vendor": "QEMU"
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:        }
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]:    }
Oct 11 05:12:40 np0005481065 stupefied_nightingale[369493]: ]
Oct 11 05:12:40 np0005481065 systemd[1]: libpod-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope: Deactivated successfully.
Oct 11 05:12:40 np0005481065 podman[369477]: 2025-10-11 09:12:40.445263752 +0000 UTC m=+2.145675872 container died 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:12:40 np0005481065 systemd[1]: libpod-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope: Consumed 2.014s CPU time.
Oct 11 05:12:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3-merged.mount: Deactivated successfully.
Oct 11 05:12:40 np0005481065 podman[369477]: 2025-10-11 09:12:40.504490811 +0000 UTC m=+2.204902921 container remove 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:12:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 395 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Oct 11 05:12:40 np0005481065 systemd[1]: libpod-conmon-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope: Deactivated successfully.
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7fce1dbd-60dd-41f0-a48b-7b690750ec82 does not exist
Oct 11 05:12:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 13192085-f10b-4996-a65c-9d264e846671 does not exist
Oct 11 05:12:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b618ddd0-6be9-4499-ab34-f544a8c32942 does not exist
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:12:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.467455684 +0000 UTC m=+0.064537197 container create 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:12:41 np0005481065 systemd[1]: Started libpod-conmon-555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329.scope.
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.441214968 +0000 UTC m=+0.038296561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:12:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.570240999 +0000 UTC m=+0.167322582 container init 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.58219353 +0000 UTC m=+0.179275073 container start 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.585990395 +0000 UTC m=+0.183071988 container attach 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:12:41 np0005481065 jovial_haibt[372112]: 167 167
Oct 11 05:12:41 np0005481065 systemd[1]: libpod-555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329.scope: Deactivated successfully.
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.588526345 +0000 UTC m=+0.185607888 container died 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:12:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cf8296d0c8d71531457f45c3ac1eff0372c1514b82000e7f17f7481f24751edf-merged.mount: Deactivated successfully.
Oct 11 05:12:41 np0005481065 podman[372096]: 2025-10-11 09:12:41.637344227 +0000 UTC m=+0.234425760 container remove 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:12:41 np0005481065 systemd[1]: libpod-conmon-555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329.scope: Deactivated successfully.
Oct 11 05:12:41 np0005481065 podman[372135]: 2025-10-11 09:12:41.892438057 +0000 UTC m=+0.061283887 container create bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:12:41 np0005481065 systemd[1]: Started libpod-conmon-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope.
Oct 11 05:12:41 np0005481065 podman[372135]: 2025-10-11 09:12:41.869300997 +0000 UTC m=+0.038146877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:42 np0005481065 podman[372135]: 2025-10-11 09:12:42.023960938 +0000 UTC m=+0.192806798 container init bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:12:42 np0005481065 podman[372135]: 2025-10-11 09:12:42.031969379 +0000 UTC m=+0.200815219 container start bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:12:42 np0005481065 podman[372135]: 2025-10-11 09:12:42.037529203 +0000 UTC m=+0.206375083 container attach bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:12:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 406 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 05:12:43 np0005481065 epic_mayer[372151]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:12:43 np0005481065 epic_mayer[372151]: --> relative data size: 1.0
Oct 11 05:12:43 np0005481065 epic_mayer[372151]: --> All data devices are unavailable
Oct 11 05:12:43 np0005481065 systemd[1]: libpod-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope: Deactivated successfully.
Oct 11 05:12:43 np0005481065 podman[372135]: 2025-10-11 09:12:43.233350043 +0000 UTC m=+1.402195853 container died bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:12:43 np0005481065 systemd[1]: libpod-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope: Consumed 1.129s CPU time.
Oct 11 05:12:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20-merged.mount: Deactivated successfully.
Oct 11 05:12:43 np0005481065 podman[372135]: 2025-10-11 09:12:43.291683737 +0000 UTC m=+1.460529547 container remove bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:12:43 np0005481065 systemd[1]: libpod-conmon-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope: Deactivated successfully.
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.702 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.703 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.740 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.774 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.776 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id 95632eb9-5895-4e20-b760-0f149aadf400 yields fingerprint d427ed36e4acfaf36d5cf36bd49361b1db4ee571 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.776 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 95632eb9-5895-4e20-b760-0f149aadf400 at (/var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571): checking#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.777 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 95632eb9-5895-4e20-b760-0f149aadf400 at (/var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.780 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.781 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id 03f2fef0-11c0-48e1-b3a0-3e02d898739e yields fingerprint 9811042c7d73cc51997f7c966840f4b7728169a1 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.781 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): checking#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.781 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.783 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] c176845c-89c0-4038-ba22-4ee79bd3ebfe is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.784 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] b75d8ded-515b-48ff-a6b6-28df88878996 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.784 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] 52be16b4-343a-4fd4-9041-39069a1fde2a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.785 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.785 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Active base files: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.785 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.785 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.786 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.786 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct 11 05:12:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:43 np0005481065 nova_compute[260935]: 2025-10-11 09:12:43.994 2 INFO nova.compute.manager [None req-279cc5eb-9669-44e6-9f9b-a1a3947ef73e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Get console output#033[00m
Oct 11 05:12:44 np0005481065 nova_compute[260935]: 2025-10-11 09:12:44.002 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.101273136 +0000 UTC m=+0.077492326 container create f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 05:12:44 np0005481065 systemd[1]: Started libpod-conmon-f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da.scope.
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.060045805 +0000 UTC m=+0.036265065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.20941731 +0000 UTC m=+0.185636530 container init f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.221582596 +0000 UTC m=+0.197801806 container start f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.22642317 +0000 UTC m=+0.202642340 container attach f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:12:44 np0005481065 pedantic_wilbur[372353]: 167 167
Oct 11 05:12:44 np0005481065 systemd[1]: libpod-f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da.scope: Deactivated successfully.
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.231064559 +0000 UTC m=+0.207283769 container died f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:12:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8f28d6adb7ef11f9483590958c837af5f44a8f197e5a6c91a9dbe16b5ead5d83-merged.mount: Deactivated successfully.
Oct 11 05:12:44 np0005481065 podman[372336]: 2025-10-11 09:12:44.29721219 +0000 UTC m=+0.273431400 container remove f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:12:44 np0005481065 systemd[1]: libpod-conmon-f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da.scope: Deactivated successfully.
Oct 11 05:12:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:12:44 np0005481065 podman[372376]: 2025-10-11 09:12:44.579016959 +0000 UTC m=+0.067262472 container create 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:12:44 np0005481065 podman[372376]: 2025-10-11 09:12:44.552292139 +0000 UTC m=+0.040537762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:44 np0005481065 systemd[1]: Started libpod-conmon-6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c.scope.
Oct 11 05:12:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:44 np0005481065 podman[372376]: 2025-10-11 09:12:44.713594404 +0000 UTC m=+0.201839977 container init 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:12:44 np0005481065 podman[372376]: 2025-10-11 09:12:44.733946007 +0000 UTC m=+0.222191540 container start 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:12:44 np0005481065 podman[372376]: 2025-10-11 09:12:44.739703836 +0000 UTC m=+0.227949409 container attach 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.013 2 DEBUG nova.compute.manager [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.013 2 DEBUG nova.compute.manager [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing instance network info cache due to event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.014 2 DEBUG oslo_concurrency.lockutils [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.014 2 DEBUG oslo_concurrency.lockutils [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.014 2 DEBUG nova.network.neutron [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.136 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.137 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.137 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.137 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.138 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.139 2 INFO nova.compute.manager [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Terminating instance#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.140 2 DEBUG nova.compute.manager [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 kernel: tap0ae3e094-fe (unregistering): left promiscuous mode
Oct 11 05:12:45 np0005481065 NetworkManager[44960]: <info>  [1760173965.2054] device (tap0ae3e094-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:45Z|00995|binding|INFO|Releasing lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb from this chassis (sb_readonly=0)
Oct 11 05:12:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:45Z|00996|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb down in Southbound
Oct 11 05:12:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:45Z|00997|binding|INFO|Removing iface tap0ae3e094-fe ovn-installed in OVS
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.233 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.235 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 unbound from our chassis#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.238 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fba72350-6e1a-4542-9fe5-a0774a411936, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2198b792-6611-494a-9724-a612433f1d8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace which is not needed anymore#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 11 05:12:45 np0005481065 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000069.scope: Consumed 12.775s CPU time.
Oct 11 05:12:45 np0005481065 systemd-machined[215705]: Machine qemu-126-instance-00000069 terminated.
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.383 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance destroyed successfully.#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.384 2 DEBUG nova.objects.instance [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.419 2 DEBUG nova.virt.libvirt.vif [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:12:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:12:26Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.420 2 DEBUG nova.network.os_vif_util [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.420 2 DEBUG nova.network.os_vif_util [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.421 2 DEBUG os_vif [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ae3e094-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.428 2 INFO os_vif [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')#033[00m
Oct 11 05:12:45 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : haproxy version is 2.8.14-c23fe91
Oct 11 05:12:45 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : path to executable is /usr/sbin/haproxy
Oct 11 05:12:45 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [WARNING]  (369044) : Exiting Master process...
Oct 11 05:12:45 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [WARNING]  (369044) : Exiting Master process...
Oct 11 05:12:45 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [ALERT]    (369044) : Current worker (369046) exited with code 143 (Terminated)
Oct 11 05:12:45 np0005481065 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [WARNING]  (369044) : All workers exited. Exiting... (0)
Oct 11 05:12:45 np0005481065 systemd[1]: libpod-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1.scope: Deactivated successfully.
Oct 11 05:12:45 np0005481065 podman[372427]: 2025-10-11 09:12:45.465731382 +0000 UTC m=+0.077664450 container died cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]: {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:    "0": [
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:        {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "devices": [
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "/dev/loop3"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            ],
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_name": "ceph_lv0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_size": "21470642176",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "name": "ceph_lv0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "tags": {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cluster_name": "ceph",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.crush_device_class": "",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.encrypted": "0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osd_id": "0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.type": "block",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.vdo": "0"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            },
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "type": "block",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "vg_name": "ceph_vg0"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:        }
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:    ],
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:    "1": [
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:        {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "devices": [
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "/dev/loop4"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            ],
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_name": "ceph_lv1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_size": "21470642176",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "name": "ceph_lv1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "tags": {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cluster_name": "ceph",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.crush_device_class": "",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.encrypted": "0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osd_id": "1",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.type": "block",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.vdo": "0"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            },
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "type": "block",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "vg_name": "ceph_vg1"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:        }
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:    ],
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:    "2": [
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:        {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "devices": [
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "/dev/loop5"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            ],
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_name": "ceph_lv2",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_size": "21470642176",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "name": "ceph_lv2",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "tags": {
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.cluster_name": "ceph",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.crush_device_class": "",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.encrypted": "0",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osd_id": "2",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.type": "block",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:                "ceph.vdo": "0"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            },
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "type": "block",
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:            "vg_name": "ceph_vg2"
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:        }
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]:    ]
Oct 11 05:12:45 np0005481065 quirky_chaum[372393]: }
Oct 11 05:12:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c7f85520362d24f3ac3e0a81a81c4d2407edd9ee81684410d35e390bcf954224-merged.mount: Deactivated successfully.
Oct 11 05:12:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1-userdata-shm.mount: Deactivated successfully.
Oct 11 05:12:45 np0005481065 systemd[1]: libpod-6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c.scope: Deactivated successfully.
Oct 11 05:12:45 np0005481065 podman[372376]: 2025-10-11 09:12:45.519399068 +0000 UTC m=+1.007644621 container died 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:12:45 np0005481065 podman[372427]: 2025-10-11 09:12:45.535453202 +0000 UTC m=+0.147386270 container cleanup cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:12:45 np0005481065 systemd[1]: libpod-conmon-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1.scope: Deactivated successfully.
Oct 11 05:12:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b-merged.mount: Deactivated successfully.
Oct 11 05:12:45 np0005481065 podman[372376]: 2025-10-11 09:12:45.587339148 +0000 UTC m=+1.075584671 container remove 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:12:45 np0005481065 systemd[1]: libpod-conmon-6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c.scope: Deactivated successfully.
Oct 11 05:12:45 np0005481065 podman[372497]: 2025-10-11 09:12:45.640207762 +0000 UTC m=+0.067399257 container remove cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.655 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f247cab7-3750-40e7-99e0-83078f28bbc4]: (4, ('Sat Oct 11 09:12:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1)\ncca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1\nSat Oct 11 09:12:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1)\ncca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc68744d-b468-4e2e-bbf3-6ea0fc8ec1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.659 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 kernel: tapfba72350-60: left promiscuous mode
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.678 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[189cec1f-0008-4ac3-8ef1-918030db234b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.704 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb194699-3cf1-4e95-8154-690fb260f8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[267670bb-f8b7-484c-aa72-1f59ad19e697]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.719 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d641136f-5141-4070-aa05-2ef5c1044879]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578996, 'reachable_time': 23273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372541, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.721 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:12:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.721 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb68f68-7067-4df3-b9bf-1466015c4ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:12:45 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfba72350\x2d6e1a\x2d4542\x2d9fe5\x2da0774a411936.mount: Deactivated successfully.
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.791 2 DEBUG nova.compute.manager [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.791 2 DEBUG oslo_concurrency.lockutils [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG oslo_concurrency.lockutils [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG oslo_concurrency.lockutils [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG nova.compute.manager [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG nova.compute.manager [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.860 2 INFO nova.virt.libvirt.driver [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting instance files /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.861 2 INFO nova.virt.libvirt.driver [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deletion of /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del complete#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.936 2 INFO nova.compute.manager [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.937 2 DEBUG oslo.service.loopingcall [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.938 2 DEBUG nova.compute.manager [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:12:45 np0005481065 nova_compute[260935]: 2025-10-11 09:12:45.938 2 DEBUG nova.network.neutron [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.353750022 +0000 UTC m=+0.066508082 container create b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:12:46 np0005481065 systemd[1]: Started libpod-conmon-b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386.scope.
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.327248358 +0000 UTC m=+0.040006418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.455710604 +0000 UTC m=+0.168468704 container init b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.467091539 +0000 UTC m=+0.179849589 container start b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.47074644 +0000 UTC m=+0.183504610 container attach b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 05:12:46 np0005481065 exciting_chatelet[372673]: 167 167
Oct 11 05:12:46 np0005481065 systemd[1]: libpod-b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386.scope: Deactivated successfully.
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.477184979 +0000 UTC m=+0.189943069 container died b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:12:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ccc4fbb4d6cff7e2413430ea8793c414c3476aef0a2e71c4f3621404ca12387c-merged.mount: Deactivated successfully.
Oct 11 05:12:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:12:46 np0005481065 podman[372656]: 2025-10-11 09:12:46.525346932 +0000 UTC m=+0.238105012 container remove b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:12:46 np0005481065 systemd[1]: libpod-conmon-b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386.scope: Deactivated successfully.
Oct 11 05:12:46 np0005481065 podman[372697]: 2025-10-11 09:12:46.776981547 +0000 UTC m=+0.061198045 container create 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:12:46 np0005481065 systemd[1]: Started libpod-conmon-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope.
Oct 11 05:12:46 np0005481065 podman[372697]: 2025-10-11 09:12:46.754795603 +0000 UTC m=+0.039012151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:12:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:12:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:12:46 np0005481065 podman[372697]: 2025-10-11 09:12:46.881625613 +0000 UTC m=+0.165842141 container init 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:12:46 np0005481065 podman[372697]: 2025-10-11 09:12:46.889031618 +0000 UTC m=+0.173248116 container start 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:12:46 np0005481065 podman[372697]: 2025-10-11 09:12:46.892595937 +0000 UTC m=+0.176812435 container attach 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.206 2 DEBUG nova.network.neutron [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.246 2 INFO nova.compute.manager [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 1.31 seconds to deallocate network for instance.#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.325 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.325 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.424 2 DEBUG nova.compute.manager [req-0ec9682b-abe9-4e62-81fe-84ac3cfdf630 req-1bca8125-a39d-4329-ba2f-a35ad0229401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-deleted-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.814 2 DEBUG oslo_concurrency.processutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.876 2 DEBUG nova.network.neutron [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updated VIF entry in instance network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.878 2 DEBUG nova.network.neutron [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:12:47 np0005481065 clever_booth[372714]: {
Oct 11 05:12:47 np0005481065 clever_booth[372714]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "osd_id": 2,
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "type": "bluestore"
Oct 11 05:12:47 np0005481065 clever_booth[372714]:    },
Oct 11 05:12:47 np0005481065 clever_booth[372714]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "osd_id": 0,
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "type": "bluestore"
Oct 11 05:12:47 np0005481065 clever_booth[372714]:    },
Oct 11 05:12:47 np0005481065 clever_booth[372714]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "osd_id": 1,
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:12:47 np0005481065 clever_booth[372714]:        "type": "bluestore"
Oct 11 05:12:47 np0005481065 clever_booth[372714]:    }
Oct 11 05:12:47 np0005481065 clever_booth[372714]: }
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.958 2 DEBUG oslo_concurrency.lockutils [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:12:47 np0005481065 systemd[1]: libpod-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope: Deactivated successfully.
Oct 11 05:12:47 np0005481065 systemd[1]: libpod-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope: Consumed 1.073s CPU time.
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.981 2 DEBUG nova.compute.manager [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.982 2 DEBUG oslo_concurrency.lockutils [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.982 2 DEBUG oslo_concurrency.lockutils [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.983 2 DEBUG oslo_concurrency.lockutils [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.983 2 DEBUG nova.compute.manager [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:12:47 np0005481065 nova_compute[260935]: 2025-10-11 09:12:47.983 2 WARNING nova.compute.manager [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:12:48 np0005481065 podman[372748]: 2025-10-11 09:12:48.012585948 +0000 UTC m=+0.035893065 container died 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:12:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c-merged.mount: Deactivated successfully.
Oct 11 05:12:48 np0005481065 podman[372748]: 2025-10-11 09:12:48.065924514 +0000 UTC m=+0.089231611 container remove 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:12:48 np0005481065 systemd[1]: libpod-conmon-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope: Deactivated successfully.
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:48 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 765d1411-4811-4afa-9fc9-0d80c03d3d07 does not exist
Oct 11 05:12:48 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev aaf70e23-50fb-464f-a3f2-aaa56030cc1a does not exist
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440920978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:12:48 np0005481065 nova_compute[260935]: 2025-10-11 09:12:48.390 2 DEBUG oslo_concurrency.processutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:12:48 np0005481065 nova_compute[260935]: 2025-10-11 09:12:48.400 2 DEBUG nova.compute.provider_tree [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:12:48 np0005481065 nova_compute[260935]: 2025-10-11 09:12:48.418 2 DEBUG nova.scheduler.client.report [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:12:48 np0005481065 nova_compute[260935]: 2025-10-11 09:12:48.443 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:48 np0005481065 nova_compute[260935]: 2025-10-11 09:12:48.470 2 INFO nova.scheduler.client.report [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e#033[00m
Oct 11 05:12:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:12:48 np0005481065 nova_compute[260935]: 2025-10-11 09:12:48.563 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:12:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:12:50 np0005481065 nova_compute[260935]: 2025-10-11 09:12:50.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:50 np0005481065 nova_compute[260935]: 2025-10-11 09:12:50.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 108 KiB/s wr, 49 op/s
Oct 11 05:12:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:51Z|00998|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:12:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:51Z|00999|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:12:51 np0005481065 nova_compute[260935]: 2025-10-11 09:12:51.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:51Z|01000|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:12:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:12:51Z|01001|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:12:51 np0005481065 nova_compute[260935]: 2025-10-11 09:12:51.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 108 KiB/s wr, 49 op/s
Oct 11 05:12:53 np0005481065 podman[372834]: 2025-10-11 09:12:53.81046951 +0000 UTC m=+0.095483824 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:12:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 19 KiB/s wr, 33 op/s
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:12:54
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', '.mgr']
Oct 11 05:12:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:12:55 np0005481065 nova_compute[260935]: 2025-10-11 09:12:55.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:12:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:12:55 np0005481065 nova_compute[260935]: 2025-10-11 09:12:55.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:12:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 11 05:12:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 11 05:12:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:12:59 np0005481065 podman[372853]: 2025-10-11 09:12:59.778448969 +0000 UTC m=+0.077719503 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 05:13:00 np0005481065 nova_compute[260935]: 2025-10-11 09:13:00.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:00 np0005481065 nova_compute[260935]: 2025-10-11 09:13:00.381 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173965.3812284, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:13:00 np0005481065 nova_compute[260935]: 2025-10-11 09:13:00.382 2 INFO nova.compute.manager [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:13:00 np0005481065 nova_compute[260935]: 2025-10-11 09:13:00.413 2 DEBUG nova.compute.manager [None req-a55a8cd3-09a1-468a-a91f-edce24b0989b - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:00 np0005481065 nova_compute[260935]: 2025-10-11 09:13:00.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:04 np0005481065 podman[372873]: 2025-10-11 09:13:04.793048058 +0000 UTC m=+0.096056720 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 05:13:04 np0005481065 podman[372874]: 2025-10-11 09:13:04.857093451 +0000 UTC m=+0.139620466 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:13:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:13:05 np0005481065 nova_compute[260935]: 2025-10-11 09:13:05.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:05 np0005481065 nova_compute[260935]: 2025-10-11 09:13:05.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.700436) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987700530, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1712, "num_deletes": 253, "total_data_size": 2756032, "memory_usage": 2788768, "flush_reason": "Manual Compaction"}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987718076, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2684307, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43364, "largest_seqno": 45075, "table_properties": {"data_size": 2676323, "index_size": 4862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16416, "raw_average_key_size": 20, "raw_value_size": 2660367, "raw_average_value_size": 3276, "num_data_blocks": 216, "num_entries": 812, "num_filter_entries": 812, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173819, "oldest_key_time": 1760173819, "file_creation_time": 1760173987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17688 microseconds, and 11220 cpu microseconds.
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.718136) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2684307 bytes OK
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.718161) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.719738) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.719764) EVENT_LOG_v1 {"time_micros": 1760173987719755, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.719792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2748643, prev total WAL file size 2748643, number of live WAL files 2.
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.721413) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2621KB)], [101(7615KB)]
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987721479, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10482511, "oldest_snapshot_seqno": -1}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6529 keys, 8786356 bytes, temperature: kUnknown
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987779287, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8786356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8742680, "index_size": 26220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169799, "raw_average_key_size": 26, "raw_value_size": 8625435, "raw_average_value_size": 1321, "num_data_blocks": 1026, "num_entries": 6529, "num_filter_entries": 6529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.779672) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8786356 bytes
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.781355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.9 rd, 151.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7050, records dropped: 521 output_compression: NoCompression
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.781384) EVENT_LOG_v1 {"time_micros": 1760173987781371, "job": 60, "event": "compaction_finished", "compaction_time_micros": 57949, "compaction_time_cpu_micros": 41810, "output_level": 6, "num_output_files": 1, "total_output_size": 8786356, "num_input_records": 7050, "num_output_records": 6529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987782658, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987785427, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.721267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:13:07 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:13:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:08 np0005481065 nova_compute[260935]: 2025-10-11 09:13:08.787 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:10 np0005481065 nova_compute[260935]: 2025-10-11 09:13:10.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:10 np0005481065 nova_compute[260935]: 2025-10-11 09:13:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.054 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.055 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.078 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.186 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.187 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.200 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.201 2 INFO nova.compute.claims [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:13:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:15.208 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.419 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:13:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103627397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.926 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.932 2 DEBUG nova.compute.provider_tree [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.950 2 DEBUG nova.scheduler.client.report [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.973 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:15 np0005481065 nova_compute[260935]: 2025-10-11 09:13:15.974 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.025 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.026 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.054 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.089 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.203 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.205 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.206 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Creating image(s)#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.241 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.280 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.317 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.322 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.370 2 DEBUG nova.policy [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.416 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.417 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.418 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.418 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.449 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.454 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4b5bb809-c821-466d-9a47-b5a6aa337212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.789 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4b5bb809-c821-466d-9a47-b5a6aa337212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:16 np0005481065 nova_compute[260935]: 2025-10-11 09:13:16.889 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.029 2 DEBUG nova.objects.instance [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.058 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.059 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Ensure instance console log exists: /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.061 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.062 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.062 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:17 np0005481065 nova_compute[260935]: 2025-10-11 09:13:17.471 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Successfully created port: 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.501 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Successfully updated port: 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.522 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.522 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.522 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:13:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.699 2 DEBUG nova.compute.manager [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.700 2 DEBUG nova.compute.manager [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing instance network info cache due to event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.700 2 DEBUG oslo_concurrency.lockutils [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:13:18 np0005481065 nova_compute[260935]: 2025-10-11 09:13:18.871 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:13:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.940 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:14:7e 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4e40f64-b9d8-48f8-9bae-4a183b1a906f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=697c1af8-d05b-4380-bc71-c07e452471c7) old=Port_Binding(mac=['fa:16:3e:5c:14:7e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.942 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 697c1af8-d05b-4380-bc71-c07e452471c7 in datapath 38f0ab9f-8af5-45d6-b50f-d8f0d954376a updated#033[00m
Oct 11 05:13:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.946 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38f0ab9f-8af5-45d6-b50f-d8f0d954376a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:13:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.947 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75ef80d6-e79e-4689-afe5-1a9b6cf1da88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.193 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.229 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.230 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance network_info: |[{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.231 2 DEBUG oslo_concurrency.lockutils [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.231 2 DEBUG nova.network.neutron [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.237 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start _get_guest_xml network_info=[{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.246 2 WARNING nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.252 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.252 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.264 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.265 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.265 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.266 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.267 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.267 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.267 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.268 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.268 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.268 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.269 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.269 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.269 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.270 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.274 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:13:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:13:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231420734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.731 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.765 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:13:20 np0005481065 nova_compute[260935]: 2025-10-11 09:13:20.771 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:13:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174029542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.271 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.275 2 DEBUG nova.virt.libvirt.vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:13:16Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.276 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.277 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.279 2 DEBUG nova.objects.instance [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.297 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <uuid>4b5bb809-c821-466d-9a47-b5a6aa337212</uuid>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <name>instance-0000006a</name>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-429546179</nova:name>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:13:20</nova:creationTime>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <nova:port uuid="072bf760-1ffd-4b15-bdd9-58b9b670feb8">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <entry name="serial">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <entry name="uuid">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c7:58:7f"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <target dev="tap072bf760-1f"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/console.log" append="off"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:13:21 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:13:21 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:13:21 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:13:21 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.299 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Preparing to wait for external event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.299 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.300 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.301 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.302 2 DEBUG nova.virt.libvirt.vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:13:16Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.302 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.303 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.304 2 DEBUG os_vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.306 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.307 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap072bf760-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap072bf760-1f, col_values=(('external_ids', {'iface-id': '072bf760-1ffd-4b15-bdd9-58b9b670feb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:58:7f', 'vm-uuid': '4b5bb809-c821-466d-9a47-b5a6aa337212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:21 np0005481065 NetworkManager[44960]: <info>  [1760174001.3475] manager: (tap072bf760-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.358 2 INFO os_vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.425 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.425 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.426 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:c7:58:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.427 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Using config drive#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.453 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.838 2 DEBUG nova.network.neutron [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updated VIF entry in instance network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.839 2 DEBUG nova.network.neutron [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:13:21 np0005481065 nova_compute[260935]: 2025-10-11 09:13:21.857 2 DEBUG oslo_concurrency.lockutils [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.009 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Creating config drive at /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.019 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_d2wzhx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.188 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_d2wzhx" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.226 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.231 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.390 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.393 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deleting local config drive /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config because it was imported into RBD.
Oct 11 05:13:22 np0005481065 kernel: tap072bf760-1f: entered promiscuous mode
Oct 11 05:13:22 np0005481065 NetworkManager[44960]: <info>  [1760174002.4720] manager: (tap072bf760-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/414)
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:13:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:22Z|01002|binding|INFO|Claiming lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 for this chassis.
Oct 11 05:13:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:22Z|01003|binding|INFO|072bf760-1ffd-4b15-bdd9-58b9b670feb8: Claiming fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 05:13:22 np0005481065 systemd-udevd[373238]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:13:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.536 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.538 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f bound to our chassis
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.541 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 05:13:22 np0005481065 NetworkManager[44960]: <info>  [1760174002.5493] device (tap072bf760-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:13:22 np0005481065 NetworkManager[44960]: <info>  [1760174002.5521] device (tap072bf760-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95a0f141-8f23-4019-9169-7231c5db8685]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.562 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap824eb213-11 in ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 05:13:22 np0005481065 systemd-machined[215705]: New machine qemu-127-instance-0000006a.
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.565 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap824eb213-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c331db11-ec38-4ec5-be21-1a018e1e677a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.567 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6a5d7f-86f7-4b00-8f1a-e06cfff9a7a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.585 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a2367b3f-9ba5-4dc9-b7e8-a59c31f3b7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 systemd[1]: Started Virtual Machine qemu-127-instance-0000006a.
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d8e05c-d32c-4ed9-82da-3c60d2027fc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:22Z|01004|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 ovn-installed in OVS
Oct 11 05:13:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:22Z|01005|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 up in Southbound
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.647 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49b20e7b-53d2-4493-ac1c-21bd15f7ac8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 systemd-udevd[373244]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:13:22 np0005481065 NetworkManager[44960]: <info>  [1760174002.6577] manager: (tap824eb213-10): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.655 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbe27b1-7c9a-4ccb-8ab6-344dfc9fad86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.710 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[17f8fbbc-4c82-494a-8405-4d31e68065da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.715 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de3ff4ee-006e-4935-91b9-11182626352b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 NetworkManager[44960]: <info>  [1760174002.7514] device (tap824eb213-10): carrier: link connected
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ab99c1e1-342a-4587-94ec-1a1307026c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[711afd9a-33bc-4daa-a19f-cd6c650e8caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584745, 'reachable_time': 41318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373274, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2151a5f3-dbcc-464c-852a-8c3858bce448]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:a71c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584745, 'tstamp': 584745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373275, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.830 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc71ca3c-64ee-4ab2-b952-9e73d0072dfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584745, 'reachable_time': 41318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373276, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.870 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cce818-2c1b-4b87-b895-b6c53250a9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0083b82f-6e5f-4cc9-9473-54d5340b01f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.956 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.957 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.958 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap824eb213-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:13:22 np0005481065 kernel: tap824eb213-10: entered promiscuous mode
Oct 11 05:13:22 np0005481065 NetworkManager[44960]: <info>  [1760174002.9619] manager: (tap824eb213-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:13:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.967 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap824eb213-10, col_values=(('external_ids', {'iface-id': 'f4946461-9ac5-43c5-81c4-5ba2f19b9f2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:13:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:22Z|01006|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:13:22 np0005481065 nova_compute[260935]: 2025-10-11 09:13:22.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.999 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:23.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e968643-f6b6-4231-8e81-79b970bf23f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:23.003 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 05:13:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:23.005 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'env', 'PROCESS_TAG=haproxy-824eb213-11b4-4a1c-ba73-2f66376f326f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/824eb213-11b4-4a1c-ba73-2f66376f326f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 05:13:23 np0005481065 nova_compute[260935]: 2025-10-11 09:13:23.010 2 DEBUG nova.compute.manager [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:13:23 np0005481065 nova_compute[260935]: 2025-10-11 09:13:23.011 2 DEBUG oslo_concurrency.lockutils [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:13:23 np0005481065 nova_compute[260935]: 2025-10-11 09:13:23.011 2 DEBUG oslo_concurrency.lockutils [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:13:23 np0005481065 nova_compute[260935]: 2025-10-11 09:13:23.012 2 DEBUG oslo_concurrency.lockutils [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:13:23 np0005481065 nova_compute[260935]: 2025-10-11 09:13:23.013 2 DEBUG nova.compute.manager [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Processing event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:13:23 np0005481065 podman[373308]: 2025-10-11 09:13:23.482103174 +0000 UTC m=+0.061485643 container create 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:13:23 np0005481065 systemd[1]: Started libpod-conmon-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483.scope.
Oct 11 05:13:23 np0005481065 podman[373308]: 2025-10-11 09:13:23.453929564 +0000 UTC m=+0.033312023 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:13:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac5649964aaf0918143f43f0172e8f20ee7f3bca61099013f301b1e0f7916bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:23 np0005481065 podman[373308]: 2025-10-11 09:13:23.585410063 +0000 UTC m=+0.164792552 container init 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:13:23 np0005481065 podman[373308]: 2025-10-11 09:13:23.592083268 +0000 UTC m=+0.171465707 container start 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:13:23 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : New worker (373371) forked
Oct 11 05:13:23 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : Loading success.
Oct 11 05:13:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.023 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.026 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174004.0253658, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.026 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Started (Lifecycle Event)#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.031 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.036 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance spawned successfully.#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.037 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.057 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.066 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.073 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.074 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.074 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.075 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.076 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.077 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.133 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.134 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174004.0255444, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.134 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.168 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.173 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174004.0282097, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.173 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.182 2 INFO nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 7.98 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.183 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.206 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.253 2 INFO nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 9.11 seconds to build instance.#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.271 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:24 np0005481065 nova_compute[260935]: 2025-10-11 09:13:24.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:13:24 np0005481065 podman[373380]: 2025-10-11 09:13:24.810175454 +0000 UTC m=+0.096289456 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:13:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.336 2 DEBUG nova.compute.manager [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.338 2 DEBUG oslo_concurrency.lockutils [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.338 2 DEBUG oslo_concurrency.lockutils [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.338 2 DEBUG oslo_concurrency.lockutils [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.339 2 DEBUG nova.compute.manager [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.339 2 WARNING nova.compute.manager [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.718 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:25 np0005481065 nova_compute[260935]: 2025-10-11 09:13:25.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:26.507 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:26.509 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:13:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 05:13:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:13:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2181799174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:13:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:13:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2181799174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:13:26 np0005481065 nova_compute[260935]: 2025-10-11 09:13:26.733 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:13:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589256496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.239 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.337 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.355 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.356 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.664 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.666 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2924MB free_disk=59.80990982055664GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.667 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:27 np0005481065 nova_compute[260935]: 2025-10-11 09:13:27.667 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:28Z|01007|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 05:13:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:28Z|01008|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:13:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:28Z|01009|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:13:28 np0005481065 NetworkManager[44960]: <info>  [1760174008.0127] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct 11 05:13:28 np0005481065 NetworkManager[44960]: <info>  [1760174008.0139] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 11 05:13:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:28Z|01010|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 05:13:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:28Z|01011|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:13:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:28Z|01012|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.091 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.091 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.092 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.092 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 4b5bb809-c821-466d-9a47-b5a6aa337212 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.093 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.093 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.351 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:13:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:13:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078186894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.843 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.850 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.869 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.898 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.977 2 DEBUG nova.compute.manager [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.977 2 DEBUG nova.compute.manager [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing instance network info cache due to event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.978 2 DEBUG oslo_concurrency.lockutils [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.978 2 DEBUG oslo_concurrency.lockutils [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:13:28 np0005481065 nova_compute[260935]: 2025-10-11 09:13:28.979 2 DEBUG nova.network.neutron [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:13:29 np0005481065 nova_compute[260935]: 2025-10-11 09:13:29.898 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:29 np0005481065 nova_compute[260935]: 2025-10-11 09:13:29.934 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:29 np0005481065 nova_compute[260935]: 2025-10-11 09:13:29.934 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:13:30 np0005481065 nova_compute[260935]: 2025-10-11 09:13:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:30 np0005481065 nova_compute[260935]: 2025-10-11 09:13:30.276 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:13:30 np0005481065 nova_compute[260935]: 2025-10-11 09:13:30.276 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:13:30 np0005481065 nova_compute[260935]: 2025-10-11 09:13:30.276 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:13:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:13:30 np0005481065 podman[373451]: 2025-10-11 09:13:30.790798663 +0000 UTC m=+0.082387811 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:31.512 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.863 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.882 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.882 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.882 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.930 2 DEBUG nova.network.neutron [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updated VIF entry in instance network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.930 2 DEBUG nova.network.neutron [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:13:31 np0005481065 nova_compute[260935]: 2025-10-11 09:13:31.955 2 DEBUG oslo_concurrency.lockutils [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:13:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 124 op/s
Oct 11 05:13:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 143 op/s
Oct 11 05:13:35 np0005481065 nova_compute[260935]: 2025-10-11 09:13:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:35Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 05:13:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:35Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 05:13:35 np0005481065 podman[373473]: 2025-10-11 09:13:35.779410412 +0000 UTC m=+0.086470945 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:13:35 np0005481065 podman[373474]: 2025-10-11 09:13:35.831953396 +0000 UTC m=+0.127810298 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:13:36 np0005481065 nova_compute[260935]: 2025-10-11 09:13:36.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 142 op/s
Oct 11 05:13:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 407 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 205 op/s
Oct 11 05:13:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:40 np0005481065 nova_compute[260935]: 2025-10-11 09:13:40.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 407 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.201 2 INFO nova.compute.manager [None req-d1fd82f4-8d7b-42ef-a107-375a860fafd4 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Get console output#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.209 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.582 2 DEBUG oslo_concurrency.lockutils [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.583 2 DEBUG oslo_concurrency.lockutils [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.583 2 DEBUG nova.compute.manager [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.588 2 DEBUG nova.compute.manager [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.590 2 DEBUG nova.objects.instance [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:41 np0005481065 nova_compute[260935]: 2025-10-11 09:13:41.618 2 DEBUG nova.virt.libvirt.driver [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 05:13:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 11 05:13:42 np0005481065 nova_compute[260935]: 2025-10-11 09:13:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:42 np0005481065 nova_compute[260935]: 2025-10-11 09:13:42.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:13:42 np0005481065 nova_compute[260935]: 2025-10-11 09:13:42.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:13:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:43 np0005481065 kernel: tap072bf760-1f (unregistering): left promiscuous mode
Oct 11 05:13:43 np0005481065 NetworkManager[44960]: <info>  [1760174023.9654] device (tap072bf760-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:13:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:43Z|01013|binding|INFO|Releasing lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 from this chassis (sb_readonly=0)
Oct 11 05:13:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:43Z|01014|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 down in Southbound
Oct 11 05:13:43 np0005481065 nova_compute[260935]: 2025-10-11 09:13:43.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:43Z|01015|binding|INFO|Removing iface tap072bf760-1f ovn-installed in OVS
Oct 11 05:13:43 np0005481065 nova_compute[260935]: 2025-10-11 09:13:43.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.003 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.005 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f unbound from our chassis#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.009 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 824eb213-11b4-4a1c-ba73-2f66376f326f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76324ef5-7a6f-40d3-a08b-9f165a7fe86e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.012 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace which is not needed anymore#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:44 np0005481065 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 11 05:13:44 np0005481065 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d0000006a.scope: Consumed 13.451s CPU time.
Oct 11 05:13:44 np0005481065 systemd-machined[215705]: Machine qemu-127-instance-0000006a terminated.
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:44 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : haproxy version is 2.8.14-c23fe91
Oct 11 05:13:44 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : path to executable is /usr/sbin/haproxy
Oct 11 05:13:44 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [WARNING]  (373369) : Exiting Master process...
Oct 11 05:13:44 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [WARNING]  (373369) : Exiting Master process...
Oct 11 05:13:44 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [ALERT]    (373369) : Current worker (373371) exited with code 143 (Terminated)
Oct 11 05:13:44 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [WARNING]  (373369) : All workers exited. Exiting... (0)
Oct 11 05:13:44 np0005481065 systemd[1]: libpod-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483.scope: Deactivated successfully.
Oct 11 05:13:44 np0005481065 podman[373541]: 2025-10-11 09:13:44.25218248 +0000 UTC m=+0.095715231 container died 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:13:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483-userdata-shm.mount: Deactivated successfully.
Oct 11 05:13:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8ac5649964aaf0918143f43f0172e8f20ee7f3bca61099013f301b1e0f7916bd-merged.mount: Deactivated successfully.
Oct 11 05:13:44 np0005481065 podman[373541]: 2025-10-11 09:13:44.325557221 +0000 UTC m=+0.169089922 container cleanup 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:13:44 np0005481065 systemd[1]: libpod-conmon-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483.scope: Deactivated successfully.
Oct 11 05:13:44 np0005481065 podman[373582]: 2025-10-11 09:13:44.43644059 +0000 UTC m=+0.070181454 container remove 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[167216b9-7578-4c3d-a069-c3028d9d0d0a]: (4, ('Sat Oct 11 09:13:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483)\n78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483\nSat Oct 11 09:13:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483)\n78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.451 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1771f9-80ec-41e6-9d45-58d39bf7f5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.452 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:44 np0005481065 kernel: tap824eb213-10: left promiscuous mode
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.487 2 DEBUG nova.compute.manager [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.489 2 DEBUG oslo_concurrency.lockutils [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.490 2 DEBUG oslo_concurrency.lockutils [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.491 2 DEBUG oslo_concurrency.lockutils [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.491 2 DEBUG nova.compute.manager [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.492 2 WARNING nova.compute.manager [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state powering-off.#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f130ca7-66a1-4b8d-88b3-234c667fd7eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.523 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dae36fcc-c8a6-410c-8a9f-9d9b5dd41bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.526 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[278a7e07-4c57-4815-8e3f-3c3eb2336619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d3893abf-f868-4dbb-8cfe-9a8b3738b45a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584734, 'reachable_time': 30150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373600, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 systemd[1]: run-netns-ovnmeta\x2d824eb213\x2d11b4\x2d4a1c\x2dba73\x2d2f66376f326f.mount: Deactivated successfully.
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.567 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:13:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.567 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[25853aeb-2171-4fbb-8795-fd6694e027b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.665 2 INFO nova.virt.libvirt.driver [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance shutdown successfully after 3 seconds.#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.674 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance destroyed successfully.#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.675 2 DEBUG nova.objects.instance [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.696 2 DEBUG nova.compute.manager [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:13:44 np0005481065 nova_compute[260935]: 2025-10-11 09:13:44.765 2 DEBUG oslo_concurrency.lockutils [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:45 np0005481065 nova_compute[260935]: 2025-10-11 09:13:45.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.629 2 DEBUG nova.compute.manager [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.630 2 DEBUG oslo_concurrency.lockutils [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.630 2 DEBUG oslo_concurrency.lockutils [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.631 2 DEBUG oslo_concurrency.lockutils [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.631 2 DEBUG nova.compute.manager [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:13:46 np0005481065 nova_compute[260935]: 2025-10-11 09:13:46.632 2 WARNING nova.compute.manager [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state stopped and task_state None.#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.672 2 INFO nova.compute.manager [None req-7f30aee0-7767-42ae-a7e9-21ec7d59af3e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Get console output#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.708 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.939 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.966 2 DEBUG oslo_concurrency.lockutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.967 2 DEBUG oslo_concurrency.lockutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.968 2 DEBUG nova.network.neutron [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:13:47 np0005481065 nova_compute[260935]: 2025-10-11 09:13:47.968 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'info_cache' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 11 05:13:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:13:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:13:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.887 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01bd402c-a60d-400e-bb8e-f9e8f782e47c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f) old=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f in datapath 77797f2a-6d47-4af7-aba0-0c6e896bb489 updated#033[00m
Oct 11 05:13:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.892 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77797f2a-6d47-4af7-aba0-0c6e896bb489, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:13:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1076f3b1-39fd-4bbd-9054-6a38f04fecad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.441 2 DEBUG nova.network.neutron [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.464 2 DEBUG oslo_concurrency.lockutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.494 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance destroyed successfully.#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.495 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.506 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.524 2 DEBUG nova.virt.libvirt.vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:44Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.525 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.526 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.526 2 DEBUG os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap072bf760-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.540 2 INFO os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.550 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start _get_guest_xml network_info=[{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.556 2 WARNING nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.564 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.565 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.569 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.569 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.570 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.570 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.571 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.571 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.572 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.572 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.572 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.573 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.573 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.573 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.574 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.574 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.575 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:49 np0005481065 nova_compute[260935]: 2025-10-11 09:13:49.592 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5b97b3ec-f186-422f-b1b8-6a9904163145 does not exist
Oct 11 05:13:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cac8406a-d85c-4ecc-9f6a-d27b5946f7c5 does not exist
Oct 11 05:13:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a607b788-caf3-4841-9c52-e2433ff9f578 does not exist
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:13:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885213427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.049 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.108 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511507650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.641 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.643 2 DEBUG nova.virt.libvirt.vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:44Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.644 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.645 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.647 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.672 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <uuid>4b5bb809-c821-466d-9a47-b5a6aa337212</uuid>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <name>instance-0000006a</name>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-429546179</nova:name>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:13:49</nova:creationTime>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <nova:port uuid="072bf760-1ffd-4b15-bdd9-58b9b670feb8">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <entry name="serial">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <entry name="uuid">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c7:58:7f"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <target dev="tap072bf760-1f"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/console.log" append="off"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <input type="keyboard" bus="usb"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:13:50 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:13:50 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:13:50 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:13:50 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.674 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.675 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.676 2 DEBUG nova.virt.libvirt.vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:44Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.676 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.677 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.677 2 DEBUG os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap072bf760-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap072bf760-1f, col_values=(('external_ids', {'iface-id': '072bf760-1ffd-4b15-bdd9-58b9b670feb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:58:7f', 'vm-uuid': '4b5bb809-c821-466d-9a47-b5a6aa337212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:50 np0005481065 NetworkManager[44960]: <info>  [1760174030.6859] manager: (tap072bf760-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.694 2 INFO os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')#033[00m
Oct 11 05:13:50 np0005481065 kernel: tap072bf760-1f: entered promiscuous mode
Oct 11 05:13:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:50Z|01016|binding|INFO|Claiming lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 for this chassis.
Oct 11 05:13:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:50Z|01017|binding|INFO|072bf760-1ffd-4b15-bdd9-58b9b670feb8: Claiming fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 05:13:50 np0005481065 NetworkManager[44960]: <info>  [1760174030.8063] manager: (tap072bf760-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.815 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.817 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f bound to our chassis#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.819 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 824eb213-11b4-4a1c-ba73-2f66376f326f#033[00m
Oct 11 05:13:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:50Z|01018|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 ovn-installed in OVS
Oct 11 05:13:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:50Z|01019|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 up in Southbound
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 systemd-udevd[374083]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:13:50 np0005481065 podman[374060]: 2025-10-11 09:13:50.839146821 +0000 UTC m=+0.069807253 container create 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 05:13:50 np0005481065 nova_compute[260935]: 2025-10-11 09:13:50.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.841 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[29f2dcea-4b0a-4140-9586-3b8e665ede0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.842 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap824eb213-11 in ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.844 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap824eb213-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8dfe82-e9e0-4ecb-af62-b4d4cfa9d8d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.845 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0a47ce-d909-4981-be84-4d240931e064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 NetworkManager[44960]: <info>  [1760174030.8550] device (tap072bf760-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:13:50 np0005481065 NetworkManager[44960]: <info>  [1760174030.8567] device (tap072bf760-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.867 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2c74180d-02c2-4b19-a79f-7f81ada88931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 systemd-machined[215705]: New machine qemu-128-instance-0000006a.
Oct 11 05:13:50 np0005481065 podman[374060]: 2025-10-11 09:13:50.788931321 +0000 UTC m=+0.019591733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[93df4a64-2523-4a1f-b2c0-94dab9cd1d99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 systemd[1]: Started libpod-conmon-1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2.scope.
Oct 11 05:13:50 np0005481065 systemd[1]: Started Virtual Machine qemu-128-instance-0000006a.
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[94914c90-faee-438b-bce7-4893f21569b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:50 np0005481065 auditd[700]: Audit daemon rotating log files
Oct 11 05:13:50 np0005481065 NetworkManager[44960]: <info>  [1760174030.9504] manager: (tap824eb213-10): new Veth device (/org/freedesktop/NetworkManager/Devices/421)
Oct 11 05:13:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d503f4f-c748-4330-a851-eaaecee717f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:50 np0005481065 podman[374060]: 2025-10-11 09:13:50.988204487 +0000 UTC m=+0.218864909 container init 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:13:51 np0005481065 podman[374060]: 2025-10-11 09:13:51.003911281 +0000 UTC m=+0.234571713 container start 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:13:51 np0005481065 podman[374060]: 2025-10-11 09:13:51.009367772 +0000 UTC m=+0.240028254 container attach 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:13:51 np0005481065 determined_hypatia[374094]: 167 167
Oct 11 05:13:51 np0005481065 systemd[1]: libpod-1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2.scope: Deactivated successfully.
Oct 11 05:13:51 np0005481065 podman[374060]: 2025-10-11 09:13:51.01866999 +0000 UTC m=+0.249330422 container died 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.014 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aae1dd39-2a35-4f59-99d4-720b5e2913c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.023 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f03e8087-16db-4437-92b6-cb16b51ea12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-046c3d7ec6a3e1ca8f36a32ed7d98d29bf0df08dd897555605e8fc0425975e6a-merged.mount: Deactivated successfully.
Oct 11 05:13:51 np0005481065 NetworkManager[44960]: <info>  [1760174031.0859] device (tap824eb213-10): carrier: link connected
Oct 11 05:13:51 np0005481065 podman[374060]: 2025-10-11 09:13:51.087436583 +0000 UTC m=+0.318096985 container remove 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.097 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7140e0e5-e298-44fc-b26b-2a0f617365e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 systemd[1]: libpod-conmon-1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2.scope: Deactivated successfully.
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.126 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7ecc06-ca0c-45f6-aeb3-f159ee1646b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587578, 'reachable_time': 18087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374141, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.151 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e505e4-1ad0-434e-b631-2028970a0d09]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:a71c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587578, 'tstamp': 587578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374142, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.178 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e8499fa9-53ec-4e28-8b5a-9514c1131190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587578, 'reachable_time': 18087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374143, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.212 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd85785-c96e-4ae0-96f1-532ee922c9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be543a97-4921-418d-95d3-8d29af03abc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.306 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.307 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.308 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap824eb213-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:51 np0005481065 nova_compute[260935]: 2025-10-11 09:13:51.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:51 np0005481065 NetworkManager[44960]: <info>  [1760174031.3113] manager: (tap824eb213-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Oct 11 05:13:51 np0005481065 kernel: tap824eb213-10: entered promiscuous mode
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.315 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap824eb213-10, col_values=(('external_ids', {'iface-id': 'f4946461-9ac5-43c5-81c4-5ba2f19b9f2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:13:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:13:51Z|01020|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 05:13:51 np0005481065 nova_compute[260935]: 2025-10-11 09:13:51.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.340 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7f4abd-9e7a-4c4e-b668-e6194e4d6d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.343 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:13:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.344 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'env', 'PROCESS_TAG=haproxy-824eb213-11b4-4a1c-ba73-2f66376f326f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/824eb213-11b4-4a1c-ba73-2f66376f326f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:13:51 np0005481065 podman[374154]: 2025-10-11 09:13:51.392676372 +0000 UTC m=+0.098721834 container create 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:13:51 np0005481065 podman[374154]: 2025-10-11 09:13:51.359203636 +0000 UTC m=+0.065249148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:13:51 np0005481065 systemd[1]: Started libpod-conmon-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope.
Oct 11 05:13:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:51 np0005481065 podman[374154]: 2025-10-11 09:13:51.542742696 +0000 UTC m=+0.248788218 container init 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:13:51 np0005481065 podman[374154]: 2025-10-11 09:13:51.557029821 +0000 UTC m=+0.263075243 container start 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:13:51 np0005481065 podman[374154]: 2025-10-11 09:13:51.560429775 +0000 UTC m=+0.266475287 container attach 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:13:51 np0005481065 podman[374243]: 2025-10-11 09:13:51.838312987 +0000 UTC m=+0.078222806 container create fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:13:51 np0005481065 systemd[1]: Started libpod-conmon-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595.scope.
Oct 11 05:13:51 np0005481065 podman[374243]: 2025-10-11 09:13:51.797570599 +0000 UTC m=+0.037480498 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:13:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15941a0eabc12ef18aa237435bb737824ea9fdbe58e73a7127e8fa87ef5c01c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:51 np0005481065 podman[374243]: 2025-10-11 09:13:51.949756072 +0000 UTC m=+0.189665901 container init fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:13:51 np0005481065 podman[374243]: 2025-10-11 09:13:51.95763106 +0000 UTC m=+0.197540889 container start fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:13:51 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : New worker (374263) forked
Oct 11 05:13:51 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : Loading success.
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.110 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 4b5bb809-c821-466d-9a47-b5a6aa337212 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.110 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174032.1075108, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.112 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Resumed (Lifecycle Event)
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.113 2 DEBUG nova.compute.manager [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.119 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance rebooted successfully.
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.120 2 DEBUG nova.compute.manager [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.160 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.179 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.217 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174032.1087902, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.217 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Started (Lifecycle Event)
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.243 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:13:52 np0005481065 nova_compute[260935]: 2025-10-11 09:13:52.247 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:13:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 30 KiB/s wr, 11 op/s
Oct 11 05:13:52 np0005481065 loving_davinci[374181]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:13:52 np0005481065 loving_davinci[374181]: --> relative data size: 1.0
Oct 11 05:13:52 np0005481065 loving_davinci[374181]: --> All data devices are unavailable
Oct 11 05:13:52 np0005481065 systemd[1]: libpod-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope: Deactivated successfully.
Oct 11 05:13:52 np0005481065 systemd[1]: libpod-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope: Consumed 1.107s CPU time.
Oct 11 05:13:52 np0005481065 podman[374296]: 2025-10-11 09:13:52.833869012 +0000 UTC m=+0.030385512 container died 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:13:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da-merged.mount: Deactivated successfully.
Oct 11 05:13:52 np0005481065 podman[374296]: 2025-10-11 09:13:52.924986034 +0000 UTC m=+0.121502544 container remove 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:13:52 np0005481065 systemd[1]: libpod-conmon-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope: Deactivated successfully.
Oct 11 05:13:53 np0005481065 podman[374451]: 2025-10-11 09:13:53.846885522 +0000 UTC m=+0.048403881 container create c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:13:53 np0005481065 systemd[1]: Started libpod-conmon-c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570.scope.
Oct 11 05:13:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:13:53 np0005481065 podman[374451]: 2025-10-11 09:13:53.827560747 +0000 UTC m=+0.029079126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:13:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:53 np0005481065 podman[374451]: 2025-10-11 09:13:53.949491312 +0000 UTC m=+0.151009701 container init c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:13:53 np0005481065 podman[374451]: 2025-10-11 09:13:53.961144274 +0000 UTC m=+0.162662633 container start c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:13:53 np0005481065 podman[374451]: 2025-10-11 09:13:53.964323542 +0000 UTC m=+0.165841901 container attach c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:13:53 np0005481065 epic_gates[374467]: 167 167
Oct 11 05:13:53 np0005481065 systemd[1]: libpod-c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570.scope: Deactivated successfully.
Oct 11 05:13:53 np0005481065 podman[374451]: 2025-10-11 09:13:53.970478753 +0000 UTC m=+0.171997152 container died c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:13:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c909e26b5ce3f4f13f640151274a5237d0989ccadd83bacf7f68229970cd4cfd-merged.mount: Deactivated successfully.
Oct 11 05:13:54 np0005481065 podman[374451]: 2025-10-11 09:13:54.026518714 +0000 UTC m=+0.228037093 container remove c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:13:54 np0005481065 systemd[1]: libpod-conmon-c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570.scope: Deactivated successfully.
Oct 11 05:13:54 np0005481065 podman[374490]: 2025-10-11 09:13:54.33167408 +0000 UTC m=+0.084308184 container create 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:13:54 np0005481065 systemd[1]: Started libpod-conmon-6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413.scope.
Oct 11 05:13:54 np0005481065 nova_compute[260935]: 2025-10-11 09:13:54.378 2 DEBUG nova.compute.manager [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:13:54 np0005481065 nova_compute[260935]: 2025-10-11 09:13:54.380 2 DEBUG oslo_concurrency.lockutils [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:13:54 np0005481065 podman[374490]: 2025-10-11 09:13:54.291702244 +0000 UTC m=+0.044336428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:13:54 np0005481065 nova_compute[260935]: 2025-10-11 09:13:54.380 2 DEBUG oslo_concurrency.lockutils [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:13:54 np0005481065 nova_compute[260935]: 2025-10-11 09:13:54.384 2 DEBUG oslo_concurrency.lockutils [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:13:54 np0005481065 nova_compute[260935]: 2025-10-11 09:13:54.384 2 DEBUG nova.compute.manager [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:13:54 np0005481065 nova_compute[260935]: 2025-10-11 09:13:54.385 2 WARNING nova.compute.manager [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state None.
Oct 11 05:13:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:54 np0005481065 podman[374490]: 2025-10-11 09:13:54.428804449 +0000 UTC m=+0.181438543 container init 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:13:54 np0005481065 podman[374490]: 2025-10-11 09:13:54.442374354 +0000 UTC m=+0.195008478 container start 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:13:54 np0005481065 podman[374490]: 2025-10-11 09:13:54.446310293 +0000 UTC m=+0.198944407 container attach 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:13:54
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 11 05:13:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]: {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:    "0": [
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:        {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "devices": [
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "/dev/loop3"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            ],
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_name": "ceph_lv0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_size": "21470642176",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "name": "ceph_lv0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "tags": {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cluster_name": "ceph",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.crush_device_class": "",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.encrypted": "0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osd_id": "0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.type": "block",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.vdo": "0"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            },
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "type": "block",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "vg_name": "ceph_vg0"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:        }
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:    ],
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:    "1": [
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:        {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "devices": [
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "/dev/loop4"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            ],
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_name": "ceph_lv1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_size": "21470642176",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "name": "ceph_lv1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "tags": {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cluster_name": "ceph",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.crush_device_class": "",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.encrypted": "0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osd_id": "1",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.type": "block",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.vdo": "0"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            },
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "type": "block",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "vg_name": "ceph_vg1"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:        }
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:    ],
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:    "2": [
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:        {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "devices": [
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "/dev/loop5"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            ],
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_name": "ceph_lv2",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_size": "21470642176",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "name": "ceph_lv2",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "tags": {
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.cluster_name": "ceph",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.crush_device_class": "",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.encrypted": "0",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osd_id": "2",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.type": "block",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:                "ceph.vdo": "0"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            },
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "type": "block",
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:            "vg_name": "ceph_vg2"
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:        }
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]:    ]
Oct 11 05:13:55 np0005481065 wonderful_engelbart[374507]: }
Oct 11 05:13:55 np0005481065 systemd[1]: libpod-6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413.scope: Deactivated successfully.
Oct 11 05:13:55 np0005481065 podman[374490]: 2025-10-11 09:13:55.290540841 +0000 UTC m=+1.043174975 container died 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:13:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:13:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc-merged.mount: Deactivated successfully.
Oct 11 05:13:55 np0005481065 nova_compute[260935]: 2025-10-11 09:13:55.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:55 np0005481065 podman[374490]: 2025-10-11 09:13:55.386857907 +0000 UTC m=+1.139492011 container remove 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:13:55 np0005481065 systemd[1]: libpod-conmon-6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413.scope: Deactivated successfully.
Oct 11 05:13:55 np0005481065 podman[374517]: 2025-10-11 09:13:55.47079212 +0000 UTC m=+0.145171329 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 05:13:55 np0005481065 nova_compute[260935]: 2025-10-11 09:13:55.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.180848913 +0000 UTC m=+0.056857885 container create 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:13:56 np0005481065 systemd[1]: Started libpod-conmon-785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe.scope.
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.158061732 +0000 UTC m=+0.034070684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:13:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.282673651 +0000 UTC m=+0.158682663 container init 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.299144017 +0000 UTC m=+0.175152989 container start 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.30359311 +0000 UTC m=+0.179602092 container attach 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:13:56 np0005481065 nifty_zhukovsky[374703]: 167 167
Oct 11 05:13:56 np0005481065 systemd[1]: libpod-785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe.scope: Deactivated successfully.
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.310042969 +0000 UTC m=+0.186051931 container died 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 11 05:13:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cac356a4a4743ac1b5926d654e4032faf8165de18b51f4360f5332caf6550791-merged.mount: Deactivated successfully.
Oct 11 05:13:56 np0005481065 podman[374686]: 2025-10-11 09:13:56.392015738 +0000 UTC m=+0.268024680 container remove 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:13:56 np0005481065 systemd[1]: libpod-conmon-785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe.scope: Deactivated successfully.
Oct 11 05:13:56 np0005481065 nova_compute[260935]: 2025-10-11 09:13:56.512 2 DEBUG nova.compute.manager [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:13:56 np0005481065 nova_compute[260935]: 2025-10-11 09:13:56.513 2 DEBUG oslo_concurrency.lockutils [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:13:56 np0005481065 nova_compute[260935]: 2025-10-11 09:13:56.513 2 DEBUG oslo_concurrency.lockutils [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:13:56 np0005481065 nova_compute[260935]: 2025-10-11 09:13:56.514 2 DEBUG oslo_concurrency.lockutils [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:13:56 np0005481065 nova_compute[260935]: 2025-10-11 09:13:56.514 2 DEBUG nova.compute.manager [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:13:56 np0005481065 nova_compute[260935]: 2025-10-11 09:13:56.514 2 WARNING nova.compute.manager [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:13:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 11 05:13:56 np0005481065 podman[374727]: 2025-10-11 09:13:56.647391186 +0000 UTC m=+0.080483278 container create 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:13:56 np0005481065 podman[374727]: 2025-10-11 09:13:56.610963838 +0000 UTC m=+0.044056020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:13:56 np0005481065 systemd[1]: Started libpod-conmon-9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d.scope.
Oct 11 05:13:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:13:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:13:56 np0005481065 podman[374727]: 2025-10-11 09:13:56.780334726 +0000 UTC m=+0.213426888 container init 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:13:56 np0005481065 podman[374727]: 2025-10-11 09:13:56.795965779 +0000 UTC m=+0.229057891 container start 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:13:56 np0005481065 podman[374727]: 2025-10-11 09:13:56.800212676 +0000 UTC m=+0.233304858 container attach 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:13:57 np0005481065 nice_sammet[374743]: {
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "osd_id": 2,
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "type": "bluestore"
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:    },
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "osd_id": 0,
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "type": "bluestore"
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:    },
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "osd_id": 1,
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:        "type": "bluestore"
Oct 11 05:13:57 np0005481065 nice_sammet[374743]:    }
Oct 11 05:13:57 np0005481065 nice_sammet[374743]: }
Oct 11 05:13:57 np0005481065 systemd[1]: libpod-9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d.scope: Deactivated successfully.
Oct 11 05:13:57 np0005481065 podman[374727]: 2025-10-11 09:13:57.770051221 +0000 UTC m=+1.203143333 container died 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:13:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d-merged.mount: Deactivated successfully.
Oct 11 05:13:57 np0005481065 podman[374727]: 2025-10-11 09:13:57.844920503 +0000 UTC m=+1.278012595 container remove 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:13:57 np0005481065 systemd[1]: libpod-conmon-9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d.scope: Deactivated successfully.
Oct 11 05:13:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:13:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:13:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:57 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 17b8ea58-ced7-4802-bb22-0ef879decb02 does not exist
Oct 11 05:13:57 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4b88a664-b9cc-4637-aa93-826f5b013ab1 does not exist
Oct 11 05:13:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 11 05:13:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.808 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01bd402c-a60d-400e-bb8e-f9e8f782e47c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f) old=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:13:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.811 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f in datapath 77797f2a-6d47-4af7-aba0-0c6e896bb489 updated#033[00m
Oct 11 05:13:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.814 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77797f2a-6d47-4af7-aba0-0c6e896bb489, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:13:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ede057e-d99b-4501-8827-d0bd0a4f84df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:13:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:13:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:00 np0005481065 nova_compute[260935]: 2025-10-11 09:14:00.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 11 05:14:00 np0005481065 nova_compute[260935]: 2025-10-11 09:14:00.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:01 np0005481065 podman[374839]: 2025-10-11 09:14:01.831295383 +0000 UTC m=+0.117445562 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:14:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.340 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.342 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.344 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.345 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb119a2d-4223-4435-8489-0c16437ec814]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 11 05:14:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:03Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 05:14:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033874134148922314 of space, bias 1.0, pg target 1.0162240244676695 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:14:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:14:05 np0005481065 nova_compute[260935]: 2025-10-11 09:14:05.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:05 np0005481065 nova_compute[260935]: 2025-10-11 09:14:05.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.024 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.024 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.026 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42a41ca8-1f61-441d-a71b-a7deed1cdef7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:14:06 np0005481065 podman[374862]: 2025-10-11 09:14:06.822122564 +0000 UTC m=+0.105503271 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 05:14:06 np0005481065 podman[374863]: 2025-10-11 09:14:06.863804037 +0000 UTC m=+0.142266048 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:14:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.443 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.444 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.447 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.448 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9d49f0-c156-4eb6-80e2-2539dcaa0de5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 109 op/s
Oct 11 05:14:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:10 np0005481065 nova_compute[260935]: 2025-10-11 09:14:10.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 45 op/s
Oct 11 05:14:10 np0005481065 nova_compute[260935]: 2025-10-11 09:14:10.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:10 np0005481065 nova_compute[260935]: 2025-10-11 09:14:10.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.819 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.821 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.823 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.823 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f2c391-7c0a-4392-813d-f712d37c06bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:11 np0005481065 nova_compute[260935]: 2025-10-11 09:14:11.340 2 INFO nova.compute.manager [None req-a9ccf037-b415-4fd8-ba5b-1a9737c06444 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Get console output#033[00m
Oct 11 05:14:11 np0005481065 nova_compute[260935]: 2025-10-11 09:14:11.348 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.361 2 DEBUG nova.compute.manager [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.361 2 DEBUG nova.compute.manager [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing instance network info cache due to event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.363 2 DEBUG nova.network.neutron [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.406 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.406 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.407 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.408 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.408 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.410 2 INFO nova.compute.manager [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Terminating instance#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.412 2 DEBUG nova.compute.manager [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:14:12 np0005481065 kernel: tap072bf760-1f (unregistering): left promiscuous mode
Oct 11 05:14:12 np0005481065 NetworkManager[44960]: <info>  [1760174052.4857] device (tap072bf760-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:12Z|01021|binding|INFO|Releasing lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 from this chassis (sb_readonly=0)
Oct 11 05:14:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:12Z|01022|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 down in Southbound
Oct 11 05:14:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:12Z|01023|binding|INFO|Removing iface tap072bf760-1f ovn-installed in OVS
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.510 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.511 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f unbound from our chassis#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.513 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 824eb213-11b4-4a1c-ba73-2f66376f326f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.513 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af39a327-df3f-45f9-bf37-bc6a4d56dbd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.514 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace which is not needed anymore#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 05:14:12 np0005481065 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 11 05:14:12 np0005481065 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d0000006a.scope: Consumed 13.468s CPU time.
Oct 11 05:14:12 np0005481065 systemd-machined[215705]: Machine qemu-128-instance-0000006a terminated.
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.663 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance destroyed successfully.#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.664 2 DEBUG nova.objects.instance [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.684 2 DEBUG nova.virt.libvirt.vif [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:52Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.686 2 DEBUG nova.network.os_vif_util [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.688 2 DEBUG nova.network.os_vif_util [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.688 2 DEBUG os_vif [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap072bf760-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.700 2 INFO os_vif [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')#033[00m
Oct 11 05:14:12 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : haproxy version is 2.8.14-c23fe91
Oct 11 05:14:12 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : path to executable is /usr/sbin/haproxy
Oct 11 05:14:12 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [WARNING]  (374261) : Exiting Master process...
Oct 11 05:14:12 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [ALERT]    (374261) : Current worker (374263) exited with code 143 (Terminated)
Oct 11 05:14:12 np0005481065 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [WARNING]  (374261) : All workers exited. Exiting... (0)
Oct 11 05:14:12 np0005481065 systemd[1]: libpod-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595.scope: Deactivated successfully.
Oct 11 05:14:12 np0005481065 podman[374937]: 2025-10-11 09:14:12.747564576 +0000 UTC m=+0.075073059 container died fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.774 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595-userdata-shm.mount: Deactivated successfully.
Oct 11 05:14:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-15941a0eabc12ef18aa237435bb737824ea9fdbe58e73a7127e8fa87ef5c01c5-merged.mount: Deactivated successfully.
Oct 11 05:14:12 np0005481065 podman[374937]: 2025-10-11 09:14:12.818349216 +0000 UTC m=+0.145857729 container cleanup fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 05:14:12 np0005481065 systemd[1]: libpod-conmon-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595.scope: Deactivated successfully.
Oct 11 05:14:12 np0005481065 podman[374991]: 2025-10-11 09:14:12.928789642 +0000 UTC m=+0.079441689 container remove fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.936 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33bef591-5136-4adc-b1b6-0a44a315c060]: (4, ('Sat Oct 11 09:14:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595)\nfe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595\nSat Oct 11 09:14:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595)\nfe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d99ea83-9fc5-42b7-913d-f1b84ba88096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.940 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 kernel: tap824eb213-10: left promiscuous mode
Oct 11 05:14:12 np0005481065 nova_compute[260935]: 2025-10-11 09:14:12.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4dbdbd-e751-4aba-9111-d8d6d66b228a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.993 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[345f9357-c1d2-43c6-bf5b-24fb2a521484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.995 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[643170e7-1f24-4f99-8213-8c96a68a3b96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.017 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd752f6e-38fb-4924-bb3b-8d33355702e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587563, 'reachable_time': 42835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375008, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:13 np0005481065 systemd[1]: run-netns-ovnmeta\x2d824eb213\x2d11b4\x2d4a1c\x2dba73\x2d2f66376f326f.mount: Deactivated successfully.
Oct 11 05:14:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.023 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:14:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.023 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec416a3-94b7-4153-9ab4-abc219320fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.024 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 unbound from our chassis#033[00m
Oct 11 05:14:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.026 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.027 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62357795-a2f7-47b3-840e-9abfacb32d3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:13 np0005481065 nova_compute[260935]: 2025-10-11 09:14:13.124 2 INFO nova.virt.libvirt.driver [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deleting instance files /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212_del#033[00m
Oct 11 05:14:13 np0005481065 nova_compute[260935]: 2025-10-11 09:14:13.125 2 INFO nova.virt.libvirt.driver [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deletion of /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212_del complete#033[00m
Oct 11 05:14:13 np0005481065 nova_compute[260935]: 2025-10-11 09:14:13.190 2 INFO nova.compute.manager [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:14:13 np0005481065 nova_compute[260935]: 2025-10-11 09:14:13.190 2 DEBUG oslo.service.loopingcall [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:14:13 np0005481065 nova_compute[260935]: 2025-10-11 09:14:13.191 2 DEBUG nova.compute.manager [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:14:13 np0005481065 nova_compute[260935]: 2025-10-11 09:14:13.191 2 DEBUG nova.network.neutron [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:14:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.523 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.524 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.524 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.524 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.525 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.525 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.526 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.526 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.527 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.527 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.527 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.528 2 WARNING nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:14:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.654 2 DEBUG nova.network.neutron [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.683 2 DEBUG nova.network.neutron [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updated VIF entry in instance network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.684 2 DEBUG nova.network.neutron [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.690 2 INFO nova.compute.manager [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 1.50 seconds to deallocate network for instance.#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.724 2 DEBUG oslo_concurrency.lockutils [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.761 2 DEBUG nova.compute.manager [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-deleted-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.761 2 INFO nova.compute.manager [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Neutron deleted interface 072bf760-1ffd-4b15-bdd9-58b9b670feb8; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.762 2 DEBUG nova.network.neutron [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.765 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.766 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.791 2 DEBUG nova.compute.manager [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Detach interface failed, port_id=072bf760-1ffd-4b15-bdd9-58b9b670feb8, reason: Instance 4b5bb809-c821-466d-9a47-b5a6aa337212 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:14:14 np0005481065 nova_compute[260935]: 2025-10-11 09:14:14.916 2 DEBUG oslo_concurrency.processutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:14:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032363935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.416 2 DEBUG oslo_concurrency.processutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.424 2 DEBUG nova.compute.provider_tree [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.445 2 DEBUG nova.scheduler.client.report [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.584 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.743 2 INFO nova.scheduler.client.report [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 4b5bb809-c821-466d-9a47-b5a6aa337212#033[00m
Oct 11 05:14:15 np0005481065 nova_compute[260935]: 2025-10-11 09:14:15.864 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 05:14:17 np0005481065 nova_compute[260935]: 2025-10-11 09:14:17.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 23 KiB/s wr, 74 op/s
Oct 11 05:14:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.652 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.654 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.656 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.657 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9254b364-92f9-47bf-8a05-98fb15d2bc39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.758 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.760 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.762 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.763 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7887cc6e-b6e5-4b5e-bc38-2273ff91d787]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:20Z|01024|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:14:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:20Z|01025|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:14:20 np0005481065 nova_compute[260935]: 2025-10-11 09:14:20.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:20Z|01026|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:14:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:20Z|01027|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:14:20 np0005481065 nova_compute[260935]: 2025-10-11 09:14:20.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:20 np0005481065 nova_compute[260935]: 2025-10-11 09:14:20.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.696 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.697 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.698 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.698 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.699 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.699 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.700 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.701 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.701 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.800 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:21 np0005481065 nova_compute[260935]: 2025-10-11 09:14:21.803 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 05:14:22 np0005481065 nova_compute[260935]: 2025-10-11 09:14:22.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:22 np0005481065 nova_compute[260935]: 2025-10-11 09:14:22.743 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:23 np0005481065 nova_compute[260935]: 2025-10-11 09:14:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:14:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:14:25 np0005481065 nova_compute[260935]: 2025-10-11 09:14:25.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.684 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.685 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.688 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.689 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f012c683-6c20-4edd-8218-95eca14b936c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:25 np0005481065 nova_compute[260935]: 2025-10-11 09:14:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:25 np0005481065 nova_compute[260935]: 2025-10-11 09:14:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:25 np0005481065 nova_compute[260935]: 2025-10-11 09:14:25.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:14:25 np0005481065 podman[375032]: 2025-10-11 09:14:25.804227461 +0000 UTC m=+0.100178684 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:14:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:14:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:14:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3223603160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:14:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:14:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3223603160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:14:26 np0005481065 nova_compute[260935]: 2025-10-11 09:14:26.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:26.701 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:26.703 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:14:26 np0005481065 nova_compute[260935]: 2025-10-11 09:14:26.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:27 np0005481065 nova_compute[260935]: 2025-10-11 09:14:27.659 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174052.6581135, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:14:27 np0005481065 nova_compute[260935]: 2025-10-11 09:14:27.660 2 INFO nova.compute.manager [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:14:27 np0005481065 nova_compute[260935]: 2025-10-11 09:14:27.689 2 DEBUG nova.compute.manager [None req-0fd74466-5629-43e7-9801-53bda3281b2a - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:14:27 np0005481065 nova_compute[260935]: 2025-10-11 09:14:27.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:14:28 np0005481065 nova_compute[260935]: 2025-10-11 09:14:28.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:28 np0005481065 nova_compute[260935]: 2025-10-11 09:14:28.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:28 np0005481065 nova_compute[260935]: 2025-10-11 09:14:28.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:28 np0005481065 nova_compute[260935]: 2025-10-11 09:14:28.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:28 np0005481065 nova_compute[260935]: 2025-10-11 09:14:28.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:14:28 np0005481065 nova_compute[260935]: 2025-10-11 09:14:28.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:14:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704317858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.215 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.329 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.335 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.336 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.341 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.342 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.609 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.610 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3030MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.611 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.611 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.734 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.757 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.782 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.782 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.802 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.834 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:14:29 np0005481065 nova_compute[260935]: 2025-10-11 09:14:29.951 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:14:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134330022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:14:30 np0005481065 nova_compute[260935]: 2025-10-11 09:14:30.434 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:30 np0005481065 nova_compute[260935]: 2025-10-11 09:14:30.444 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:14:30 np0005481065 nova_compute[260935]: 2025-10-11 09:14:30.471 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:14:30 np0005481065 nova_compute[260935]: 2025-10-11 09:14:30.544 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:14:30 np0005481065 nova_compute[260935]: 2025-10-11 09:14:30.545 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:30 np0005481065 nova_compute[260935]: 2025-10-11 09:14:30.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:14:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:30.705 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:31 np0005481065 nova_compute[260935]: 2025-10-11 09:14:31.547 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:31 np0005481065 nova_compute[260935]: 2025-10-11 09:14:31.548 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:14:31 np0005481065 nova_compute[260935]: 2025-10-11 09:14:31.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:14:31 np0005481065 nova_compute[260935]: 2025-10-11 09:14:31.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:14:31 np0005481065 nova_compute[260935]: 2025-10-11 09:14:31.865 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:14:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:14:32 np0005481065 nova_compute[260935]: 2025-10-11 09:14:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:32 np0005481065 podman[375100]: 2025-10-11 09:14:32.792844879 +0000 UTC m=+0.090201848 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:14:33 np0005481065 nova_compute[260935]: 2025-10-11 09:14:33.716 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:33 np0005481065 nova_compute[260935]: 2025-10-11 09:14:33.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:14:33 np0005481065 nova_compute[260935]: 2025-10-11 09:14:33.744 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:14:33 np0005481065 nova_compute[260935]: 2025-10-11 09:14:33.745 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:14:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:14:35 np0005481065 nova_compute[260935]: 2025-10-11 09:14:35.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.677 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.678 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.750 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:37 np0005481065 podman[375120]: 2025-10-11 09:14:37.86714685 +0000 UTC m=+0.160831003 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.879 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.879 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:37 np0005481065 podman[375121]: 2025-10-11 09:14:37.884191752 +0000 UTC m=+0.174113961 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.887 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:14:37 np0005481065 nova_compute[260935]: 2025-10-11 09:14:37.887 2 INFO nova.compute.claims [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.287 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:14:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:14:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791196006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.764 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.773 2 DEBUG nova.compute.provider_tree [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.799 2 DEBUG nova.scheduler.client.report [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.839 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.840 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.904 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.904 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:14:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.923 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:14:38 np0005481065 nova_compute[260935]: 2025-10-11 09:14:38.948 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.076 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.078 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.079 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Creating image(s)#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.113 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.145 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.178 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.182 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.230 2 DEBUG nova.policy [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.280 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.281 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.282 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.283 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.317 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.321 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.621 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.712 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.843 2 DEBUG nova.objects.instance [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.872 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.872 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Ensure instance console log exists: /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.873 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.874 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:39 np0005481065 nova_compute[260935]: 2025-10-11 09:14:39.875 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:40 np0005481065 nova_compute[260935]: 2025-10-11 09:14:40.291 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Successfully created port: ef5b5080-0657-4874-a32f-f79c370fbb6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:14:40 np0005481065 nova_compute[260935]: 2025-10-11 09:14:40.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.213 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Successfully updated port: ef5b5080-0657-4874-a32f-f79c370fbb6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.230 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.231 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.231 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.355 2 DEBUG nova.compute.manager [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.356 2 DEBUG nova.compute.manager [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing instance network info cache due to event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.357 2 DEBUG oslo_concurrency.lockutils [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:14:41 np0005481065 nova_compute[260935]: 2025-10-11 09:14:41.731 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:14:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 358 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.5 MiB/s wr, 14 op/s
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.781 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.808 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.809 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance network_info: |[{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.810 2 DEBUG oslo_concurrency.lockutils [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.810 2 DEBUG nova.network.neutron [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.815 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start _get_guest_xml network_info=[{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.823 2 WARNING nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.831 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.832 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.840 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.841 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.842 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.842 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.843 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.843 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.844 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.844 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.845 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.845 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.846 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.846 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.847 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.847 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:14:42 np0005481065 nova_compute[260935]: 2025-10-11 09:14:42.852 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:14:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489057043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.367 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.404 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.410 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:14:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814180833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.950 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.952 2 DEBUG nova.virt.libvirt.vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:14:39Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.952 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.953 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.954 2 DEBUG nova.objects.instance [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.989 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <uuid>dd4f6627-2bd4-4ad6-8e1e-c5bec1228096</uuid>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <name>instance-0000006b</name>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1538313355</nova:name>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:14:42</nova:creationTime>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <nova:port uuid="ef5b5080-0657-4874-a32f-f79c370fbb6f">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <entry name="serial">dd4f6627-2bd4-4ad6-8e1e-c5bec1228096</entry>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <entry name="uuid">dd4f6627-2bd4-4ad6-8e1e-c5bec1228096</entry>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:73:4b:2f"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <target dev="tapef5b5080-06"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/console.log" append="off"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:14:43 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:14:43 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:14:43 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:14:43 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.992 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Preparing to wait for external event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:14:43 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.993 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.993 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.994 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.995 2 DEBUG nova.virt.libvirt.vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:14:39Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.996 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.997 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.998 2 DEBUG os_vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:43.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.001 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5b5080-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef5b5080-06, col_values=(('external_ids', {'iface-id': 'ef5b5080-0657-4874-a32f-f79c370fbb6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:4b:2f', 'vm-uuid': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:44 np0005481065 NetworkManager[44960]: <info>  [1760174084.0116] manager: (tapef5b5080-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.016 2 INFO os_vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06')#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.085 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.085 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.086 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:73:4b:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.086 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Using config drive#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.119 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.428 2 DEBUG nova.network.neutron [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updated VIF entry in instance network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.429 2 DEBUG nova.network.neutron [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.452 2 DEBUG oslo_concurrency.lockutils [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:14:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.610 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Creating config drive at /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.617 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9os0ois execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.785 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9os0ois" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.824 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.829 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.988 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:14:44 np0005481065 nova_compute[260935]: 2025-10-11 09:14:44.989 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deleting local config drive /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config because it was imported into RBD.#033[00m
Oct 11 05:14:45 np0005481065 kernel: tapef5b5080-06: entered promiscuous mode
Oct 11 05:14:45 np0005481065 NetworkManager[44960]: <info>  [1760174085.0677] manager: (tapef5b5080-06): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Oct 11 05:14:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:45Z|01028|binding|INFO|Claiming lport ef5b5080-0657-4874-a32f-f79c370fbb6f for this chassis.
Oct 11 05:14:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:45Z|01029|binding|INFO|ef5b5080-0657-4874-a32f-f79c370fbb6f: Claiming fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.092 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.093 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 bound to our chassis#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.095 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89#033[00m
Oct 11 05:14:45 np0005481065 systemd-udevd[375489]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[059f12fc-b34b-4db0-ad09-42ca38e5c3ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.114 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b2dcf8f-21 in ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.117 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b2dcf8f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.117 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb6dd39-6ac6-4647-9e29-4eaecf27f3e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.119 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1c13d017-e9a7-472e-8e90-763edb2df2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 NetworkManager[44960]: <info>  [1760174085.1266] device (tapef5b5080-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:14:45 np0005481065 NetworkManager[44960]: <info>  [1760174085.1282] device (tapef5b5080-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.133 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0d5390-5573-48af-8373-17ff87a30372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 systemd-machined[215705]: New machine qemu-129-instance-0000006b.
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:45Z|01030|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f ovn-installed in OVS
Oct 11 05:14:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:45Z|01031|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f up in Southbound
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 systemd[1]: Started Virtual Machine qemu-129-instance-0000006b.
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.168 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e63af39-f7a6-4180-b7ba-cc8370e9d13a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.203 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1e996c4e-adcd-4836-bacf-968017459999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.207 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf51311-fff6-48d2-aa78-09227ec611f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 systemd-udevd[375496]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:14:45 np0005481065 NetworkManager[44960]: <info>  [1760174085.2088] manager: (tap4b2dcf8f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/425)
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.238 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fcedf2d3-0da2-449d-856e-5c6ae0416643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.241 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[08525a40-37dc-4559-82ee-1d481c398cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 NetworkManager[44960]: <info>  [1760174085.2599] device (tap4b2dcf8f-20): carrier: link connected
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.268 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0fac29-210e-452a-8f3d-e0db9759e3ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.288 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba751bc8-73a1-4e82-bac2-6b02d789f2d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592996, 'reachable_time': 36712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375526, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.307 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d85f8659-2657-4937-a565-b4ab9e1968e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:4874'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592996, 'tstamp': 592996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375527, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.333 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[469f5481-b2b7-45ea-bd4a-ff59c2f1c023]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592996, 'reachable_time': 36712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375528, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0a4a32-b956-4c93-8b17-73a1e8605b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.491 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80eae5aa-9dd1-4155-935b-8a4b18f67fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.493 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.493 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.494 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b2dcf8f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:45 np0005481065 NetworkManager[44960]: <info>  [1760174085.5341] manager: (tap4b2dcf8f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 kernel: tap4b2dcf8f-20: entered promiscuous mode
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.538 2 DEBUG nova.compute.manager [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.538 2 DEBUG oslo_concurrency.lockutils [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.539 2 DEBUG oslo_concurrency.lockutils [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.539 2 DEBUG oslo_concurrency.lockutils [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.539 2 DEBUG nova.compute.manager [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Processing event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.541 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b2dcf8f-20, col_values=(('external_ids', {'iface-id': '0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:45Z|01032|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.575 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.576 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5006f02-218b-449b-a901-aa8ad0621860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.578 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:14:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.581 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'env', 'PROCESS_TAG=haproxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.978 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174085.9776804, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.980 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Started (Lifecycle Event)#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.984 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.989 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.995 2 INFO nova.virt.libvirt.driver [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance spawned successfully.#033[00m
Oct 11 05:14:45 np0005481065 nova_compute[260935]: 2025-10-11 09:14:45.995 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.015 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.026 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:14:46 np0005481065 podman[375602]: 2025-10-11 09:14:46.028791637 +0000 UTC m=+0.086066483 container create 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.034 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.035 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.035 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.036 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.037 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.038 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.063 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174085.978961, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:14:46 np0005481065 podman[375602]: 2025-10-11 09:14:45.984325066 +0000 UTC m=+0.041599992 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:14:46 np0005481065 systemd[1]: Started libpod-conmon-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope.
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.090 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.098 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174085.9867253, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.099 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:14:46 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.132 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:14:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c774767af60c322abb2e50f9ecf5a65e5ec0268f3808529aa2cf4be24924c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.137 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.142 2 INFO nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 7.06 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.142 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:14:46 np0005481065 podman[375602]: 2025-10-11 09:14:46.167890947 +0000 UTC m=+0.225165873 container init 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:14:46 np0005481065 podman[375602]: 2025-10-11 09:14:46.178197402 +0000 UTC m=+0.235472278 container start 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:14:46 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : New worker (375624) forked
Oct 11 05:14:46 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : Loading success.
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.371 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.429 2 INFO nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 8.63 seconds to build instance.#033[00m
Oct 11 05:14:46 np0005481065 nova_compute[260935]: 2025-10-11 09:14:46.447 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:14:47 np0005481065 nova_compute[260935]: 2025-10-11 09:14:47.605 2 DEBUG nova.compute.manager [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:47 np0005481065 nova_compute[260935]: 2025-10-11 09:14:47.605 2 DEBUG oslo_concurrency.lockutils [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:14:47 np0005481065 nova_compute[260935]: 2025-10-11 09:14:47.606 2 DEBUG oslo_concurrency.lockutils [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:14:47 np0005481065 nova_compute[260935]: 2025-10-11 09:14:47.606 2 DEBUG oslo_concurrency.lockutils [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:14:47 np0005481065 nova_compute[260935]: 2025-10-11 09:14:47.606 2 DEBUG nova.compute.manager [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:14:47 np0005481065 nova_compute[260935]: 2025-10-11 09:14:47.607 2 WARNING nova.compute.manager [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state active and task_state None.#033[00m
Oct 11 05:14:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 05:14:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:49 np0005481065 nova_compute[260935]: 2025-10-11 09:14:49.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:50Z|01033|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 05:14:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:50Z|01034|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:14:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:50Z|01035|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:14:50 np0005481065 NetworkManager[44960]: <info>  [1760174090.2440] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Oct 11 05:14:50 np0005481065 NetworkManager[44960]: <info>  [1760174090.2450] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:50Z|01036|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:50Z|01037|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:14:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:50Z|01038|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.948 2 DEBUG nova.compute.manager [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.948 2 DEBUG nova.compute.manager [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing instance network info cache due to event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.949 2 DEBUG oslo_concurrency.lockutils [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.949 2 DEBUG oslo_concurrency.lockutils [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:14:50 np0005481065 nova_compute[260935]: 2025-10-11 09:14:50.950 2 DEBUG nova.network.neutron [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:14:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:14:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:54 np0005481065 nova_compute[260935]: 2025-10-11 09:14:54.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:54 np0005481065 nova_compute[260935]: 2025-10-11 09:14:54.113 2 DEBUG nova.network.neutron [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updated VIF entry in instance network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:14:54 np0005481065 nova_compute[260935]: 2025-10-11 09:14:54.113 2 DEBUG nova.network.neutron [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:14:54 np0005481065 nova_compute[260935]: 2025-10-11 09:14:54.138 2 DEBUG oslo_concurrency.lockutils [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 292 KiB/s wr, 86 op/s
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:14:54
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Oct 11 05:14:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:14:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:14:55 np0005481065 nova_compute[260935]: 2025-10-11 09:14:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.488 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8:0:1:f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:14:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.490 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated#033[00m
Oct 11 05:14:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.492 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:14:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa6f894-e56c-492a-ab5c-045821fad8d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:14:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:14:56 np0005481065 podman[375637]: 2025-10-11 09:14:56.810463772 +0000 UTC m=+0.091521134 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:14:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:57Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 05:14:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:14:57Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 05:14:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 400 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 05:14:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:14:59 np0005481065 nova_compute[260935]: 2025-10-11 09:14:59.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:14:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0c52ef51-3194-4ac1-af8f-80b2fbd174b5 does not exist
Oct 11 05:14:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 87be00fa-e806-47f4-8fb4-7e80c85e9cb4 does not exist
Oct 11 05:14:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a241172c-b74e-4068-b0e2-2c41b0c77ce2 does not exist
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:14:59 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:14:59 np0005481065 podman[375930]: 2025-10-11 09:14:59.940666223 +0000 UTC m=+0.064571828 container create 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:14:59 np0005481065 systemd[1]: Started libpod-conmon-0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8.scope.
Oct 11 05:15:00 np0005481065 podman[375930]: 2025-10-11 09:14:59.914997753 +0000 UTC m=+0.038903388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:15:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:00 np0005481065 podman[375930]: 2025-10-11 09:15:00.05289929 +0000 UTC m=+0.176804915 container init 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:15:00 np0005481065 podman[375930]: 2025-10-11 09:15:00.062529926 +0000 UTC m=+0.186435541 container start 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 11 05:15:00 np0005481065 podman[375930]: 2025-10-11 09:15:00.06662393 +0000 UTC m=+0.190529555 container attach 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:15:00 np0005481065 goofy_bhaskara[375947]: 167 167
Oct 11 05:15:00 np0005481065 systemd[1]: libpod-0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8.scope: Deactivated successfully.
Oct 11 05:15:00 np0005481065 podman[375930]: 2025-10-11 09:15:00.070022934 +0000 UTC m=+0.193928559 container died 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:15:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d8c75588465589cdf7a6897c7d068254530256967745fc3bef4c89ace7b80eb7-merged.mount: Deactivated successfully.
Oct 11 05:15:00 np0005481065 podman[375930]: 2025-10-11 09:15:00.121397366 +0000 UTC m=+0.245302981 container remove 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:15:00 np0005481065 systemd[1]: libpod-conmon-0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8.scope: Deactivated successfully.
Oct 11 05:15:00 np0005481065 podman[375974]: 2025-10-11 09:15:00.414038506 +0000 UTC m=+0.076695374 container create dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:15:00 np0005481065 podman[375974]: 2025-10-11 09:15:00.379472559 +0000 UTC m=+0.042129487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:15:00 np0005481065 systemd[1]: Started libpod-conmon-dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa.scope.
Oct 11 05:15:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:00 np0005481065 podman[375974]: 2025-10-11 09:15:00.560754636 +0000 UTC m=+0.223411504 container init dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:15:00 np0005481065 podman[375974]: 2025-10-11 09:15:00.575980627 +0000 UTC m=+0.238637505 container start dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:15:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 400 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:15:00 np0005481065 podman[375974]: 2025-10-11 09:15:00.580312457 +0000 UTC m=+0.242969325 container attach dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 05:15:00 np0005481065 nova_compute[260935]: 2025-10-11 09:15:00.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:01 np0005481065 stupefied_jennings[375991]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:15:01 np0005481065 stupefied_jennings[375991]: --> relative data size: 1.0
Oct 11 05:15:01 np0005481065 stupefied_jennings[375991]: --> All data devices are unavailable
Oct 11 05:15:01 np0005481065 systemd[1]: libpod-dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa.scope: Deactivated successfully.
Oct 11 05:15:01 np0005481065 podman[376020]: 2025-10-11 09:15:01.641291514 +0000 UTC m=+0.043917356 container died dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:15:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce-merged.mount: Deactivated successfully.
Oct 11 05:15:01 np0005481065 podman[376020]: 2025-10-11 09:15:01.707099656 +0000 UTC m=+0.109725418 container remove dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:15:01 np0005481065 systemd[1]: libpod-conmon-dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa.scope: Deactivated successfully.
Oct 11 05:15:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 616 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.649915522 +0000 UTC m=+0.071933442 container create 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:15:02 np0005481065 systemd[1]: Started libpod-conmon-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope.
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.621375402 +0000 UTC m=+0.043393362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:15:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.766706935 +0000 UTC m=+0.188724885 container init 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.773506573 +0000 UTC m=+0.195524493 container start 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.778040659 +0000 UTC m=+0.200058569 container attach 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 05:15:02 np0005481065 friendly_wozniak[376196]: 167 167
Oct 11 05:15:02 np0005481065 systemd[1]: libpod-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope: Deactivated successfully.
Oct 11 05:15:02 np0005481065 conmon[376196]: conmon 5d5fc40996ad7915ec90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope/container/memory.events
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.784159798 +0000 UTC m=+0.206177678 container died 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:15:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a8bc64e3e1c74dcdfe3a5bf6d178453a24558646e27efdb8672768d03b2a317d-merged.mount: Deactivated successfully.
Oct 11 05:15:02 np0005481065 podman[376179]: 2025-10-11 09:15:02.847267805 +0000 UTC m=+0.269285715 container remove 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:15:02 np0005481065 systemd[1]: libpod-conmon-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope: Deactivated successfully.
Oct 11 05:15:02 np0005481065 podman[376213]: 2025-10-11 09:15:02.933305416 +0000 UTC m=+0.073409073 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:15:03 np0005481065 podman[376240]: 2025-10-11 09:15:03.086471906 +0000 UTC m=+0.062418409 container create 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:15:03 np0005481065 systemd[1]: Started libpod-conmon-2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86.scope.
Oct 11 05:15:03 np0005481065 podman[376240]: 2025-10-11 09:15:03.056537247 +0000 UTC m=+0.032483820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:15:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:03 np0005481065 podman[376240]: 2025-10-11 09:15:03.201291624 +0000 UTC m=+0.177238197 container init 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:15:03 np0005481065 podman[376240]: 2025-10-11 09:15:03.214324485 +0000 UTC m=+0.190270998 container start 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:15:03 np0005481065 podman[376240]: 2025-10-11 09:15:03.220242008 +0000 UTC m=+0.196188531 container attach 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:15:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]: {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:    "0": [
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:        {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "devices": [
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "/dev/loop3"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            ],
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_name": "ceph_lv0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_size": "21470642176",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "name": "ceph_lv0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "tags": {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cluster_name": "ceph",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.crush_device_class": "",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.encrypted": "0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osd_id": "0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.type": "block",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.vdo": "0"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            },
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "type": "block",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "vg_name": "ceph_vg0"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:        }
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:    ],
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:    "1": [
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:        {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "devices": [
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "/dev/loop4"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            ],
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_name": "ceph_lv1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_size": "21470642176",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "name": "ceph_lv1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "tags": {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cluster_name": "ceph",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.crush_device_class": "",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.encrypted": "0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osd_id": "1",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.type": "block",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.vdo": "0"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            },
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "type": "block",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "vg_name": "ceph_vg1"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:        }
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:    ],
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:    "2": [
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:        {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "devices": [
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "/dev/loop5"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            ],
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_name": "ceph_lv2",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_size": "21470642176",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "name": "ceph_lv2",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "tags": {
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.cluster_name": "ceph",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.crush_device_class": "",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.encrypted": "0",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osd_id": "2",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.type": "block",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:                "ceph.vdo": "0"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            },
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "type": "block",
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:            "vg_name": "ceph_vg2"
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:        }
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]:    ]
Oct 11 05:15:03 np0005481065 priceless_clarke[376257]: }
Oct 11 05:15:04 np0005481065 systemd[1]: libpod-2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86.scope: Deactivated successfully.
Oct 11 05:15:04 np0005481065 podman[376240]: 2025-10-11 09:15:04.002237133 +0000 UTC m=+0.978183596 container died 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:15:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0-merged.mount: Deactivated successfully.
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:04 np0005481065 podman[376240]: 2025-10-11 09:15:04.070311958 +0000 UTC m=+1.046258471 container remove 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:15:04 np0005481065 systemd[1]: libpod-conmon-2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86.scope: Deactivated successfully.
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.497 2 INFO nova.compute.manager [None req-537c3019-fecd-4ec8-af26-cf075784eff4 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Get console output#033[00m
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.504 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:15:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:15:04 np0005481065 podman[376421]: 2025-10-11 09:15:04.908279621 +0000 UTC m=+0.057534143 container create f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.923 2 DEBUG nova.objects.instance [None req-2f012b15-b6c8-4501-89aa-611311630683 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.951 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174104.951147, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.952 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:15:04 np0005481065 systemd[1]: Started libpod-conmon-f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48.scope.
Oct 11 05:15:04 np0005481065 podman[376421]: 2025-10-11 09:15:04.880803321 +0000 UTC m=+0.030057913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.975 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:04 np0005481065 nova_compute[260935]: 2025-10-11 09:15:04.984 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:15:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 11 05:15:05 np0005481065 podman[376421]: 2025-10-11 09:15:05.027377058 +0000 UTC m=+0.176631590 container init f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 05:15:05 np0005481065 podman[376421]: 2025-10-11 09:15:05.039570695 +0000 UTC m=+0.188825227 container start f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:15:05 np0005481065 podman[376421]: 2025-10-11 09:15:05.045090758 +0000 UTC m=+0.194345340 container attach f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:15:05 np0005481065 reverent_clarke[376439]: 167 167
Oct 11 05:15:05 np0005481065 systemd[1]: libpod-f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48.scope: Deactivated successfully.
Oct 11 05:15:05 np0005481065 podman[376421]: 2025-10-11 09:15:05.051168376 +0000 UTC m=+0.200422908 container died f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033839795166699226 of space, bias 1.0, pg target 1.0151938550009767 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:15:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:15:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9fa93331277ca4f40f339565e6122fe3a57002e00e02aad7f56d1a5059d1f5dd-merged.mount: Deactivated successfully.
Oct 11 05:15:05 np0005481065 podman[376421]: 2025-10-11 09:15:05.102117786 +0000 UTC m=+0.251372288 container remove f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:15:05 np0005481065 systemd[1]: libpod-conmon-f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48.scope: Deactivated successfully.
Oct 11 05:15:05 np0005481065 podman[376464]: 2025-10-11 09:15:05.366896545 +0000 UTC m=+0.056730621 container create 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 05:15:05 np0005481065 systemd[1]: Started libpod-conmon-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope.
Oct 11 05:15:05 np0005481065 podman[376464]: 2025-10-11 09:15:05.339383654 +0000 UTC m=+0.029217800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:15:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:05 np0005481065 podman[376464]: 2025-10-11 09:15:05.53363144 +0000 UTC m=+0.223465526 container init 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:15:05 np0005481065 podman[376464]: 2025-10-11 09:15:05.547158855 +0000 UTC m=+0.236992931 container start 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:15:05 np0005481065 podman[376464]: 2025-10-11 09:15:05.551274879 +0000 UTC m=+0.241108975 container attach 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 05:15:05 np0005481065 kernel: tapef5b5080-06 (unregistering): left promiscuous mode
Oct 11 05:15:05 np0005481065 NetworkManager[44960]: <info>  [1760174105.5954] device (tapef5b5080-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:15:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:05Z|01039|binding|INFO|Releasing lport ef5b5080-0657-4874-a32f-f79c370fbb6f from this chassis (sb_readonly=0)
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:05Z|01040|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f down in Southbound
Oct 11 05:15:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:05Z|01041|binding|INFO|Removing iface tapef5b5080-06 ovn-installed in OVS
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.626 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.628 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 unbound from our chassis#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.632 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.634 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[145eafbb-65af-4782-881c-685e5ab47003]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.635 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace which is not needed anymore#033[00m
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 11 05:15:05 np0005481065 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d0000006b.scope: Consumed 12.515s CPU time.
Oct 11 05:15:05 np0005481065 systemd-machined[215705]: Machine qemu-129-instance-0000006b terminated.
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.776 2 DEBUG nova.compute.manager [None req-2f012b15-b6c8-4501-89aa-611311630683 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:05 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : haproxy version is 2.8.14-c23fe91
Oct 11 05:15:05 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : path to executable is /usr/sbin/haproxy
Oct 11 05:15:05 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [ALERT]    (375622) : Current worker (375624) exited with code 143 (Terminated)
Oct 11 05:15:05 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [WARNING]  (375622) : All workers exited. Exiting... (0)
Oct 11 05:15:05 np0005481065 systemd[1]: libpod-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope: Deactivated successfully.
Oct 11 05:15:05 np0005481065 conmon[375618]: conmon 9c990cd252a00b3aae10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope/container/memory.events
Oct 11 05:15:05 np0005481065 podman[376511]: 2025-10-11 09:15:05.838364465 +0000 UTC m=+0.066377168 container died 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:15:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9-userdata-shm.mount: Deactivated successfully.
Oct 11 05:15:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-93c774767af60c322abb2e50f9ecf5a65e5ec0268f3808529aa2cf4be24924c9-merged.mount: Deactivated successfully.
Oct 11 05:15:05 np0005481065 podman[376511]: 2025-10-11 09:15:05.904027243 +0000 UTC m=+0.132039916 container cleanup 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:15:05 np0005481065 systemd[1]: libpod-conmon-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope: Deactivated successfully.
Oct 11 05:15:05 np0005481065 podman[376545]: 2025-10-11 09:15:05.981369163 +0000 UTC m=+0.045971863 container remove 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.990 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cebc7e-55f4-4bae-9d60-92a5f00394d1]: (4, ('Sat Oct 11 09:15:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9)\n9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9\nSat Oct 11 09:15:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9)\n9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.993 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dd5456-fc4f-40a7-a915-fe6d5a308ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.994 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:05 np0005481065 nova_compute[260935]: 2025-10-11 09:15:05.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:05 np0005481065 kernel: tap4b2dcf8f-20: left promiscuous mode
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[835ca78a-1776-4373-ab11-42c9a8c62be7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44e5dc23-8c91-4450-932d-58256a0ce875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47359233-c670-422a-9438-447a31632b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdece7a-9d83-4ff8-893f-084ef027d0ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592989, 'reachable_time': 23448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376563, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:06 np0005481065 systemd[1]: run-netns-ovnmeta\x2d4b2dcf8f\x2d2d20\x2d4207\x2d8c49\x2dd03814ca4c89.mount: Deactivated successfully.
Oct 11 05:15:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.093 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:15:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.094 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d941dbbb-752d-4e58-be55-ed6edd37321d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.176 2 DEBUG nova.compute.manager [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.178 2 DEBUG oslo_concurrency.lockutils [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.179 2 DEBUG oslo_concurrency.lockutils [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.180 2 DEBUG oslo_concurrency.lockutils [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.181 2 DEBUG nova.compute.manager [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:15:06 np0005481065 nova_compute[260935]: 2025-10-11 09:15:06.181 2 WARNING nova.compute.manager [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:15:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:15:06 np0005481065 brave_euler[376481]: {
Oct 11 05:15:06 np0005481065 brave_euler[376481]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "osd_id": 2,
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "type": "bluestore"
Oct 11 05:15:06 np0005481065 brave_euler[376481]:    },
Oct 11 05:15:06 np0005481065 brave_euler[376481]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "osd_id": 0,
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "type": "bluestore"
Oct 11 05:15:06 np0005481065 brave_euler[376481]:    },
Oct 11 05:15:06 np0005481065 brave_euler[376481]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "osd_id": 1,
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:15:06 np0005481065 brave_euler[376481]:        "type": "bluestore"
Oct 11 05:15:06 np0005481065 brave_euler[376481]:    }
Oct 11 05:15:06 np0005481065 brave_euler[376481]: }
Oct 11 05:15:06 np0005481065 systemd[1]: libpod-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope: Deactivated successfully.
Oct 11 05:15:06 np0005481065 systemd[1]: libpod-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope: Consumed 1.161s CPU time.
Oct 11 05:15:06 np0005481065 conmon[376481]: conmon 1c6d2d20fd65e152956c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope/container/memory.events
Oct 11 05:15:06 np0005481065 podman[376464]: 2025-10-11 09:15:06.737987316 +0000 UTC m=+1.427821422 container died 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:15:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270-merged.mount: Deactivated successfully.
Oct 11 05:15:06 np0005481065 podman[376464]: 2025-10-11 09:15:06.80967655 +0000 UTC m=+1.499510626 container remove 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:15:06 np0005481065 systemd[1]: libpod-conmon-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope: Deactivated successfully.
Oct 11 05:15:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:15:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:15:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:15:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:15:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8f128bb1-1ca3-4f16-b212-bc09d4eaee0d does not exist
Oct 11 05:15:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cefe412f-19a5-4789-8f8c-33bfc14006da does not exist
Oct 11 05:15:07 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:15:07 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:15:08 np0005481065 nova_compute[260935]: 2025-10-11 09:15:08.297 2 DEBUG nova.compute.manager [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:08 np0005481065 nova_compute[260935]: 2025-10-11 09:15:08.298 2 DEBUG oslo_concurrency.lockutils [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:08 np0005481065 nova_compute[260935]: 2025-10-11 09:15:08.299 2 DEBUG oslo_concurrency.lockutils [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:08 np0005481065 nova_compute[260935]: 2025-10-11 09:15:08.299 2 DEBUG oslo_concurrency.lockutils [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:08 np0005481065 nova_compute[260935]: 2025-10-11 09:15:08.300 2 DEBUG nova.compute.manager [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:15:08 np0005481065 nova_compute[260935]: 2025-10-11 09:15:08.300 2 WARNING nova.compute.manager [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:15:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:15:08 np0005481065 podman[376653]: 2025-10-11 09:15:08.831659746 +0000 UTC m=+0.120994610 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 05:15:08 np0005481065 podman[376654]: 2025-10-11 09:15:08.871011536 +0000 UTC m=+0.158709764 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:15:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.424 2 INFO nova.compute.manager [None req-dfbaa716-6f44-48a1-ac51-f0fa18a5e265 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Get console output#033[00m
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.621 2 INFO nova.compute.manager [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Resuming#033[00m
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.622 2 DEBUG nova.objects.instance [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.663 2 DEBUG oslo_concurrency.lockutils [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.664 2 DEBUG oslo_concurrency.lockutils [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:15:09 np0005481065 nova_compute[260935]: 2025-10-11 09:15:09.664 2 DEBUG nova.network.neutron [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:15:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 55 KiB/s wr, 10 op/s
Oct 11 05:15:10 np0005481065 nova_compute[260935]: 2025-10-11 09:15:10.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:11 np0005481065 nova_compute[260935]: 2025-10-11 09:15:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.335 2 DEBUG nova.network.neutron [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.433 2 DEBUG oslo_concurrency.lockutils [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.441 2 DEBUG nova.virt.libvirt.vif [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:14:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:15:05Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.441 2 DEBUG nova.network.os_vif_util [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.443 2 DEBUG nova.network.os_vif_util [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.444 2 DEBUG os_vif [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5b5080-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef5b5080-06, col_values=(('external_ids', {'iface-id': 'ef5b5080-0657-4874-a32f-f79c370fbb6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:4b:2f', 'vm-uuid': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.453 2 INFO os_vif [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06')#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.487 2 DEBUG nova.objects.instance [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'numa_topology' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:15:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 55 KiB/s wr, 10 op/s
Oct 11 05:15:12 np0005481065 kernel: tapef5b5080-06: entered promiscuous mode
Oct 11 05:15:12 np0005481065 NetworkManager[44960]: <info>  [1760174112.6353] manager: (tapef5b5080-06): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Oct 11 05:15:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:12Z|01042|binding|INFO|Claiming lport ef5b5080-0657-4874-a32f-f79c370fbb6f for this chassis.
Oct 11 05:15:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:12Z|01043|binding|INFO|ef5b5080-0657-4874-a32f-f79c370fbb6f: Claiming fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.651 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.654 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 bound to our chassis#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.656 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89#033[00m
Oct 11 05:15:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:12Z|01044|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f ovn-installed in OVS
Oct 11 05:15:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:12Z|01045|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f up in Southbound
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:12 np0005481065 nova_compute[260935]: 2025-10-11 09:15:12.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.677 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55d9757e-1d1f-4b80-92c4-15b964a5ef15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.680 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b2dcf8f-21 in ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.683 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b2dcf8f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.683 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[014ddf90-030c-422f-8add-783023c9b52a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d04ff11a-3cda-4a71-a269-a48a041aa455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 systemd-udevd[376714]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.701 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[82411115-b2e1-4e0c-9db7-d70c5077c07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 systemd-machined[215705]: New machine qemu-130-instance-0000006b.
Oct 11 05:15:12 np0005481065 NetworkManager[44960]: <info>  [1760174112.7154] device (tapef5b5080-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:15:12 np0005481065 NetworkManager[44960]: <info>  [1760174112.7162] device (tapef5b5080-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee86388-624c-4159-a785-87ff731a637b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 systemd[1]: Started Virtual Machine qemu-130-instance-0000006b.
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.787 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9417bd50-1bfb-43bd-ac4f-f78f9ba9e06d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 NetworkManager[44960]: <info>  [1760174112.7985] manager: (tap4b2dcf8f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[194392ea-9d09-4c52-94f4-e643f00d537d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.860 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6848b532-e9c2-4d78-9475-9d50ad45593a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.865 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a634aa-e2b8-49ec-accc-2cdd7765c5be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 NetworkManager[44960]: <info>  [1760174112.9047] device (tap4b2dcf8f-20): carrier: link connected
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.915 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[38c6b1bd-f979-4b21-8c0f-f6fba2a9fa9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.943 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb53c70-25a1-4ae6-9eca-39a349f9d327]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595760, 'reachable_time': 33957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376746, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.962 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75dd6ad9-03cb-4817-9d95-c55825831f99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:4874'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595760, 'tstamp': 595760}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376747, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.985 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d18dc929-d042-4f02-af23-480e219e1606]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595760, 'reachable_time': 33957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376748, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.013 2 DEBUG nova.compute.manager [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.013 2 DEBUG oslo_concurrency.lockutils [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.014 2 DEBUG oslo_concurrency.lockutils [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.015 2 DEBUG oslo_concurrency.lockutils [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.015 2 DEBUG nova.compute.manager [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.016 2 WARNING nova.compute.manager [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state suspended and task_state resuming.#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9f21a3-55da-4208-8006-5a2a0693513a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2461dad8-1e9e-4801-a2eb-d4159b2c859e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.123 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.124 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.124 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b2dcf8f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:13 np0005481065 kernel: tap4b2dcf8f-20: entered promiscuous mode
Oct 11 05:15:13 np0005481065 NetworkManager[44960]: <info>  [1760174113.1294] manager: (tap4b2dcf8f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.138 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b2dcf8f-20, col_values=(('external_ids', {'iface-id': '0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:13Z|01046|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.169 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.171 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fdf880-c951-4c97-bb30-85960a4a67a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.172 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:15:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.173 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'env', 'PROCESS_TAG=haproxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:15:13 np0005481065 podman[376823]: 2025-10-11 09:15:13.600538165 +0000 UTC m=+0.043887206 container create 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:15:13 np0005481065 systemd[1]: Started libpod-conmon-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope.
Oct 11 05:15:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44778f1e684e02e2c766e9adf76da6209b32f1329c4249ef2256f9967f26385/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:13 np0005481065 podman[376823]: 2025-10-11 09:15:13.673139844 +0000 UTC m=+0.116488895 container init 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:15:13 np0005481065 podman[376823]: 2025-10-11 09:15:13.578272438 +0000 UTC m=+0.021621499 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:15:13 np0005481065 podman[376823]: 2025-10-11 09:15:13.682968356 +0000 UTC m=+0.126317397 container start 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:15:13 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : New worker (376846) forked
Oct 11 05:15:13 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : Loading success.
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.919 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.920 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174113.9191809, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.921 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Started (Lifecycle Event)#033[00m
Oct 11 05:15:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.949 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.972 2 DEBUG nova.compute.manager [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.973 2 DEBUG nova.objects.instance [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.976 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.996 2 INFO nova.virt.libvirt.driver [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance running successfully.#033[00m
Oct 11 05:15:13 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:15:13 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.999 2 DEBUG nova.virt.libvirt.guest [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:13.999 2 DEBUG nova.compute.manager [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.002 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.003 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174113.9242215, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.003 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.032 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.038 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.071 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:15:14 np0005481065 nova_compute[260935]: 2025-10-11 09:15:14.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.094 2 DEBUG nova.compute.manager [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.095 2 DEBUG oslo_concurrency.lockutils [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.095 2 DEBUG oslo_concurrency.lockutils [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.096 2 DEBUG oslo_concurrency.lockutils [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.096 2 DEBUG nova.compute.manager [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.097 2 WARNING nova.compute.manager [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state active and task_state None.#033[00m
Oct 11 05:15:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:15.211 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:15 np0005481065 nova_compute[260935]: 2025-10-11 09:15:15.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Oct 11 05:15:17 np0005481065 nova_compute[260935]: 2025-10-11 09:15:17.147 2 INFO nova.compute.manager [None req-056281e6-4d65-4f3f-8017-5989b5f18f6b a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Get console output#033[00m
Oct 11 05:15:17 np0005481065 nova_compute[260935]: 2025-10-11 09:15:17.154 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.313 2 DEBUG nova.compute.manager [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.314 2 DEBUG nova.compute.manager [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing instance network info cache due to event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.315 2 DEBUG oslo_concurrency.lockutils [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.315 2 DEBUG oslo_concurrency.lockutils [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.315 2 DEBUG nova.network.neutron [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.511 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.512 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.512 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.513 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.514 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.516 2 INFO nova.compute.manager [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Terminating instance#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.518 2 DEBUG nova.compute.manager [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:15:18 np0005481065 kernel: tapef5b5080-06 (unregistering): left promiscuous mode
Oct 11 05:15:18 np0005481065 NetworkManager[44960]: <info>  [1760174118.5833] device (tapef5b5080-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:15:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:18Z|01047|binding|INFO|Releasing lport ef5b5080-0657-4874-a32f-f79c370fbb6f from this chassis (sb_readonly=0)
Oct 11 05:15:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:18Z|01048|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f down in Southbound
Oct 11 05:15:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:18Z|01049|binding|INFO|Removing iface tapef5b5080-06 ovn-installed in OVS
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.613 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.615 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 unbound from our chassis#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.618 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.620 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[943efbc0-eff3-476b-9c74-2863ae3c4807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.620 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace which is not needed anymore#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 11 05:15:18 np0005481065 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d0000006b.scope: Consumed 1.329s CPU time.
Oct 11 05:15:18 np0005481065 systemd-machined[215705]: Machine qemu-130-instance-0000006b terminated.
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.758 2 INFO nova.virt.libvirt.driver [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance destroyed successfully.#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.760 2 DEBUG nova.objects.instance [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.783 2 DEBUG nova.virt.libvirt.vif [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:14:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:15:14Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.785 2 DEBUG nova.network.os_vif_util [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.787 2 DEBUG nova.network.os_vif_util [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.787 2 DEBUG os_vif [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef5b5080-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.798 2 INFO os_vif [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06')#033[00m
Oct 11 05:15:18 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : haproxy version is 2.8.14-c23fe91
Oct 11 05:15:18 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : path to executable is /usr/sbin/haproxy
Oct 11 05:15:18 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [WARNING]  (376844) : Exiting Master process...
Oct 11 05:15:18 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [WARNING]  (376844) : Exiting Master process...
Oct 11 05:15:18 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [ALERT]    (376844) : Current worker (376846) exited with code 143 (Terminated)
Oct 11 05:15:18 np0005481065 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [WARNING]  (376844) : All workers exited. Exiting... (0)
Oct 11 05:15:18 np0005481065 systemd[1]: libpod-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope: Deactivated successfully.
Oct 11 05:15:18 np0005481065 conmon[376840]: conmon 758743012bf5f9089c8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope/container/memory.events
Oct 11 05:15:18 np0005481065 podman[376880]: 2025-10-11 09:15:18.813292431 +0000 UTC m=+0.062355467 container died 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 05:15:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda-userdata-shm.mount: Deactivated successfully.
Oct 11 05:15:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f44778f1e684e02e2c766e9adf76da6209b32f1329c4249ef2256f9967f26385-merged.mount: Deactivated successfully.
Oct 11 05:15:18 np0005481065 podman[376880]: 2025-10-11 09:15:18.874136565 +0000 UTC m=+0.123199581 container cleanup 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:15:18 np0005481065 systemd[1]: libpod-conmon-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope: Deactivated successfully.
Oct 11 05:15:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:18 np0005481065 podman[376933]: 2025-10-11 09:15:18.96285176 +0000 UTC m=+0.050667083 container remove 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.972 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea1ce45-df02-4694-aab1-5fae9ae28d92]: (4, ('Sat Oct 11 09:15:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda)\n758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda\nSat Oct 11 09:15:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda)\n758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.974 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[badcebe3-10f9-4b07-aed0-5483083db895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.976 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:18 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:18 np0005481065 kernel: tap4b2dcf8f-20: left promiscuous mode
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:18.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06d451dc-f2cd-4771-a2c9-e2c64806ea96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f74e57-ea1a-43b2-9a6c-6adc9d2e7c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba8542e-c414-4f15-89a5-bc9bbd0bd329]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.055 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb30987-a532-4ca4-8b9a-a13eac69c60e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595747, 'reachable_time': 20292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376945, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.058 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:15:19 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.058 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5508e2dc-ea4c-4592-9e3b-a6964add191b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:19 np0005481065 systemd[1]: run-netns-ovnmeta\x2d4b2dcf8f\x2d2d20\x2d4207\x2d8c49\x2dd03814ca4c89.mount: Deactivated successfully.
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:19.256 2 INFO nova.virt.libvirt.driver [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deleting instance files /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_del#033[00m
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:19.257 2 INFO nova.virt.libvirt.driver [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deletion of /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_del complete#033[00m
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:19.355 2 INFO nova.compute.manager [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:19.356 2 DEBUG oslo.service.loopingcall [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:19.361 2 DEBUG nova.compute.manager [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:15:19 np0005481065 nova_compute[260935]: 2025-10-11 09:15:19.361 2 DEBUG nova.network.neutron [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.250 2 DEBUG nova.network.neutron [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.273 2 INFO nova.compute.manager [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 0.91 seconds to deallocate network for instance.#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.299 2 DEBUG nova.network.neutron [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updated VIF entry in instance network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.300 2 DEBUG nova.network.neutron [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.370 2 DEBUG oslo_concurrency.lockutils [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.376 2 DEBUG nova.compute.manager [req-af268778-64de-40f4-8843-eac9115a3a2d req-6036156c-ae43-4249-8013-20282b63bdf2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-deleted-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.378 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.378 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.467 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.467 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.468 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.468 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.469 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.469 2 WARNING nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.469 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.470 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.470 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.471 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.471 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.471 2 WARNING nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.556 2 DEBUG oslo_concurrency.processutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 11 05:15:20 np0005481065 nova_compute[260935]: 2025-10-11 09:15:20.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:15:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364576966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:15:21 np0005481065 nova_compute[260935]: 2025-10-11 09:15:21.032 2 DEBUG oslo_concurrency.processutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:21 np0005481065 nova_compute[260935]: 2025-10-11 09:15:21.041 2 DEBUG nova.compute.provider_tree [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:15:21 np0005481065 nova_compute[260935]: 2025-10-11 09:15:21.063 2 DEBUG nova.scheduler.client.report [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:15:21 np0005481065 nova_compute[260935]: 2025-10-11 09:15:21.099 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:21 np0005481065 nova_compute[260935]: 2025-10-11 09:15:21.179 2 INFO nova.scheduler.client.report [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance dd4f6627-2bd4-4ad6-8e1e-c5bec1228096#033[00m
Oct 11 05:15:21 np0005481065 nova_compute[260935]: 2025-10-11 09:15:21.267 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 358 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Oct 11 05:15:23 np0005481065 nova_compute[260935]: 2025-10-11 09:15:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:23 np0005481065 nova_compute[260935]: 2025-10-11 09:15:23.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:24Z|01050|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:15:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:24Z|01051|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:15:24 np0005481065 nova_compute[260935]: 2025-10-11 09:15:24.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 05:15:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:24Z|01052|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:15:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:24Z|01053|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:15:24 np0005481065 nova_compute[260935]: 2025-10-11 09:15:24.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:24 np0005481065 nova_compute[260935]: 2025-10-11 09:15:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:15:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:15:25 np0005481065 nova_compute[260935]: 2025-10-11 09:15:25.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 05:15:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:15:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071036871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:15:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:15:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071036871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:15:26 np0005481065 nova_compute[260935]: 2025-10-11 09:15:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:26 np0005481065 nova_compute[260935]: 2025-10-11 09:15:26.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:15:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:27.103 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:15:27 np0005481065 nova_compute[260935]: 2025-10-11 09:15:27.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:27.105 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:15:27 np0005481065 nova_compute[260935]: 2025-10-11 09:15:27.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:27 np0005481065 nova_compute[260935]: 2025-10-11 09:15:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:27 np0005481065 podman[376973]: 2025-10-11 09:15:27.808228965 +0000 UTC m=+0.098235700 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 05:15:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 05:15:28 np0005481065 nova_compute[260935]: 2025-10-11 09:15:28.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.876 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.876 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.907 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.908 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.941 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.942 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.943 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:15:30 np0005481065 nova_compute[260935]: 2025-10-11 09:15:30.945 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:15:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59155469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.447 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.560 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.560 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.561 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.567 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.568 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.573 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.574 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.901 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.904 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3016MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.905 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:31 np0005481065 nova_compute[260935]: 2025-10-11 09:15:31.906 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.020 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.020 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.021 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.021 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.022 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.107 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:15:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/836855172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.577 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.584 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:15:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.608 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.636 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:15:32 np0005481065 nova_compute[260935]: 2025-10-11 09:15:32.637 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:33.107 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:33 np0005481065 nova_compute[260935]: 2025-10-11 09:15:33.433 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:15:33 np0005481065 nova_compute[260935]: 2025-10-11 09:15:33.757 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174118.7554939, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:15:33 np0005481065 nova_compute[260935]: 2025-10-11 09:15:33.757 2 INFO nova.compute.manager [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:15:33 np0005481065 nova_compute[260935]: 2025-10-11 09:15:33.786 2 DEBUG nova.compute.manager [None req-68e62540-cac9-4fbd-983b-35747801efd6 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:33 np0005481065 nova_compute[260935]: 2025-10-11 09:15:33.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:33 np0005481065 podman[377037]: 2025-10-11 09:15:33.803730804 +0000 UTC m=+0.097063538 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:15:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 9 op/s
Oct 11 05:15:35 np0005481065 nova_compute[260935]: 2025-10-11 09:15:35.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:38 np0005481065 nova_compute[260935]: 2025-10-11 09:15:38.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:39 np0005481065 systemd[1]: Starting dnf makecache...
Oct 11 05:15:39 np0005481065 podman[377057]: 2025-10-11 09:15:39.79802885 +0000 UTC m=+0.091535364 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:15:39 np0005481065 podman[377058]: 2025-10-11 09:15:39.809378105 +0000 UTC m=+0.102934021 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:15:40 np0005481065 dnf[377059]: Metadata cache refreshed recently.
Oct 11 05:15:40 np0005481065 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 11 05:15:40 np0005481065 systemd[1]: Finished dnf makecache.
Oct 11 05:15:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:40 np0005481065 nova_compute[260935]: 2025-10-11 09:15:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:43 np0005481065 nova_compute[260935]: 2025-10-11 09:15:43.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:45 np0005481065 nova_compute[260935]: 2025-10-11 09:15:45.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:48 np0005481065 nova_compute[260935]: 2025-10-11 09:15:48.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:50 np0005481065 nova_compute[260935]: 2025-10-11 09:15:50.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.612 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.613 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.647 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.772 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.773 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.785 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.786 2 INFO nova.compute.claims [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:15:51 np0005481065 nova_compute[260935]: 2025-10-11 09:15:51.976 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:15:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89263273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.471 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.481 2 DEBUG nova.compute.provider_tree [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.508 2 DEBUG nova.scheduler.client.report [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.547 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.548 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:15:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.612 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.613 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.635 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.655 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.785 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.787 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.788 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Creating image(s)
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.815 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.849 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.884 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.890 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.942 2 DEBUG nova.policy [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.988 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.988 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.990 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:15:52 np0005481065 nova_compute[260935]: 2025-10-11 09:15:52.991 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.021 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.027 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8354434f-daa4-4745-9755-bd2465f5459b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.342 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8354434f-daa4-4745-9755-bd2465f5459b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.422 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.519 2 DEBUG nova.objects.instance [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 8354434f-daa4-4745-9755-bd2465f5459b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.536 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.537 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Ensure instance console log exists: /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.537 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.538 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.538 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:15:53 np0005481065 nova_compute[260935]: 2025-10-11 09:15:53.926 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Successfully created port: cabc92ec-b61c-42d8-80b5-444ec773a568 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:15:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:15:54 np0005481065 nova_compute[260935]: 2025-10-11 09:15:54.852 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Successfully updated port: cabc92ec-b61c-42d8-80b5-444ec773a568 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:15:54 np0005481065 nova_compute[260935]: 2025-10-11 09:15:54.869 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:15:54 np0005481065 nova_compute[260935]: 2025-10-11 09:15:54.869 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:15:54 np0005481065 nova_compute[260935]: 2025-10-11 09:15:54.869 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:15:54
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Oct 11 05:15:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:15:55 np0005481065 nova_compute[260935]: 2025-10-11 09:15:55.004 2 DEBUG nova.compute.manager [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:15:55 np0005481065 nova_compute[260935]: 2025-10-11 09:15:55.004 2 DEBUG nova.compute.manager [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing instance network info cache due to event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:15:55 np0005481065 nova_compute[260935]: 2025-10-11 09:15:55.005 2 DEBUG oslo_concurrency.lockutils [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:15:55 np0005481065 nova_compute[260935]: 2025-10-11 09:15:55.125 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:15:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:15:55 np0005481065 nova_compute[260935]: 2025-10-11 09:15:55.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.561 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.585 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.585 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance network_info: |[{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.586 2 DEBUG oslo_concurrency.lockutils [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.587 2 DEBUG nova.network.neutron [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.592 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start _get_guest_xml network_info=[{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.600 2 WARNING nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:15:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.616 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.617 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.622 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.623 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.624 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.624 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.625 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.626 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.626 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.627 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.628 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.628 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.629 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.629 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.630 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.630 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
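The sequence above shows Nova enumerating CPU topologies for a 1-vCPU flavor with no flavor or image constraints (preferred 0:0:0, maxima 65536:65536:65536), arriving at the single candidate 1:1:1. The core of that constraint walk can be sketched roughly as follows — a simplified illustration of the idea, not Nova's actual `_get_possible_cpu_topologies` implementation:

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals the
    vCPU count, within per-dimension maxima -- a simplified sketch of the
    candidate enumeration the log shows nova.virt.hardware performing."""
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                # Only combinations that exactly account for every vCPU qualify
                if sockets * cores * threads == vcpus:
                    topologies.append((sockets, cores, threads))
    return topologies
```

For `vcpus=1` this yields only `(1, 1, 1)`, matching the "Got 1 possible topologies" line; Nova then sorts candidates by its preference rules before choosing.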
Oct 11 05:15:56 np0005481065 nova_compute[260935]: 2025-10-11 09:15:56.635 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:15:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569846386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.149 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.177 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.182 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:15:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620990903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.648 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.651 2 DEBUG nova.virt.libvirt.vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-901907022',display_name='tempest-TestNetworkBasicOps-server-901907022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-901907022',id=108,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNSUDuwrUo8mPnl46KJgqVMBI/oUdDQJiZKLg3PcXdEoEbBCfynVBzjq5RcgXvWBbi7yER9lhbMWC7+xjyS7CKL6WdpVIbjuziOiJFncdqOQt0YKcX1oXGLKj/54Eha0rg==',key_name='tempest-TestNetworkBasicOps-1207558742',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-svb3ck8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:15:52Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=8354434f-daa4-4745-9755-bd2465f5459b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.652 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.654 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.656 2 DEBUG nova.objects.instance [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8354434f-daa4-4745-9755-bd2465f5459b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.682 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <uuid>8354434f-daa4-4745-9755-bd2465f5459b</uuid>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <name>instance-0000006c</name>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-901907022</nova:name>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:15:56</nova:creationTime>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <nova:port uuid="cabc92ec-b61c-42d8-80b5-444ec773a568">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <entry name="serial">8354434f-daa4-4745-9755-bd2465f5459b</entry>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <entry name="uuid">8354434f-daa4-4745-9755-bd2465f5459b</entry>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8354434f-daa4-4745-9755-bd2465f5459b_disk">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8354434f-daa4-4745-9755-bd2465f5459b_disk.config">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ec:41:56"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <target dev="tapcabc92ec-b6"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/console.log" append="off"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:15:57 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:15:57 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:15:57 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:15:57 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
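The domain XML dumped above embeds Nova's instance metadata under the `http://openstack.org/xmlns/libvirt/nova/1.1` namespace. A minimal sketch of pulling the flavor and name back out of such a dump with Python's standard `xml.etree.ElementTree` (the sample document is abbreviated from the XML in the log; the `instance_summary` helper name is illustrative, not a Nova API):

```python
import xml.etree.ElementTree as ET

# Namespace prefix mapping for the nova:instance metadata block
NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

def instance_summary(domain_xml):
    """Extract a few Nova metadata fields from a libvirt domain XML dump."""
    root = ET.fromstring(domain_xml)
    inst = root.find("metadata/nova:instance", NOVA_NS)
    flavor = inst.find("nova:flavor", NOVA_NS)
    return {
        "name": inst.find("nova:name", NOVA_NS).text,
        "flavor": flavor.get("name"),
        "memory_mb": int(flavor.find("nova:memory", NOVA_NS).text),
        "vcpus": int(flavor.find("nova:vcpus", NOVA_NS).text),
    }

# Abbreviated example mirroring the <metadata> section in the log above
SAMPLE = """<domain type="kvm">
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:name>tempest-TestNetworkBasicOps-server-901907022</nova:name>
      <nova:flavor name="m1.nano">
        <nova:memory>128</nova:memory>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
    </nova:instance>
  </metadata>
</domain>"""
```

The same approach works on a live host against `virsh dumpxml <domain>` output, since libvirt preserves the metadata block verbatim.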
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.685 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Preparing to wait for external event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.686 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.686 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.687 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.689 2 DEBUG nova.virt.libvirt.vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-901907022',display_name='tempest-TestNetworkBasicOps-server-901907022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-901907022',id=108,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNSUDuwrUo8mPnl46KJgqVMBI/oUdDQJiZKLg3PcXdEoEbBCfynVBzjq5RcgXvWBbi7yER9lhbMWC7+xjyS7CKL6WdpVIbjuziOiJFncdqOQt0YKcX1oXGLKj/54Eha0rg==',key_name='tempest-TestNetworkBasicOps-1207558742',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-svb3ck8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:15:52Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=8354434f-daa4-4745-9755-bd2465f5459b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.690 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.691 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.692 2 DEBUG os_vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcabc92ec-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcabc92ec-b6, col_values=(('external_ids', {'iface-id': 'cabc92ec-b61c-42d8-80b5-444ec773a568', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:41:56', 'vm-uuid': '8354434f-daa4-4745-9755-bd2465f5459b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:57 np0005481065 NetworkManager[44960]: <info>  [1760174157.7065] manager: (tapcabc92ec-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.717 2 INFO os_vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6')#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.779 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.780 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.780 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:ec:41:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.781 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Using config drive#033[00m
Oct 11 05:15:57 np0005481065 nova_compute[260935]: 2025-10-11 09:15:57.817 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.328 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Creating config drive at /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config#033[00m
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.333 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87o6q4cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.483 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87o6q4cu" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.511 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.517 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config 8354434f-daa4-4745-9755-bd2465f5459b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:15:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.721 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config 8354434f-daa4-4745-9755-bd2465f5459b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.723 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deleting local config drive /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config because it was imported into RBD.#033[00m
Oct 11 05:15:58 np0005481065 podman[377414]: 2025-10-11 09:15:58.782702558 +0000 UTC m=+0.079972265 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:15:58 np0005481065 kernel: tapcabc92ec-b6: entered promiscuous mode
Oct 11 05:15:58 np0005481065 NetworkManager[44960]: <info>  [1760174158.7882] manager: (tapcabc92ec-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct 11 05:15:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:58Z|01054|binding|INFO|Claiming lport cabc92ec-b61c-42d8-80b5-444ec773a568 for this chassis.
Oct 11 05:15:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:58Z|01055|binding|INFO|cabc92ec-b61c-42d8-80b5-444ec773a568: Claiming fa:16:3e:ec:41:56 10.100.0.8
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.802 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:41:56 10.100.0.8'], port_security=['fa:16:3e:ec:41:56 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8354434f-daa4-4745-9755-bd2465f5459b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '437c3d45-aba0-41e1-8469-ba1a8b534642', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94b69aa0-e5aa-4772-b64f-f38dc39ac5a0, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=cabc92ec-b61c-42d8-80b5-444ec773a568) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.803 162815 INFO neutron.agent.ovn.metadata.agent [-] Port cabc92ec-b61c-42d8-80b5-444ec773a568 in datapath 3f56857a-dc0a-4d4f-92ae-d6806961b854 bound to our chassis#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.804 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f56857a-dc0a-4d4f-92ae-d6806961b854#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.822 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[929a3f1c-643c-4c0d-86e4-60b31786f42f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.823 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f56857a-d1 in ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.825 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f56857a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f7198e72-6a2b-499e-a741-941e50c40e0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cb2a26-c651-4965-a237-3ba9d4a77230]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 systemd-udevd[377446]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:15:58 np0005481065 systemd-machined[215705]: New machine qemu-131-instance-0000006c.
Oct 11 05:15:58 np0005481065 NetworkManager[44960]: <info>  [1760174158.8452] device (tapcabc92ec-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:15:58 np0005481065 NetworkManager[44960]: <info>  [1760174158.8464] device (tapcabc92ec-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.844 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[7204d881-8fcc-4997-b0ec-aab2a4b5cf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 systemd[1]: Started Virtual Machine qemu-131-instance-0000006c.
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.861 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4e5495-3b2d-4536-9a24-82ef4c484504]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:58Z|01056|binding|INFO|Setting lport cabc92ec-b61c-42d8-80b5-444ec773a568 ovn-installed in OVS
Oct 11 05:15:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:58Z|01057|binding|INFO|Setting lport cabc92ec-b61c-42d8-80b5-444ec773a568 up in Southbound
Oct 11 05:15:58 np0005481065 nova_compute[260935]: 2025-10-11 09:15:58.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.893 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[43f63a93-6baf-487e-b193-c7f2d51d270f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 NetworkManager[44960]: <info>  [1760174158.9001] manager: (tap3f56857a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/434)
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.898 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3268674-2298-42d6-95dc-e0984102c207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5c79f6-d00e-4492-b8a8-ac8424039b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.939 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc21416-e600-42df-99b8-d255e4fcfae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 NetworkManager[44960]: <info>  [1760174158.9621] device (tap3f56857a-d0): carrier: link connected
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.967 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4ace29-cb32-42a7-b65e-62e5df464475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb9a21b-5568-4e88-b899-d07292af3669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f56857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:81:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600366, 'reachable_time': 27415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377479, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.004 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba283a04-7e26-4ebd-9638-c6494bcb62a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:81b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600366, 'tstamp': 600366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377480, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3402dc0f-75c1-4102-bc65-862ba15d6c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f56857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:81:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600366, 'reachable_time': 27415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377481, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70a0ad37-8d8a-46e6-9dfb-360794e389f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.120 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b77161e0-70ca-4526-ad71-7ac4954058cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.122 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f56857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.122 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.122 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f56857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:59 np0005481065 NetworkManager[44960]: <info>  [1760174159.1246] manager: (tap3f56857a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Oct 11 05:15:59 np0005481065 kernel: tap3f56857a-d0: entered promiscuous mode
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.129 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f56857a-d0, col_values=(('external_ids', {'iface-id': '58ef9b4a-8b66-4d5d-ac05-f694b2a9b216'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:15:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:15:59Z|01058|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.134 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f56857a-dc0a-4d4f-92ae-d6806961b854.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f56857a-dc0a-4d4f-92ae-d6806961b854.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.134 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c813ae73-b210-41a5-abbb-b18d8d00ae36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.135 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-3f56857a-dc0a-4d4f-92ae-d6806961b854
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/3f56857a-dc0a-4d4f-92ae-d6806961b854.pid.haproxy
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 3f56857a-dc0a-4d4f-92ae-d6806961b854
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:15:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.137 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'env', 'PROCESS_TAG=haproxy-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f56857a-dc0a-4d4f-92ae-d6806961b854.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.240 2 DEBUG nova.network.neutron [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updated VIF entry in instance network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.241 2 DEBUG nova.network.neutron [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.262 2 DEBUG oslo_concurrency.lockutils [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:15:59 np0005481065 podman[377555]: 2025-10-11 09:15:59.504737863 +0000 UTC m=+0.050805917 container create 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 05:15:59 np0005481065 systemd[1]: Started libpod-conmon-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope.
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174159.5637765, 8354434f-daa4-4745-9755-bd2465f5459b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.565 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Started (Lifecycle Event)#033[00m
Oct 11 05:15:59 np0005481065 podman[377555]: 2025-10-11 09:15:59.479077483 +0000 UTC m=+0.025145567 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:15:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.586 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea3fde6a616ac783e0a41154d62589ff39ccb058913321360e01f0736e2297/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.591 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174159.5639136, 8354434f-daa4-4745-9755-bd2465f5459b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.592 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:15:59 np0005481065 podman[377555]: 2025-10-11 09:15:59.604619048 +0000 UTC m=+0.150687152 container init 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 05:15:59 np0005481065 podman[377555]: 2025-10-11 09:15:59.611295643 +0000 UTC m=+0.157363707 container start 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.613 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.618 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:15:59 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : New worker (377577) forked
Oct 11 05:15:59 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : Loading success.
Oct 11 05:15:59 np0005481065 nova_compute[260935]: 2025-10-11 09:15:59.641 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:16:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:16:00 np0005481065 nova_compute[260935]: 2025-10-11 09:16:00.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 11 05:16:02 np0005481065 nova_compute[260935]: 2025-10-11 09:16:02.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:16:04 np0005481065 podman[377589]: 2025-10-11 09:16:04.78727333 +0000 UTC m=+0.082278368 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:16:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.221 2 DEBUG nova.compute.manager [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.221 2 DEBUG oslo_concurrency.lockutils [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.221 2 DEBUG oslo_concurrency.lockutils [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.222 2 DEBUG oslo_concurrency.lockutils [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.222 2 DEBUG nova.compute.manager [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Processing event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.223 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.226 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174165.2262444, 8354434f-daa4-4745-9755-bd2465f5459b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.226 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.229 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.232 2 INFO nova.virt.libvirt.driver [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance spawned successfully.#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.232 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.252 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.258 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.262 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.263 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.263 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.263 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.264 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.264 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.292 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.330 2 INFO nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 12.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.331 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.410 2 INFO nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 13.68 seconds to build instance.#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.427 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:05 np0005481065 nova_compute[260935]: 2025-10-11 09:16:05.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.346 2 DEBUG nova.compute.manager [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.348 2 DEBUG oslo_concurrency.lockutils [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.349 2 DEBUG oslo_concurrency.lockutils [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.349 2 DEBUG oslo_concurrency.lockutils [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.350 2 DEBUG nova.compute.manager [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] No waiting events found dispatching network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.351 2 WARNING nova.compute.manager [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received unexpected event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 for instance with vm_state active and task_state None.
Oct 11 05:16:07 np0005481065 nova_compute[260935]: 2025-10-11 09:16:07.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:16:08 np0005481065 podman[377783]: 2025-10-11 09:16:08.098036469 +0000 UTC m=+0.071930992 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:16:08 np0005481065 podman[377783]: 2025-10-11 09:16:08.212168898 +0000 UTC m=+0.186063431 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:16:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:16:08 np0005481065 nova_compute[260935]: 2025-10-11 09:16:08.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:16:08 np0005481065 NetworkManager[44960]: <info>  [1760174168.7469] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 11 05:16:08 np0005481065 NetworkManager[44960]: <info>  [1760174168.7495] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Oct 11 05:16:08 np0005481065 nova_compute[260935]: 2025-10-11 09:16:08.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:16:08 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:08Z|01059|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 05:16:08 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:08Z|01060|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:16:08 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:08Z|01061|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:16:08 np0005481065 nova_compute[260935]: 2025-10-11 09:16:08.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:16:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:16:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:16:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:09 np0005481065 nova_compute[260935]: 2025-10-11 09:16:09.486 2 DEBUG nova.compute.manager [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:16:09 np0005481065 nova_compute[260935]: 2025-10-11 09:16:09.487 2 DEBUG nova.compute.manager [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing instance network info cache due to event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:16:09 np0005481065 nova_compute[260935]: 2025-10-11 09:16:09.488 2 DEBUG oslo_concurrency.lockutils [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:16:09 np0005481065 nova_compute[260935]: 2025-10-11 09:16:09.489 2 DEBUG oslo_concurrency.lockutils [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:16:09 np0005481065 nova_compute[260935]: 2025-10-11 09:16:09.489 2 DEBUG nova.network.neutron [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:16:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dbac6785-7197-44e2-88f5-426d09db3d69 does not exist
Oct 11 05:16:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c18e0a9c-21bd-443c-a0b6-907040c6e159 does not exist
Oct 11 05:16:10 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 98913d9e-4926-4ae6-8a63-c153a5b44158 does not exist
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:16:10 np0005481065 podman[378094]: 2025-10-11 09:16:10.457398543 +0000 UTC m=+0.104911675 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:16:10 np0005481065 podman[378095]: 2025-10-11 09:16:10.60572086 +0000 UTC m=+0.251482943 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:16:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:10 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:16:10 np0005481065 nova_compute[260935]: 2025-10-11 09:16:10.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:11.025964221 +0000 UTC m=+0.071750887 container create 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:16:11 np0005481065 systemd[1]: Started libpod-conmon-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope.
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:10.995203809 +0000 UTC m=+0.040990535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:16:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:11.125407943 +0000 UTC m=+0.171194669 container init 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:11.133186128 +0000 UTC m=+0.178972774 container start 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:11.13687247 +0000 UTC m=+0.182659146 container attach 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:16:11 np0005481065 relaxed_volhard[378273]: 167 167
Oct 11 05:16:11 np0005481065 systemd[1]: libpod-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope: Deactivated successfully.
Oct 11 05:16:11 np0005481065 conmon[378273]: conmon 63db8bdf6c92ffebbc62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope/container/memory.events
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:11.144211674 +0000 UTC m=+0.189998320 container died 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:16:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1c29e61b4a39f08640b08433077ec266d2e94403b82e3514f818275fdcf19b5f-merged.mount: Deactivated successfully.
Oct 11 05:16:11 np0005481065 podman[378256]: 2025-10-11 09:16:11.197434117 +0000 UTC m=+0.243220773 container remove 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:16:11 np0005481065 systemd[1]: libpod-conmon-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope: Deactivated successfully.
Oct 11 05:16:11 np0005481065 podman[378299]: 2025-10-11 09:16:11.475314238 +0000 UTC m=+0.088770828 container create 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:16:11 np0005481065 podman[378299]: 2025-10-11 09:16:11.433962874 +0000 UTC m=+0.047419444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:16:11 np0005481065 systemd[1]: Started libpod-conmon-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope.
Oct 11 05:16:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:11 np0005481065 podman[378299]: 2025-10-11 09:16:11.616580478 +0000 UTC m=+0.230037128 container init 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:16:11 np0005481065 podman[378299]: 2025-10-11 09:16:11.628079957 +0000 UTC m=+0.241536517 container start 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:16:11 np0005481065 podman[378299]: 2025-10-11 09:16:11.631587544 +0000 UTC m=+0.245044194 container attach 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:16:11 np0005481065 nova_compute[260935]: 2025-10-11 09:16:11.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:16:12 np0005481065 nova_compute[260935]: 2025-10-11 09:16:12.173 2 DEBUG nova.network.neutron [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updated VIF entry in instance network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:16:12 np0005481065 nova_compute[260935]: 2025-10-11 09:16:12.173 2 DEBUG nova.network.neutron [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:16:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:12Z|01062|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 05:16:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:12Z|01063|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:16:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:12Z|01064|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:16:12 np0005481065 nova_compute[260935]: 2025-10-11 09:16:12.208 2 DEBUG oslo_concurrency.lockutils [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:16:12 np0005481065 nova_compute[260935]: 2025-10-11 09:16:12.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:16:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:16:12 np0005481065 nova_compute[260935]: 2025-10-11 09:16:12.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:12 np0005481065 busy_clarke[378316]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:16:12 np0005481065 busy_clarke[378316]: --> relative data size: 1.0
Oct 11 05:16:12 np0005481065 busy_clarke[378316]: --> All data devices are unavailable
Oct 11 05:16:12 np0005481065 systemd[1]: libpod-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope: Deactivated successfully.
Oct 11 05:16:12 np0005481065 systemd[1]: libpod-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope: Consumed 1.052s CPU time.
Oct 11 05:16:12 np0005481065 podman[378299]: 2025-10-11 09:16:12.760339596 +0000 UTC m=+1.373796186 container died 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 05:16:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c-merged.mount: Deactivated successfully.
Oct 11 05:16:12 np0005481065 podman[378299]: 2025-10-11 09:16:12.844977469 +0000 UTC m=+1.458434019 container remove 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 05:16:12 np0005481065 systemd[1]: libpod-conmon-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope: Deactivated successfully.
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.573064861 +0000 UTC m=+0.037794327 container create b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:16:13 np0005481065 systemd[1]: Started libpod-conmon-b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a.scope.
Oct 11 05:16:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.638712308 +0000 UTC m=+0.103441834 container init b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.65031392 +0000 UTC m=+0.115043396 container start b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.556108242 +0000 UTC m=+0.020837728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.654081094 +0000 UTC m=+0.118810660 container attach b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:16:13 np0005481065 naughty_saha[378513]: 167 167
Oct 11 05:16:13 np0005481065 systemd[1]: libpod-b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a.scope: Deactivated successfully.
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.655675548 +0000 UTC m=+0.120405014 container died b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:16:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ec8f18c7e3465ab83c7dd6e1c7f90421731946949830374c6bc51214b00f7bee-merged.mount: Deactivated successfully.
Oct 11 05:16:13 np0005481065 podman[378497]: 2025-10-11 09:16:13.691310344 +0000 UTC m=+0.156039820 container remove b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:16:13 np0005481065 systemd[1]: libpod-conmon-b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a.scope: Deactivated successfully.
Oct 11 05:16:13 np0005481065 podman[378536]: 2025-10-11 09:16:13.890374024 +0000 UTC m=+0.052827013 container create f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:16:13 np0005481065 systemd[1]: Started libpod-conmon-f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486.scope.
Oct 11 05:16:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:13 np0005481065 podman[378536]: 2025-10-11 09:16:13.862330608 +0000 UTC m=+0.024783617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:16:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:13 np0005481065 podman[378536]: 2025-10-11 09:16:13.991288357 +0000 UTC m=+0.153741296 container init f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:16:14 np0005481065 podman[378536]: 2025-10-11 09:16:14.007490516 +0000 UTC m=+0.169943495 container start f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:16:14 np0005481065 podman[378536]: 2025-10-11 09:16:14.01160899 +0000 UTC m=+0.174061929 container attach f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:16:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]: {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:    "0": [
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:        {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "devices": [
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "/dev/loop3"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            ],
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_name": "ceph_lv0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_size": "21470642176",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "name": "ceph_lv0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "tags": {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cluster_name": "ceph",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.crush_device_class": "",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.encrypted": "0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osd_id": "0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.type": "block",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.vdo": "0"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            },
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "type": "block",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "vg_name": "ceph_vg0"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:        }
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:    ],
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:    "1": [
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:        {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "devices": [
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "/dev/loop4"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            ],
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_name": "ceph_lv1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_size": "21470642176",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "name": "ceph_lv1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "tags": {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cluster_name": "ceph",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.crush_device_class": "",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.encrypted": "0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osd_id": "1",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.type": "block",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.vdo": "0"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            },
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "type": "block",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "vg_name": "ceph_vg1"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:        }
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:    ],
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:    "2": [
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:        {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "devices": [
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "/dev/loop5"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            ],
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_name": "ceph_lv2",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_size": "21470642176",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "name": "ceph_lv2",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "tags": {
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.cluster_name": "ceph",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.crush_device_class": "",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.encrypted": "0",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osd_id": "2",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.type": "block",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:                "ceph.vdo": "0"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            },
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "type": "block",
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:            "vg_name": "ceph_vg2"
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:        }
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]:    ]
Oct 11 05:16:14 np0005481065 infallible_rubin[378553]: }
Oct 11 05:16:14 np0005481065 systemd[1]: libpod-f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486.scope: Deactivated successfully.
Oct 11 05:16:14 np0005481065 podman[378562]: 2025-10-11 09:16:14.941681804 +0000 UTC m=+0.071461889 container died f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:16:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a-merged.mount: Deactivated successfully.
Oct 11 05:16:15 np0005481065 podman[378562]: 2025-10-11 09:16:15.007419473 +0000 UTC m=+0.137199478 container remove f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:16:15 np0005481065 systemd[1]: libpod-conmon-f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486.scope: Deactivated successfully.
Oct 11 05:16:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:15.211 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:15.213 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:15 np0005481065 podman[378716]: 2025-10-11 09:16:15.944414567 +0000 UTC m=+0.056982328 container create 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:16:15 np0005481065 nova_compute[260935]: 2025-10-11 09:16:15.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:15 np0005481065 systemd[1]: Started libpod-conmon-0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f.scope.
Oct 11 05:16:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:16 np0005481065 podman[378716]: 2025-10-11 09:16:15.92571322 +0000 UTC m=+0.038281001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:16:16 np0005481065 podman[378716]: 2025-10-11 09:16:16.03987375 +0000 UTC m=+0.152441521 container init 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:16:16 np0005481065 podman[378716]: 2025-10-11 09:16:16.048961111 +0000 UTC m=+0.161528872 container start 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 05:16:16 np0005481065 podman[378716]: 2025-10-11 09:16:16.052451668 +0000 UTC m=+0.165019469 container attach 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:16:16 np0005481065 agitated_mayer[378732]: 167 167
Oct 11 05:16:16 np0005481065 systemd[1]: libpod-0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f.scope: Deactivated successfully.
Oct 11 05:16:16 np0005481065 podman[378716]: 2025-10-11 09:16:16.055752029 +0000 UTC m=+0.168319790 container died 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 11 05:16:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1189a7b68178d54d997bff7cc8fdcb847acf3b81d34b192383ed19ba71faaa5e-merged.mount: Deactivated successfully.
Oct 11 05:16:16 np0005481065 podman[378716]: 2025-10-11 09:16:16.09299754 +0000 UTC m=+0.205565301 container remove 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:16:16 np0005481065 systemd[1]: libpod-conmon-0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f.scope: Deactivated successfully.
Oct 11 05:16:16 np0005481065 podman[378758]: 2025-10-11 09:16:16.288047819 +0000 UTC m=+0.044474282 container create f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:16:16 np0005481065 systemd[1]: Started libpod-conmon-f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5.scope.
Oct 11 05:16:16 np0005481065 podman[378758]: 2025-10-11 09:16:16.27147215 +0000 UTC m=+0.027898633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:16:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:16 np0005481065 podman[378758]: 2025-10-11 09:16:16.407753482 +0000 UTC m=+0.164179965 container init f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:16:16 np0005481065 podman[378758]: 2025-10-11 09:16:16.419168268 +0000 UTC m=+0.175594731 container start f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:16:16 np0005481065 podman[378758]: 2025-10-11 09:16:16.422215203 +0000 UTC m=+0.178641666 container attach f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:16:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]: {
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "osd_id": 2,
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "type": "bluestore"
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:    },
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "osd_id": 0,
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "type": "bluestore"
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:    },
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "osd_id": 1,
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:        "type": "bluestore"
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]:    }
Oct 11 05:16:17 np0005481065 reverent_sutherland[378774]: }
Oct 11 05:16:17 np0005481065 systemd[1]: libpod-f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5.scope: Deactivated successfully.
Oct 11 05:16:17 np0005481065 podman[378758]: 2025-10-11 09:16:17.368747622 +0000 UTC m=+1.125174125 container died f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:16:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1-merged.mount: Deactivated successfully.
Oct 11 05:16:17 np0005481065 podman[378758]: 2025-10-11 09:16:17.423007494 +0000 UTC m=+1.179433967 container remove f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:16:17 np0005481065 systemd[1]: libpod-conmon-f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5.scope: Deactivated successfully.
Oct 11 05:16:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:16:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:16:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c9492410-593d-4f69-9238-9bad0a9c9580 does not exist
Oct 11 05:16:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c56147f0-4ebf-4b49-8ca3-61f24c511ef6 does not exist
Oct 11 05:16:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:16:17 np0005481065 nova_compute[260935]: 2025-10-11 09:16:17.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:17 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:17Z|01065|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 05:16:17 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:17Z|01066|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:16:17 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:17Z|01067|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:16:17 np0005481065 nova_compute[260935]: 2025-10-11 09:16:17.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:18Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:41:56 10.100.0.8
Oct 11 05:16:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:18Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:41:56 10.100.0.8
Oct 11 05:16:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Oct 11 05:16:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 11 05:16:20 np0005481065 nova_compute[260935]: 2025-10-11 09:16:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:16:22 np0005481065 nova_compute[260935]: 2025-10-11 09:16:22.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:16:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:16:24 np0005481065 nova_compute[260935]: 2025-10-11 09:16:24.856 2 INFO nova.compute.manager [None req-ed4a3496-921f-4e8d-a19a-35da0aa2d8e0 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Get console output#033[00m
Oct 11 05:16:24 np0005481065 nova_compute[260935]: 2025-10-11 09:16:24.870 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:16:25 np0005481065 nova_compute[260935]: 2025-10-11 09:16:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:25 np0005481065 nova_compute[260935]: 2025-10-11 09:16:25.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:26 np0005481065 nova_compute[260935]: 2025-10-11 09:16:26.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:16:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:16:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767573846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:16:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:16:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767573846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:16:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:27.359 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:16:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:27.360 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:16:27 np0005481065 nova_compute[260935]: 2025-10-11 09:16:27.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:27 np0005481065 nova_compute[260935]: 2025-10-11 09:16:27.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:27 np0005481065 nova_compute[260935]: 2025-10-11 09:16:27.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:16:28 np0005481065 nova_compute[260935]: 2025-10-11 09:16:28.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:28 np0005481065 nova_compute[260935]: 2025-10-11 09:16:28.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:28 np0005481065 nova_compute[260935]: 2025-10-11 09:16:28.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:16:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:29 np0005481065 nova_compute[260935]: 2025-10-11 09:16:29.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:29 np0005481065 podman[378872]: 2025-10-11 09:16:29.795584483 +0000 UTC m=+0.087982127 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:16:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 11 KiB/s wr, 2 op/s
Oct 11 05:16:31 np0005481065 nova_compute[260935]: 2025-10-11 09:16:31.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:31 np0005481065 nova_compute[260935]: 2025-10-11 09:16:31.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:31.362 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:31 np0005481065 nova_compute[260935]: 2025-10-11 09:16:31.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:31 np0005481065 nova_compute[260935]: 2025-10-11 09:16:31.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:16:31 np0005481065 nova_compute[260935]: 2025-10-11 09:16:31.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:16:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 11 05:16:32 np0005481065 nova_compute[260935]: 2025-10-11 09:16:32.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:32 np0005481065 nova_compute[260935]: 2025-10-11 09:16:32.917 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:16:32 np0005481065 nova_compute[260935]: 2025-10-11 09:16:32.917 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:16:32 np0005481065 nova_compute[260935]: 2025-10-11 09:16:32.918 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:16:32 np0005481065 nova_compute[260935]: 2025-10-11 09:16:32.918 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:16:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.886 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:43:a2 10.100.0.2 2001:db8::f816:3eff:fe74:43a2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe74:43a2/64', 'neutron:device_id': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=722d4b4c-2d64-4e8c-b343-4ac25259f23b) old=Port_Binding(mac=['fa:16:3e:74:43:a2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:16:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 722d4b4c-2d64-4e8c-b343-4ac25259f23b in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 updated#033[00m
Oct 11 05:16:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.892 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f4ce403-2596-441d-805b-ba15e2f385a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:16:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.893 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92a6be89-542e-47bf-914d-bb40636e942f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.444 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.466 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.467 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.467 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.468 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.499 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.500 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.500 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.501 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.501 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 17 KiB/s wr, 0 op/s
Oct 11 05:16:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:16:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186119559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:16:34 np0005481065 nova_compute[260935]: 2025-10-11 09:16:34.984 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:35 np0005481065 podman[378912]: 2025-10-11 09:16:35.10886622 +0000 UTC m=+0.057228415 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.130 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.131 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.131 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.136 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.137 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.141 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.141 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.146 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.146 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.388 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.391 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2791MB free_disk=59.78509521484375GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.391 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.392 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.539 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 8354434f-daa4-4745-9755-bd2465f5459b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.541 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.541 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:16:35 np0005481065 nova_compute[260935]: 2025-10-11 09:16:35.666 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:16:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105510431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.135 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.143 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.169 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.203 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.204 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 6.7 KiB/s wr, 0 op/s
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.888 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.888 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.909 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.985 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.986 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.996 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:16:36 np0005481065 nova_compute[260935]: 2025-10-11 09:16:36.997 2 INFO nova.compute.claims [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.173 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:16:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922198867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.604 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.613 2 DEBUG nova.compute.provider_tree [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.633 2 DEBUG nova.scheduler.client.report [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.662 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.664 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.729 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.730 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.756 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.780 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.876 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.877 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.878 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Creating image(s)#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.904 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.933 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.959 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:37 np0005481065 nova_compute[260935]: 2025-10-11 09:16:37.963 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.012 2 DEBUG nova.policy [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.063 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.064 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.065 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.065 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.094 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.102 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a1232bdc-1728-423e-91ef-f46614fcec43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.380 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a1232bdc-1728-423e-91ef-f46614fcec43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.444 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.539 2 DEBUG nova.objects.instance [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid a1232bdc-1728-423e-91ef-f46614fcec43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.587 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.587 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Ensure instance console log exists: /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.588 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.588 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:38 np0005481065 nova_compute[260935]: 2025-10-11 09:16:38.589 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Oct 11 05:16:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:39 np0005481065 nova_compute[260935]: 2025-10-11 09:16:39.825 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Successfully created port: e89831fc-f646-401f-8562-959bb36ec0e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.304 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.305 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.343 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.419 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.419 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.428 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.429 2 INFO nova.compute.claims [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.501 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Successfully updated port: e89831fc-f646-401f-8562-959bb36ec0e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.513 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.514 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.514 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:16:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.639 2 DEBUG nova.compute.manager [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-changed-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.640 2 DEBUG nova.compute.manager [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Refreshing instance network info cache due to event network-changed-e89831fc-f646-401f-8562-959bb36ec0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.640 2 DEBUG oslo_concurrency.lockutils [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.652 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:40 np0005481065 nova_compute[260935]: 2025-10-11 09:16:40.709 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:16:40 np0005481065 podman[379144]: 2025-10-11 09:16:40.817012146 +0000 UTC m=+0.096289465 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:16:40 np0005481065 podman[379145]: 2025-10-11 09:16:40.881068429 +0000 UTC m=+0.156095600 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:16:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105601677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.127 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.135 2 DEBUG nova.compute.provider_tree [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.160 2 DEBUG nova.scheduler.client.report [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.194 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.195 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.253 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.254 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.276 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.293 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.381 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.383 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.384 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Creating image(s)#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.421 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.456 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.490 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.495 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.600 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.602 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.603 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.603 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.634 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.639 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.931 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:41 np0005481065 nova_compute[260935]: 2025-10-11 09:16:41.971 2 DEBUG nova.policy [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.018 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.132 2 DEBUG nova.objects.instance [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 187076f6-221b-4a35-a7a8-9ba7c2a546b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.149 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.149 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Ensure instance console log exists: /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.150 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.151 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.151 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.293 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updating instance_info_cache with network_info: [{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.318 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.319 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance network_info: |[{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.319 2 DEBUG oslo_concurrency.lockutils [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.320 2 DEBUG nova.network.neutron [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Refreshing network info cache for port e89831fc-f646-401f-8562-959bb36ec0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.325 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start _get_guest_xml network_info=[{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.330 2 WARNING nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.337 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.338 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.341 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.342 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.343 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.343 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.344 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.344 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.345 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.345 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.345 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.346 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.346 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.347 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.347 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.347 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.352 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 460 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 49 op/s
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:16:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037161715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.834 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.866 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:42 np0005481065 nova_compute[260935]: 2025-10-11 09:16:42.871 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:16:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934545857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.339 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.342 2 DEBUG nova.virt.libvirt.vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-272732865',display_name='tempest-TestNetworkBasicOps-server-272732865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-272732865',id=109,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7FX3U/5RP2hr/RRWH8Uzs1pxIw1BlrlGqbe9sm3AIGCWu3yNU1SvlSTOKsJU+AC7mmnOtgeA/3XVmXVb4brBN1OHPjhKFiHZVW+4rKWNrrCcpToIxg/52KW4sZLUbqeQ==',key_name='tempest-TestNetworkBasicOps-1996793419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-obg5p04x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=a1232bdc-1728-423e-91ef-f46614fcec43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.343 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.344 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.347 2 DEBUG nova.objects.instance [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1232bdc-1728-423e-91ef-f46614fcec43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.365 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <uuid>a1232bdc-1728-423e-91ef-f46614fcec43</uuid>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <name>instance-0000006d</name>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-272732865</nova:name>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:16:42</nova:creationTime>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <nova:port uuid="e89831fc-f646-401f-8562-959bb36ec0e9">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <entry name="serial">a1232bdc-1728-423e-91ef-f46614fcec43</entry>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <entry name="uuid">a1232bdc-1728-423e-91ef-f46614fcec43</entry>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a1232bdc-1728-423e-91ef-f46614fcec43_disk">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a1232bdc-1728-423e-91ef-f46614fcec43_disk.config">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:48:8f:cc"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <target dev="tape89831fc-f6"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/console.log" append="off"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:16:43 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:16:43 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:16:43 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:16:43 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.367 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Preparing to wait for external event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.368 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.368 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.368 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.369 2 DEBUG nova.virt.libvirt.vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-272732865',display_name='tempest-TestNetworkBasicOps-server-272732865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-272732865',id=109,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7FX3U/5RP2hr/RRWH8Uzs1pxIw1BlrlGqbe9sm3AIGCWu3yNU1SvlSTOKsJU+AC7mmnOtgeA/3XVmXVb4brBN1OHPjhKFiHZVW+4rKWNrrCcpToIxg/52KW4sZLUbqeQ==',key_name='tempest-TestNetworkBasicOps-1996793419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-obg5p04x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=a1232bdc-1728-423e-91ef-f46614fcec43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.369 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.370 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.370 2 DEBUG os_vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape89831fc-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape89831fc-f6, col_values=(('external_ids', {'iface-id': 'e89831fc-f646-401f-8562-959bb36ec0e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:8f:cc', 'vm-uuid': 'a1232bdc-1728-423e-91ef-f46614fcec43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:43 np0005481065 NetworkManager[44960]: <info>  [1760174203.4220] manager: (tape89831fc-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.433 2 INFO os_vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6')#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.469 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Successfully created port: 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.496 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.497 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.497 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:48:8f:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.498 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Using config drive#033[00m
Oct 11 05:16:43 np0005481065 nova_compute[260935]: 2025-10-11 09:16:43.539 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.047 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Creating config drive at /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.059 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd1e1h8b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.225 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd1e1h8b" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.269 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.274 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config a1232bdc-1728-423e-91ef-f46614fcec43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.330 2 DEBUG nova.network.neutron [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updated VIF entry in instance network info cache for port e89831fc-f646-401f-8562-959bb36ec0e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.332 2 DEBUG nova.network.neutron [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updating instance_info_cache with network_info: [{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.356 2 DEBUG oslo_concurrency.lockutils [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.499 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config a1232bdc-1728-423e-91ef-f46614fcec43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.500 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deleting local config drive /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config because it was imported into RBD.#033[00m
Oct 11 05:16:44 np0005481065 kernel: tape89831fc-f6: entered promiscuous mode
Oct 11 05:16:44 np0005481065 NetworkManager[44960]: <info>  [1760174204.5771] manager: (tape89831fc-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/439)
Oct 11 05:16:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:44Z|01068|binding|INFO|Claiming lport e89831fc-f646-401f-8562-959bb36ec0e9 for this chassis.
Oct 11 05:16:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:44Z|01069|binding|INFO|e89831fc-f646-401f-8562-959bb36ec0e9: Claiming fa:16:3e:48:8f:cc 10.100.0.21
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 500 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.631 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8f:cc 10.100.0.21'], port_security=['fa:16:3e:48:8f:cc 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'a1232bdc-1728-423e-91ef-f46614fcec43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f778265-a3b7-4c18-be8e-648917a97a03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f4801ae-575f-43c2-a03b-ceeeb136f93e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=704f698c-f7f9-44d3-83d1-ae6d1824d8ad, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89831fc-f646-401f-8562-959bb36ec0e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.632 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89831fc-f646-401f-8562-959bb36ec0e9 in datapath 3f778265-a3b7-4c18-be8e-648917a97a03 bound to our chassis#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.635 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f778265-a3b7-4c18-be8e-648917a97a03#033[00m
Oct 11 05:16:44 np0005481065 systemd-udevd[379514]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5685adc9-0237-4467-93ae-057c07466026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.654 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f778265-a1 in ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.657 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f778265-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:16:44 np0005481065 systemd-machined[215705]: New machine qemu-132-instance-0000006d.
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7006c7-2cab-4161-a6d0-bcb9be28e5cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22ef5904-b044-4819-a400-9730993276c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 NetworkManager[44960]: <info>  [1760174204.6701] device (tape89831fc-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:16:44 np0005481065 NetworkManager[44960]: <info>  [1760174204.6719] device (tape89831fc-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:16:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:44Z|01070|binding|INFO|Setting lport e89831fc-f646-401f-8562-959bb36ec0e9 ovn-installed in OVS
Oct 11 05:16:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:44Z|01071|binding|INFO|Setting lport e89831fc-f646-401f-8562-959bb36ec0e9 up in Southbound
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:44 np0005481065 systemd[1]: Started Virtual Machine qemu-132-instance-0000006d.
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.688 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0b68a41e-29d1-4fce-a48c-3f9d28017e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[40b307d9-9404-4613-889f-4e0442fb4641]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.751 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[20af9bce-387f-453c-aa0e-770458d4ca13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.757 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f0db63-6a5a-4f9d-8af1-e8cc264f320d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 NetworkManager[44960]: <info>  [1760174204.7589] manager: (tap3f778265-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.791 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Successfully updated port: 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.798 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1e04ac94-bb93-4bde-a5c0-d6d719da92f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.802 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6364cfbd-4155-4256-bbda-2e4b71c08e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 NetworkManager[44960]: <info>  [1760174204.8374] device (tap3f778265-a0): carrier: link connected
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.846 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0688bdb7-56a9-47ef-8c5f-0ad5846ee117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.850 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.851 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.851 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebf009d-8fc5-49fe-afec-dc704bf53983]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f778265-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ed:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604954, 'reachable_time': 36448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379548, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.897 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6de1f229-650a-4c59-b32c-68d4582b2931]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:edd9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604954, 'tstamp': 604954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379549, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f9b886-522e-406d-86bb-dbd5bac454e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f778265-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ed:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604954, 'reachable_time': 36448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379550, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.966 2 DEBUG nova.compute.manager [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.967 2 DEBUG nova.compute.manager [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing instance network info cache due to event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:16:44 np0005481065 nova_compute[260935]: 2025-10-11 09:16:44.967 2 DEBUG oslo_concurrency.lockutils [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:16:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.977 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11f80536-8b44-4afd-9cad-a3aea173a8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.048 2 DEBUG nova.compute.manager [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.049 2 DEBUG oslo_concurrency.lockutils [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.049 2 DEBUG oslo_concurrency.lockutils [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.050 2 DEBUG oslo_concurrency.lockutils [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.050 2 DEBUG nova.compute.manager [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Processing event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.084 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc57be4f-e966-4f15-8b19-fd72ae2c747e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.092 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f778265-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.092 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.093 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f778265-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:45 np0005481065 kernel: tap3f778265-a0: entered promiscuous mode
Oct 11 05:16:45 np0005481065 NetworkManager[44960]: <info>  [1760174205.0961] manager: (tap3f778265-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.103 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f778265-a0, col_values=(('external_ids', {'iface-id': 'cbaa5fc6-b10a-49d7-b190-bda5c8465d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:45Z|01072|binding|INFO|Releasing lport cbaa5fc6-b10a-49d7-b190-bda5c8465d38 from this chassis (sb_readonly=0)
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.108 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f778265-a3b7-4c18-be8e-648917a97a03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f778265-a3b7-4c18-be8e-648917a97a03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e822726-5f49-4083-a746-ee8c4a8c49b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.110 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-3f778265-a3b7-4c18-be8e-648917a97a03
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/3f778265-a3b7-4c18-be8e-648917a97a03.pid.haproxy
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 3f778265-a3b7-4c18-be8e-648917a97a03
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:16:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.111 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'env', 'PROCESS_TAG=haproxy-3f778265-a3b7-4c18-be8e-648917a97a03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f778265-a3b7-4c18-be8e-648917a97a03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:45 np0005481065 podman[379624]: 2025-10-11 09:16:45.595345006 +0000 UTC m=+0.082948907 container create cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 05:16:45 np0005481065 systemd[1]: Started libpod-conmon-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6.scope.
Oct 11 05:16:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdb927a6e1809dd643e00a3746dcecf9a780cac7ca509077562678f25a5dc5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:45 np0005481065 podman[379624]: 2025-10-11 09:16:45.558628169 +0000 UTC m=+0.046232110 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:16:45 np0005481065 podman[379624]: 2025-10-11 09:16:45.668190172 +0000 UTC m=+0.155794163 container init cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:16:45 np0005481065 podman[379624]: 2025-10-11 09:16:45.67821439 +0000 UTC m=+0.165818321 container start cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 05:16:45 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : New worker (379646) forked
Oct 11 05:16:45 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : Loading success.
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.742 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174205.7420537, a1232bdc-1728-423e-91ef-f46614fcec43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.743 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Started (Lifecycle Event)#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.746 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.750 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.754 2 INFO nova.virt.libvirt.driver [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance spawned successfully.#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.755 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.781 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.791 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.796 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.797 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.797 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.798 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.799 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.799 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.842 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.842 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174205.7423146, a1232bdc-1728-423e-91ef-f46614fcec43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.843 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.875 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.886 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174205.7488785, a1232bdc-1728-423e-91ef-f46614fcec43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.887 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.899 2 INFO nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 8.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.900 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.910 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:16:45 np0005481065 nova_compute[260935]: 2025-10-11 09:16:45.975 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:16:46 np0005481065 nova_compute[260935]: 2025-10-11 09:16:46.009 2 INFO nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 9.05 seconds to build instance.#033[00m
Oct 11 05:16:46 np0005481065 nova_compute[260935]: 2025-10-11 09:16:46.039 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:46 np0005481065 nova_compute[260935]: 2025-10-11 09:16:46.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 500 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Oct 11 05:16:46 np0005481065 nova_compute[260935]: 2025-10-11 09:16:46.973 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.000 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.001 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance network_info: |[{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.002 2 DEBUG oslo_concurrency.lockutils [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.002 2 DEBUG nova.network.neutron [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.007 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start _get_guest_xml network_info=[{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.014 2 WARNING nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.023 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.024 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.035 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.036 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.036 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.037 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.038 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.038 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.039 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.039 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.039 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.040 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.040 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.041 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.041 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.041 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.046 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.167 2 DEBUG nova.compute.manager [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.168 2 DEBUG oslo_concurrency.lockutils [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.168 2 DEBUG oslo_concurrency.lockutils [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.168 2 DEBUG oslo_concurrency.lockutils [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.169 2 DEBUG nova.compute.manager [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] No waiting events found dispatching network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.169 2 WARNING nova.compute.manager [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received unexpected event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:16:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:16:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628114044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.555 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.590 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:47 np0005481065 nova_compute[260935]: 2025-10-11 09:16:47.596 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751411432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.130 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.132 2 DEBUG nova.virt.libvirt.vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-354741605',display_name='tempest-TestGettingAddress-server-354741605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-354741605',id=110,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-f6psl0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:41Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=187076f6-221b-4a35-a7a8-9ba7c2a546b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.133 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.135 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.137 2 DEBUG nova.objects.instance [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 187076f6-221b-4a35-a7a8-9ba7c2a546b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.164 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <uuid>187076f6-221b-4a35-a7a8-9ba7c2a546b5</uuid>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <name>instance-0000006e</name>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-354741605</nova:name>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:16:47</nova:creationTime>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <nova:port uuid="1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4b:3eb6" ipVersion="6"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <entry name="serial">187076f6-221b-4a35-a7a8-9ba7c2a546b5</entry>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <entry name="uuid">187076f6-221b-4a35-a7a8-9ba7c2a546b5</entry>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:4b:3e:b6"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <target dev="tap1f9dc2d8-70"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/console.log" append="off"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:16:48 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:16:48 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:16:48 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:16:48 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.165 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Preparing to wait for external event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.165 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.166 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.166 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.167 2 DEBUG nova.virt.libvirt.vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-354741605',display_name='tempest-TestGettingAddress-server-354741605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-354741605',id=110,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-f6psl0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:41Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=187076f6-221b-4a35-a7a8-9ba7c2a546b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.167 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.169 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.169 2 DEBUG os_vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.178 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f9dc2d8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.179 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f9dc2d8-70, col_values=(('external_ids', {'iface-id': '1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:3e:b6', 'vm-uuid': '187076f6-221b-4a35-a7a8-9ba7c2a546b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:48 np0005481065 NetworkManager[44960]: <info>  [1760174208.2313] manager: (tap1f9dc2d8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.238 2 INFO os_vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70')#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.322 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.323 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.323 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:4b:3e:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.324 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Using config drive#033[00m
Oct 11 05:16:48 np0005481065 nova_compute[260935]: 2025-10-11 09:16:48.355 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.921920) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208921999, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2058, "num_deletes": 251, "total_data_size": 3409258, "memory_usage": 3469688, "flush_reason": "Manual Compaction"}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208938776, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3331164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45076, "largest_seqno": 47133, "table_properties": {"data_size": 3321840, "index_size": 5882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18933, "raw_average_key_size": 20, "raw_value_size": 3303283, "raw_average_value_size": 3514, "num_data_blocks": 261, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173988, "oldest_key_time": 1760173988, "file_creation_time": 1760174208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 16930 microseconds, and 8956 cpu microseconds.
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.938856) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3331164 bytes OK
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.938882) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.940428) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.940483) EVENT_LOG_v1 {"time_micros": 1760174208940471, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.940514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3400625, prev total WAL file size 3400625, number of live WAL files 2.
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.942092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3253KB)], [104(8580KB)]
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208942147, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12117520, "oldest_snapshot_seqno": -1}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6955 keys, 10406128 bytes, temperature: kUnknown
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208988895, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10406128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10358204, "index_size": 29444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 179305, "raw_average_key_size": 25, "raw_value_size": 10232184, "raw_average_value_size": 1471, "num_data_blocks": 1160, "num_entries": 6955, "num_filter_entries": 6955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.989080) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10406128 bytes
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.990609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.9 rd, 222.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7469, records dropped: 514 output_compression: NoCompression
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.990624) EVENT_LOG_v1 {"time_micros": 1760174208990616, "job": 62, "event": "compaction_finished", "compaction_time_micros": 46806, "compaction_time_cpu_micros": 23033, "output_level": 6, "num_output_files": 1, "total_output_size": 10406128, "num_input_records": 7469, "num_output_records": 6955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208991181, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208992482, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.941989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:16:48 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.185 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Creating config drive at /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.194 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3e1y8h8e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.364 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3e1y8h8e" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.403 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.410 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.630 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.632 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deleting local config drive /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config because it was imported into RBD.#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.636 2 DEBUG nova.network.neutron [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updated VIF entry in instance network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.637 2 DEBUG nova.network.neutron [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.666 2 DEBUG oslo_concurrency.lockutils [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:16:49 np0005481065 kernel: tap1f9dc2d8-70: entered promiscuous mode
Oct 11 05:16:49 np0005481065 NetworkManager[44960]: <info>  [1760174209.7367] manager: (tap1f9dc2d8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Oct 11 05:16:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:49Z|01073|binding|INFO|Claiming lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 for this chassis.
Oct 11 05:16:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:49Z|01074|binding|INFO|1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54: Claiming fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.797 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], port_security=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe4b:3eb6/64', 'neutron:device_id': '187076f6-221b-4a35-a7a8-9ba7c2a546b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.798 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 bound to our chassis#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.800 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4ce403-2596-441d-805b-ba15e2f385a1#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a08658-f34a-4899-b8e5-edc7192e10b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.816 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f4ce403-21 in ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.818 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f4ce403-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.818 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0789bfe9-7951-40e9-9835-510fdda14e96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.819 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afd8aacd-c610-48d5-8935-c3e318e6e962]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 systemd-udevd[379792]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:16:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:49Z|01075|binding|INFO|Setting lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 ovn-installed in OVS
Oct 11 05:16:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:49Z|01076|binding|INFO|Setting lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 up in Southbound
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:49 np0005481065 nova_compute[260935]: 2025-10-11 09:16:49.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:49 np0005481065 systemd-machined[215705]: New machine qemu-133-instance-0000006e.
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.835 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4c00d225-2d8f-4540-aaeb-6cb112c98ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 NetworkManager[44960]: <info>  [1760174209.8389] device (tap1f9dc2d8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:16:49 np0005481065 NetworkManager[44960]: <info>  [1760174209.8398] device (tap1f9dc2d8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:16:49 np0005481065 systemd[1]: Started Virtual Machine qemu-133-instance-0000006e.
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.864 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[259726b1-48e8-40b7-9497-aa18f2022af2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.905 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de1e1cb0-feb7-42bd-b962-17533eab5e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.912 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9515063e-dd63-4706-8b77-55483005de18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 systemd-udevd[379795]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:16:49 np0005481065 NetworkManager[44960]: <info>  [1760174209.9144] manager: (tap2f4ce403-20): new Veth device (/org/freedesktop/NetworkManager/Devices/444)
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.977 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18b39002-8557-4056-abb6-63b2b4222d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.981 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aa40f3-687b-4b02-a022-7f38987390be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 NetworkManager[44960]: <info>  [1760174210.0146] device (tap2f4ce403-20): carrier: link connected
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.021 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[25aa5aef-c7c5-45ff-ae4e-152b89f019fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87f9fcd7-af83-4da2-8440-834a43604fa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 19417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379824, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6496a5f-2175-41db-b292-a4bee978654d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:43a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605471, 'tstamp': 605471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379825, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8556622b-23da-4c62-975e-f35c24413003]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 19417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379826, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.132 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6360c4f2-a90b-4d6e-b351-4f51ffa817f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.145 2 DEBUG nova.compute.manager [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.146 2 DEBUG oslo_concurrency.lockutils [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.146 2 DEBUG oslo_concurrency.lockutils [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.147 2 DEBUG oslo_concurrency.lockutils [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.148 2 DEBUG nova.compute.manager [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Processing event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.218 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[64fde1c6-741e-4cb3-8143-2be18237f452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.221 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.221 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4ce403-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:50 np0005481065 NetworkManager[44960]: <info>  [1760174210.2255] manager: (tap2f4ce403-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 11 05:16:50 np0005481065 kernel: tap2f4ce403-20: entered promiscuous mode
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.231 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4ce403-20, col_values=(('external_ids', {'iface-id': '722d4b4c-2d64-4e8c-b343-4ac25259f23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:16:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:50Z|01077|binding|INFO|Releasing lport 722d4b4c-2d64-4e8c-b343-4ac25259f23b from this chassis (sb_readonly=0)
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.238 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f4ce403-2596-441d-805b-ba15e2f385a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f4ce403-2596-441d-805b-ba15e2f385a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1200ee52-91f8-4683-8140-3ec2785a7da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.240 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/2f4ce403-2596-441d-805b-ba15e2f385a1.pid.haproxy
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:16:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.242 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'env', 'PROCESS_TAG=haproxy-2f4ce403-2596-441d-805b-ba15e2f385a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f4ce403-2596-441d-805b-ba15e2f385a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 11 05:16:50 np0005481065 podman[379898]: 2025-10-11 09:16:50.719078015 +0000 UTC m=+0.102598060 container create 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 05:16:50 np0005481065 podman[379898]: 2025-10-11 09:16:50.670022828 +0000 UTC m=+0.053542943 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:16:50 np0005481065 systemd[1]: Started libpod-conmon-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a.scope.
Oct 11 05:16:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:16:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425c0630029d1aed8583bcc3941c7da79348c9c8a0eadb7928e597fdeee04cbf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:16:50 np0005481065 podman[379898]: 2025-10-11 09:16:50.846687138 +0000 UTC m=+0.230207203 container init 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:16:50 np0005481065 podman[379898]: 2025-10-11 09:16:50.857016004 +0000 UTC m=+0.240536019 container start 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:16:50 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : New worker (379917) forked
Oct 11 05:16:50 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : Loading success.
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.968 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.969 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174210.9673986, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.970 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Started (Lifecycle Event)#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.974 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.979 2 INFO nova.virt.libvirt.driver [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance spawned successfully.#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.980 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:16:50 np0005481065 nova_compute[260935]: 2025-10-11 09:16:50.995 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.001 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.017 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.018 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.019 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.020 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.020 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.021 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.029 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174210.967625, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.029 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.058 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.064 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174210.9742355, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.064 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.086 2 INFO nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 9.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.086 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.099 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.103 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.129 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.168 2 INFO nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 10.77 seconds to build instance.#033[00m
Oct 11 05:16:51 np0005481065 nova_compute[260935]: 2025-10-11 09:16:51.184 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:52 np0005481065 nova_compute[260935]: 2025-10-11 09:16:52.247 2 DEBUG nova.compute.manager [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:52 np0005481065 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG oslo_concurrency.lockutils [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:16:52 np0005481065 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG oslo_concurrency.lockutils [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:16:52 np0005481065 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG oslo_concurrency.lockutils [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:16:52 np0005481065 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG nova.compute.manager [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] No waiting events found dispatching network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:16:52 np0005481065 nova_compute[260935]: 2025-10-11 09:16:52.249 2 WARNING nova.compute.manager [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received unexpected event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:16:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 11 05:16:53 np0005481065 nova_compute[260935]: 2025-10-11 09:16:53.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:54 np0005481065 nova_compute[260935]: 2025-10-11 09:16:54.315 2 DEBUG nova.compute.manager [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:16:54 np0005481065 nova_compute[260935]: 2025-10-11 09:16:54.316 2 DEBUG nova.compute.manager [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing instance network info cache due to event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:16:54 np0005481065 nova_compute[260935]: 2025-10-11 09:16:54.317 2 DEBUG oslo_concurrency.lockutils [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:16:54 np0005481065 nova_compute[260935]: 2025-10-11 09:16:54.317 2 DEBUG oslo_concurrency.lockutils [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:16:54 np0005481065 nova_compute[260935]: 2025-10-11 09:16:54.318 2 DEBUG nova.network.neutron [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 147 op/s
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:16:54
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct 11 05:16:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:16:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:16:56 np0005481065 nova_compute[260935]: 2025-10-11 09:16:56.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:56 np0005481065 nova_compute[260935]: 2025-10-11 09:16:56.254 2 DEBUG nova.network.neutron [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updated VIF entry in instance network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:16:56 np0005481065 nova_compute[260935]: 2025-10-11 09:16:56.255 2 DEBUG nova.network.neutron [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:16:56 np0005481065 nova_compute[260935]: 2025-10-11 09:16:56.279 2 DEBUG oslo_concurrency.lockutils [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:16:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 29 KiB/s wr, 143 op/s
Oct 11 05:16:58 np0005481065 nova_compute[260935]: 2025-10-11 09:16:58.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:16:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 521 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct 11 05:16:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:16:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:59Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:8f:cc 10.100.0.21
Oct 11 05:16:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:16:59Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:8f:cc 10.100.0.21
Oct 11 05:17:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 521 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 05:17:00 np0005481065 podman[379928]: 2025-10-11 09:17:00.815614688 +0000 UTC m=+0.107708333 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:17:01 np0005481065 nova_compute[260935]: 2025-10-11 09:17:01.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 532 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 11 05:17:03 np0005481065 nova_compute[260935]: 2025-10-11 09:17:03.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 533 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 11 05:17:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:05Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:3e:b6 10.100.0.12
Oct 11 05:17:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:05Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:3e:b6 10.100.0.12
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004499487713257797 of space, bias 1.0, pg target 1.3498463139773391 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:17:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:17:05 np0005481065 podman[379949]: 2025-10-11 09:17:05.80670123 +0000 UTC m=+0.095587097 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:17:06 np0005481065 nova_compute[260935]: 2025-10-11 09:17:06.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 533 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 05:17:08 np0005481065 nova_compute[260935]: 2025-10-11 09:17:08.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 791 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Oct 11 05:17:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.425 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.426 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.427 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.427 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.428 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.430 2 INFO nova.compute.manager [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Terminating instance#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.432 2 DEBUG nova.compute.manager [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:17:09 np0005481065 kernel: tape89831fc-f6 (unregistering): left promiscuous mode
Oct 11 05:17:09 np0005481065 NetworkManager[44960]: <info>  [1760174229.5017] device (tape89831fc-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:09Z|01078|binding|INFO|Releasing lport e89831fc-f646-401f-8562-959bb36ec0e9 from this chassis (sb_readonly=0)
Oct 11 05:17:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:09Z|01079|binding|INFO|Setting lport e89831fc-f646-401f-8562-959bb36ec0e9 down in Southbound
Oct 11 05:17:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:09Z|01080|binding|INFO|Removing iface tape89831fc-f6 ovn-installed in OVS
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.532 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8f:cc 10.100.0.21'], port_security=['fa:16:3e:48:8f:cc 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'a1232bdc-1728-423e-91ef-f46614fcec43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f778265-a3b7-4c18-be8e-648917a97a03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f4801ae-575f-43c2-a03b-ceeeb136f93e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=704f698c-f7f9-44d3-83d1-ae6d1824d8ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89831fc-f646-401f-8562-959bb36ec0e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.534 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89831fc-f646-401f-8562-959bb36ec0e9 in datapath 3f778265-a3b7-4c18-be8e-648917a97a03 unbound from our chassis#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.538 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f778265-a3b7-4c18-be8e-648917a97a03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.540 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e639041-39a4-4a8a-87f7-9097fd056de3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.541 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 namespace which is not needed anymore#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 11 05:17:09 np0005481065 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006d.scope: Consumed 13.671s CPU time.
Oct 11 05:17:09 np0005481065 systemd-machined[215705]: Machine qemu-132-instance-0000006d terminated.
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.728 2 INFO nova.virt.libvirt.driver [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance destroyed successfully.#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.728 2 DEBUG nova.objects.instance [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid a1232bdc-1728-423e-91ef-f46614fcec43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:17:09 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : haproxy version is 2.8.14-c23fe91
Oct 11 05:17:09 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : path to executable is /usr/sbin/haproxy
Oct 11 05:17:09 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [WARNING]  (379644) : Exiting Master process...
Oct 11 05:17:09 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [WARNING]  (379644) : Exiting Master process...
Oct 11 05:17:09 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [ALERT]    (379644) : Current worker (379646) exited with code 143 (Terminated)
Oct 11 05:17:09 np0005481065 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [WARNING]  (379644) : All workers exited. Exiting... (0)
Oct 11 05:17:09 np0005481065 systemd[1]: libpod-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6.scope: Deactivated successfully.
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.749 2 DEBUG nova.virt.libvirt.vif [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-272732865',display_name='tempest-TestNetworkBasicOps-server-272732865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-272732865',id=109,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7FX3U/5RP2hr/RRWH8Uzs1pxIw1BlrlGqbe9sm3AIGCWu3yNU1SvlSTOKsJU+AC7mmnOtgeA/3XVmXVb4brBN1OHPjhKFiHZVW+4rKWNrrCcpToIxg/52KW4sZLUbqeQ==',key_name='tempest-TestNetworkBasicOps-1996793419',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:16:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-obg5p04x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:16:45Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=a1232bdc-1728-423e-91ef-f46614fcec43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.750 2 DEBUG nova.network.os_vif_util [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:17:09 np0005481065 podman[379995]: 2025-10-11 09:17:09.754841881 +0000 UTC m=+0.070965236 container died cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.752 2 DEBUG nova.network.os_vif_util [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.753 2 DEBUG os_vif [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape89831fc-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.770 2 INFO os_vif [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6')#033[00m
Oct 11 05:17:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6-userdata-shm.mount: Deactivated successfully.
Oct 11 05:17:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-efdb927a6e1809dd643e00a3746dcecf9a780cac7ca509077562678f25a5dc5f-merged.mount: Deactivated successfully.
Oct 11 05:17:09 np0005481065 podman[379995]: 2025-10-11 09:17:09.827119231 +0000 UTC m=+0.143242556 container cleanup cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:17:09 np0005481065 systemd[1]: libpod-conmon-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6.scope: Deactivated successfully.
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.851 2 DEBUG nova.compute.manager [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-unplugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.852 2 DEBUG oslo_concurrency.lockutils [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.853 2 DEBUG oslo_concurrency.lockutils [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.853 2 DEBUG oslo_concurrency.lockutils [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.854 2 DEBUG nova.compute.manager [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] No waiting events found dispatching network-vif-unplugged-e89831fc-f646-401f-8562-959bb36ec0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.854 2 DEBUG nova.compute.manager [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-unplugged-e89831fc-f646-401f-8562-959bb36ec0e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:17:09 np0005481065 podman[380049]: 2025-10-11 09:17:09.925961117 +0000 UTC m=+0.061724319 container remove cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.933 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55295607-7acf-4fb8-a487-21a74e410ded]: (4, ('Sat Oct 11 09:17:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 (cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6)\ncec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6\nSat Oct 11 09:17:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 (cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6)\ncec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.934 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8bdf12-f6bb-4d95-a775-cf71950aebdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.936 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f778265-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 kernel: tap3f778265-a0: left promiscuous mode
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33a0630e-69e4-49d9-b760-b21f13bd3444]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:09 np0005481065 nova_compute[260935]: 2025-10-11 09:17:09.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df970beb-2db6-4d7f-950b-2e03d07b5785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04843a9c-b56b-4285-b351-c5ab60e1adfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:10.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87bd1beb-aa6a-4f39-b17a-9c6a4aeac894]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604944, 'reachable_time': 34489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380065, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:10.008 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:17:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:10.008 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[96cc13a3-f99b-43d3-b25b-f44177387d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:10 np0005481065 systemd[1]: run-netns-ovnmeta\x2d3f778265\x2da3b7\x2d4c18\x2dbe8e\x2d648917a97a03.mount: Deactivated successfully.
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.197 2 INFO nova.virt.libvirt.driver [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deleting instance files /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43_del#033[00m
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.198 2 INFO nova.virt.libvirt.driver [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deletion of /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43_del complete#033[00m
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.279 2 INFO nova.compute.manager [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.280 2 DEBUG oslo.service.loopingcall [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.281 2 DEBUG nova.compute.manager [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.281 2 DEBUG nova.network.neutron [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:17:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.933 2 DEBUG nova.network.neutron [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:10 np0005481065 nova_compute[260935]: 2025-10-11 09:17:10.954 2 INFO nova.compute.manager [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 0.67 seconds to deallocate network for instance.#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.008 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.009 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.192 2 DEBUG oslo_concurrency.processutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:11 np0005481065 podman[380089]: 2025-10-11 09:17:11.656911118 +0000 UTC m=+0.073319740 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 05:17:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:17:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939161694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:17:11 np0005481065 podman[380090]: 2025-10-11 09:17:11.705139743 +0000 UTC m=+0.108936486 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.734 2 DEBUG oslo_concurrency.processutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.746 2 DEBUG nova.compute.provider_tree [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.768 2 DEBUG nova.scheduler.client.report [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.795 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.828 2 INFO nova.scheduler.client.report [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance a1232bdc-1728-423e-91ef-f46614fcec43#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.901 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.934 2 DEBUG nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.935 2 DEBUG oslo_concurrency.lockutils [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.936 2 DEBUG oslo_concurrency.lockutils [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.936 2 DEBUG oslo_concurrency.lockutils [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.937 2 DEBUG nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] No waiting events found dispatching network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.938 2 WARNING nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received unexpected event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:17:11 np0005481065 nova_compute[260935]: 2025-10-11 09:17:11.938 2 DEBUG nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-deleted-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 509 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Oct 11 05:17:13 np0005481065 nova_compute[260935]: 2025-10-11 09:17:13.440 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Oct 11 05:17:14 np0005481065 nova_compute[260935]: 2025-10-11 09:17:14.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:14Z|01081|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 05:17:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:14Z|01082|binding|INFO|Releasing lport 722d4b4c-2d64-4e8c-b343-4ac25259f23b from this chassis (sb_readonly=0)
Oct 11 05:17:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:14Z|01083|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:17:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:14Z|01084|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:17:14 np0005481065 nova_compute[260935]: 2025-10-11 09:17:14.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:15.211 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:15.213 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.342 2 DEBUG nova.compute.manager [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.343 2 DEBUG nova.compute.manager [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing instance network info cache due to event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.343 2 DEBUG oslo_concurrency.lockutils [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.344 2 DEBUG oslo_concurrency.lockutils [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.344 2 DEBUG nova.network.neutron [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.502 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.503 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.503 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.504 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.504 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.506 2 INFO nova.compute.manager [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Terminating instance#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.508 2 DEBUG nova.compute.manager [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:17:16 np0005481065 kernel: tapcabc92ec-b6 (unregistering): left promiscuous mode
Oct 11 05:17:16 np0005481065 NetworkManager[44960]: <info>  [1760174236.5813] device (tapcabc92ec-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:17:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:16Z|01085|binding|INFO|Releasing lport cabc92ec-b61c-42d8-80b5-444ec773a568 from this chassis (sb_readonly=0)
Oct 11 05:17:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:16Z|01086|binding|INFO|Setting lport cabc92ec-b61c-42d8-80b5-444ec773a568 down in Southbound
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:16Z|01087|binding|INFO|Removing iface tapcabc92ec-b6 ovn-installed in OVS
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.615 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:41:56 10.100.0.8'], port_security=['fa:16:3e:ec:41:56 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8354434f-daa4-4745-9755-bd2465f5459b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '437c3d45-aba0-41e1-8469-ba1a8b534642', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94b69aa0-e5aa-4772-b64f-f38dc39ac5a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=cabc92ec-b61c-42d8-80b5-444ec773a568) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:17:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.616 162815 INFO neutron.agent.ovn.metadata.agent [-] Port cabc92ec-b61c-42d8-80b5-444ec773a568 in datapath 3f56857a-dc0a-4d4f-92ae-d6806961b854 unbound from our chassis#033[00m
Oct 11 05:17:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.620 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f56857a-dc0a-4d4f-92ae-d6806961b854, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:17:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.622 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2c186c-ec5f-4a5a-9f3a-765aa256139a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.623 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 namespace which is not needed anymore#033[00m
Oct 11 05:17:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 11 05:17:16 np0005481065 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006c.scope: Consumed 15.612s CPU time.
Oct 11 05:17:16 np0005481065 systemd-machined[215705]: Machine qemu-131-instance-0000006c terminated.
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.764 2 INFO nova.virt.libvirt.driver [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance destroyed successfully.#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.764 2 DEBUG nova.objects.instance [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 8354434f-daa4-4745-9755-bd2465f5459b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.787 2 DEBUG nova.virt.libvirt.vif [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-901907022',display_name='tempest-TestNetworkBasicOps-server-901907022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-901907022',id=108,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNSUDuwrUo8mPnl46KJgqVMBI/oUdDQJiZKLg3PcXdEoEbBCfynVBzjq5RcgXvWBbi7yER9lhbMWC7+xjyS7CKL6WdpVIbjuziOiJFncdqOQt0YKcX1oXGLKj/54Eha0rg==',key_name='tempest-TestNetworkBasicOps-1207558742',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:16:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-svb3ck8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:16:05Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=8354434f-daa4-4745-9755-bd2465f5459b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.788 2 DEBUG nova.network.os_vif_util [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.789 2 DEBUG nova.network.os_vif_util [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.789 2 DEBUG os_vif [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcabc92ec-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.798 2 INFO os_vif [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6')#033[00m
Oct 11 05:17:16 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : haproxy version is 2.8.14-c23fe91
Oct 11 05:17:16 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : path to executable is /usr/sbin/haproxy
Oct 11 05:17:16 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [WARNING]  (377575) : Exiting Master process...
Oct 11 05:17:16 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [ALERT]    (377575) : Current worker (377577) exited with code 143 (Terminated)
Oct 11 05:17:16 np0005481065 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [WARNING]  (377575) : All workers exited. Exiting... (0)
Oct 11 05:17:16 np0005481065 systemd[1]: libpod-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope: Deactivated successfully.
Oct 11 05:17:16 np0005481065 conmon[377571]: conmon 56a2f43e4dc043e957cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope/container/memory.events
Oct 11 05:17:16 np0005481065 podman[380165]: 2025-10-11 09:17:16.86672394 +0000 UTC m=+0.075155731 container died 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.894 2 DEBUG nova.compute.manager [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-unplugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.894 2 DEBUG oslo_concurrency.lockutils [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.895 2 DEBUG oslo_concurrency.lockutils [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.895 2 DEBUG oslo_concurrency.lockutils [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.896 2 DEBUG nova.compute.manager [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] No waiting events found dispatching network-vif-unplugged-cabc92ec-b61c-42d8-80b5-444ec773a568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:17:16 np0005481065 nova_compute[260935]: 2025-10-11 09:17:16.897 2 DEBUG nova.compute.manager [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-unplugged-cabc92ec-b61c-42d8-80b5-444ec773a568 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:17:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97-userdata-shm.mount: Deactivated successfully.
Oct 11 05:17:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-16ea3fde6a616ac783e0a41154d62589ff39ccb058913321360e01f0736e2297-merged.mount: Deactivated successfully.
Oct 11 05:17:16 np0005481065 podman[380165]: 2025-10-11 09:17:16.917186197 +0000 UTC m=+0.125617978 container cleanup 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:17:16 np0005481065 systemd[1]: libpod-conmon-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope: Deactivated successfully.
Oct 11 05:17:17 np0005481065 podman[380211]: 2025-10-11 09:17:17.009855892 +0000 UTC m=+0.063360525 container remove 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.015 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3a01691a-d0f3-45b0-89f3-73ed6ce06c99]: (4, ('Sat Oct 11 09:17:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 (56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97)\n56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97\nSat Oct 11 09:17:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 (56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97)\n56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.017 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[086ec4e0-18ca-4ebe-97e5-5e2b4bb6f824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.019 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f56857a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:17 np0005481065 kernel: tap3f56857a-d0: left promiscuous mode
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af7ff1c3-75cf-4f1e-8ab4-734c176f569d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.093 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6b0a27-17eb-41a8-88e0-fd1018ecc6a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.094 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7938283e-7490-4901-b8d1-86f0ab19e3cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9eade0db-61e2-4bb1-95f5-31b129ff2c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600359, 'reachable_time': 16373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380227, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.114 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:17:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d989db6e-59cc-4ad2-9d35-f0f56f0cab24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:17 np0005481065 systemd[1]: run-netns-ovnmeta\x2d3f56857a\x2ddc0a\x2d4d4f\x2d92ae\x2dd6806961b854.mount: Deactivated successfully.
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.252 2 INFO nova.virt.libvirt.driver [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deleting instance files /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b_del#033[00m
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.253 2 INFO nova.virt.libvirt.driver [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deletion of /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b_del complete#033[00m
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.312 2 INFO nova.compute.manager [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.313 2 DEBUG oslo.service.loopingcall [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.314 2 DEBUG nova.compute.manager [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:17:17 np0005481065 nova_compute[260935]: 2025-10-11 09:17:17.314 2 DEBUG nova.network.neutron [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.602 2 DEBUG nova.network.neutron [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updated VIF entry in instance network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.604 2 DEBUG nova.network.neutron [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.630 2 DEBUG oslo_concurrency.lockutils [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 407 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.698 2 DEBUG nova.network.neutron [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.720 2 INFO nova.compute.manager [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 1.41 seconds to deallocate network for instance.#033[00m
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:17:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 94488ee0-29db-48cf-a0f6-a86e9b7fa3c1 does not exist
Oct 11 05:17:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 15502bc0-2a38-4f34-b9c4-b31f0c76dddd does not exist
Oct 11 05:17:18 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bea7cf29-e528-416f-9829-f960f040152f does not exist
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.781 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.781 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:17:18 np0005481065 nova_compute[260935]: 2025-10-11 09:17:18.933 2 DEBUG oslo_concurrency.processutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.033 2 DEBUG nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG oslo_concurrency.lockutils [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG oslo_concurrency.lockutils [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG oslo_concurrency.lockutils [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] No waiting events found dispatching network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.035 2 WARNING nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received unexpected event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.035 2 DEBUG nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-deleted-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:17:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972597374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.395 2 DEBUG oslo_concurrency.processutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.405 2 DEBUG nova.compute.provider_tree [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.439 2 DEBUG nova.scheduler.client.report [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.462 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.488 2 INFO nova.scheduler.client.report [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 8354434f-daa4-4745-9755-bd2465f5459b#033[00m
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.522991464 +0000 UTC m=+0.061558045 container create 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:17:19 np0005481065 nova_compute[260935]: 2025-10-11 09:17:19.571 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:19 np0005481065 systemd[1]: Started libpod-conmon-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope.
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.495357849 +0000 UTC m=+0.033924480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:17:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.645125224 +0000 UTC m=+0.183691845 container init 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.657644221 +0000 UTC m=+0.196210772 container start 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.661676153 +0000 UTC m=+0.200242784 container attach 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:17:19 np0005481065 clever_shirley[380542]: 167 167
Oct 11 05:17:19 np0005481065 systemd[1]: libpod-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope: Deactivated successfully.
Oct 11 05:17:19 np0005481065 conmon[380542]: conmon 25e91ab114911d93b50b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope/container/memory.events
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.668409209 +0000 UTC m=+0.206975800 container died 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:17:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aed7f6645d639396f1cfa780beb4bf542cec88ab3e058d3bc6c670cb1568e58a-merged.mount: Deactivated successfully.
Oct 11 05:17:19 np0005481065 podman[380526]: 2025-10-11 09:17:19.723235596 +0000 UTC m=+0.261802167 container remove 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:17:19 np0005481065 systemd[1]: libpod-conmon-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope: Deactivated successfully.
Oct 11 05:17:20 np0005481065 podman[380565]: 2025-10-11 09:17:20.005354955 +0000 UTC m=+0.069317629 container create 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:17:20 np0005481065 systemd[1]: Started libpod-conmon-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope.
Oct 11 05:17:20 np0005481065 podman[380565]: 2025-10-11 09:17:19.979423868 +0000 UTC m=+0.043386562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:17:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:20 np0005481065 podman[380565]: 2025-10-11 09:17:20.109307523 +0000 UTC m=+0.173270247 container init 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:17:20 np0005481065 podman[380565]: 2025-10-11 09:17:20.129969864 +0000 UTC m=+0.193932538 container start 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:17:20 np0005481065 podman[380565]: 2025-10-11 09:17:20.134145219 +0000 UTC m=+0.198107943 container attach 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:17:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 407 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Oct 11 05:17:21 np0005481065 nova_compute[260935]: 2025-10-11 09:17:21.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:21 np0005481065 gallant_knuth[380582]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:17:21 np0005481065 gallant_knuth[380582]: --> relative data size: 1.0
Oct 11 05:17:21 np0005481065 gallant_knuth[380582]: --> All data devices are unavailable
Oct 11 05:17:21 np0005481065 systemd[1]: libpod-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope: Deactivated successfully.
Oct 11 05:17:21 np0005481065 systemd[1]: libpod-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope: Consumed 1.137s CPU time.
Oct 11 05:17:21 np0005481065 podman[380613]: 2025-10-11 09:17:21.427782096 +0000 UTC m=+0.038159127 container died 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:17:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569-merged.mount: Deactivated successfully.
Oct 11 05:17:21 np0005481065 podman[380613]: 2025-10-11 09:17:21.519384421 +0000 UTC m=+0.129761432 container remove 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:17:21 np0005481065 systemd[1]: libpod-conmon-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope: Deactivated successfully.
Oct 11 05:17:21 np0005481065 nova_compute[260935]: 2025-10-11 09:17:21.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.471061513 +0000 UTC m=+0.068210499 container create f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:17:22 np0005481065 systemd[1]: Started libpod-conmon-f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513.scope.
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.441864705 +0000 UTC m=+0.039013741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:17:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.584363639 +0000 UTC m=+0.181512655 container init f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.594035827 +0000 UTC m=+0.191184783 container start f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.597932745 +0000 UTC m=+0.195081791 container attach f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:17:22 np0005481065 blissful_haibt[380787]: 167 167
Oct 11 05:17:22 np0005481065 systemd[1]: libpod-f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513.scope: Deactivated successfully.
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.602123151 +0000 UTC m=+0.199272107 container died f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:17:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-81a14d524d29f7dd68896a9170a24e49f7c067c9b91946793d79c087bfac2a2a-merged.mount: Deactivated successfully.
Oct 11 05:17:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Oct 11 05:17:22 np0005481065 podman[380771]: 2025-10-11 09:17:22.65195961 +0000 UTC m=+0.249108596 container remove f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:17:22 np0005481065 systemd[1]: libpod-conmon-f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513.scope: Deactivated successfully.
Oct 11 05:17:22 np0005481065 podman[380811]: 2025-10-11 09:17:22.954681189 +0000 UTC m=+0.069589877 container create d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:17:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:22Z|01088|binding|INFO|Releasing lport 722d4b4c-2d64-4e8c-b343-4ac25259f23b from this chassis (sb_readonly=0)
Oct 11 05:17:23 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:22Z|01089|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:17:23 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:22Z|01090|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:17:23 np0005481065 systemd[1]: Started libpod-conmon-d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58.scope.
Oct 11 05:17:23 np0005481065 podman[380811]: 2025-10-11 09:17:22.92976214 +0000 UTC m=+0.044670818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:17:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:23 np0005481065 nova_compute[260935]: 2025-10-11 09:17:23.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:23 np0005481065 podman[380811]: 2025-10-11 09:17:23.115802149 +0000 UTC m=+0.230710837 container init d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:17:23 np0005481065 podman[380811]: 2025-10-11 09:17:23.128183912 +0000 UTC m=+0.243092560 container start d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:17:23 np0005481065 podman[380811]: 2025-10-11 09:17:23.131757541 +0000 UTC m=+0.246666199 container attach d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]: {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:    "0": [
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:        {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "devices": [
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "/dev/loop3"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            ],
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_name": "ceph_lv0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_size": "21470642176",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "name": "ceph_lv0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "tags": {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cluster_name": "ceph",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.crush_device_class": "",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.encrypted": "0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osd_id": "0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.type": "block",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.vdo": "0"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            },
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "type": "block",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "vg_name": "ceph_vg0"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:        }
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:    ],
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:    "1": [
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:        {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "devices": [
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "/dev/loop4"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            ],
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_name": "ceph_lv1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_size": "21470642176",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "name": "ceph_lv1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "tags": {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cluster_name": "ceph",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.crush_device_class": "",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.encrypted": "0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osd_id": "1",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.type": "block",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.vdo": "0"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            },
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "type": "block",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "vg_name": "ceph_vg1"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:        }
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:    ],
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:    "2": [
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:        {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "devices": [
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "/dev/loop5"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            ],
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_name": "ceph_lv2",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_size": "21470642176",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "name": "ceph_lv2",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "tags": {
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.cluster_name": "ceph",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.crush_device_class": "",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.encrypted": "0",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osd_id": "2",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.type": "block",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:                "ceph.vdo": "0"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            },
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "type": "block",
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:            "vg_name": "ceph_vg2"
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:        }
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]:    ]
Oct 11 05:17:23 np0005481065 confident_chatterjee[380827]: }
Oct 11 05:17:23 np0005481065 systemd[1]: libpod-d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58.scope: Deactivated successfully.
Oct 11 05:17:23 np0005481065 podman[380811]: 2025-10-11 09:17:23.938347535 +0000 UTC m=+1.053256213 container died d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:17:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694-merged.mount: Deactivated successfully.
Oct 11 05:17:24 np0005481065 podman[380811]: 2025-10-11 09:17:24.013082264 +0000 UTC m=+1.127990942 container remove d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 05:17:24 np0005481065 systemd[1]: libpod-conmon-d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58.scope: Deactivated successfully.
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 05:17:24 np0005481065 nova_compute[260935]: 2025-10-11 09:17:24.725 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174229.7236707, a1232bdc-1728-423e-91ef-f46614fcec43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:24 np0005481065 nova_compute[260935]: 2025-10-11 09:17:24.726 2 INFO nova.compute.manager [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:17:24 np0005481065 nova_compute[260935]: 2025-10-11 09:17:24.750 2 DEBUG nova.compute.manager [None req-6a04600b-dd74-4839-8704-15b7ab546ab5 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:17:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:17:24 np0005481065 podman[380987]: 2025-10-11 09:17:24.890624254 +0000 UTC m=+0.067344505 container create c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:17:24 np0005481065 systemd[1]: Started libpod-conmon-c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf.scope.
Oct 11 05:17:24 np0005481065 podman[380987]: 2025-10-11 09:17:24.861790246 +0000 UTC m=+0.038510527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:17:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:24 np0005481065 podman[380987]: 2025-10-11 09:17:24.999455856 +0000 UTC m=+0.176176147 container init c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:17:25 np0005481065 podman[380987]: 2025-10-11 09:17:25.010288686 +0000 UTC m=+0.187008937 container start c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:17:25 np0005481065 podman[380987]: 2025-10-11 09:17:25.014864403 +0000 UTC m=+0.191584724 container attach c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:17:25 np0005481065 unruffled_maxwell[381003]: 167 167
Oct 11 05:17:25 np0005481065 systemd[1]: libpod-c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf.scope: Deactivated successfully.
Oct 11 05:17:25 np0005481065 podman[380987]: 2025-10-11 09:17:25.018259697 +0000 UTC m=+0.194979938 container died c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:17:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fc9414be89defe3bc867d47246018cb176d81d11da0418071126616fb2a16478-merged.mount: Deactivated successfully.
Oct 11 05:17:25 np0005481065 podman[380987]: 2025-10-11 09:17:25.067619043 +0000 UTC m=+0.244339294 container remove c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:17:25 np0005481065 systemd[1]: libpod-conmon-c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf.scope: Deactivated successfully.
Oct 11 05:17:25 np0005481065 podman[381027]: 2025-10-11 09:17:25.317050927 +0000 UTC m=+0.058803839 container create 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:17:25 np0005481065 podman[381027]: 2025-10-11 09:17:25.288771244 +0000 UTC m=+0.030524166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:17:25 np0005481065 systemd[1]: Started libpod-conmon-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope.
Oct 11 05:17:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:25 np0005481065 podman[381027]: 2025-10-11 09:17:25.452424824 +0000 UTC m=+0.194177796 container init 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:17:25 np0005481065 podman[381027]: 2025-10-11 09:17:25.467558433 +0000 UTC m=+0.209311335 container start 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:17:25 np0005481065 podman[381027]: 2025-10-11 09:17:25.472171581 +0000 UTC m=+0.213924553 container attach 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:17:26 np0005481065 nova_compute[260935]: 2025-10-11 09:17:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]: {
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "osd_id": 2,
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "type": "bluestore"
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:    },
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "osd_id": 0,
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "type": "bluestore"
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:    },
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "osd_id": 1,
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:        "type": "bluestore"
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]:    }
Oct 11 05:17:26 np0005481065 upbeat_golick[381044]: }
Oct 11 05:17:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Oct 11 05:17:26 np0005481065 systemd[1]: libpod-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope: Deactivated successfully.
Oct 11 05:17:26 np0005481065 systemd[1]: libpod-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope: Consumed 1.193s CPU time.
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594780645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594780645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:17:26 np0005481065 podman[381027]: 2025-10-11 09:17:26.659871865 +0000 UTC m=+1.401624777 container died 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:17:26 np0005481065 nova_compute[260935]: 2025-10-11 09:17:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e-merged.mount: Deactivated successfully.
Oct 11 05:17:26 np0005481065 nova_compute[260935]: 2025-10-11 09:17:26.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:26 np0005481065 podman[381027]: 2025-10-11 09:17:26.748021715 +0000 UTC m=+1.489774577 container remove 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:17:26 np0005481065 systemd[1]: libpod-conmon-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope: Deactivated successfully.
Oct 11 05:17:26 np0005481065 nova_compute[260935]: 2025-10-11 09:17:26.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:17:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0c57ba51-073e-4350-88d2-e32ba346f31c does not exist
Oct 11 05:17:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 62dcae6e-e495-47bb-83d2-195ccb4f99ed does not exist
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:17:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:17:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:27.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:17:27 np0005481065 nova_compute[260935]: 2025-10-11 09:17:27.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:27.506 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:17:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:27.508 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:27 np0005481065 nova_compute[260935]: 2025-10-11 09:17:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Oct 11 05:17:28 np0005481065 nova_compute[260935]: 2025-10-11 09:17:28.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:28 np0005481065 nova_compute[260935]: 2025-10-11 09:17:28.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.451 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.452 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.470 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.556 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.556 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.571 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.571 2 INFO nova.compute.claims [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:17:29 np0005481065 nova_compute[260935]: 2025-10-11 09:17:29.787 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:17:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284915098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.284 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.293 2 DEBUG nova.compute.provider_tree [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.317 2 DEBUG nova.scheduler.client.report [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.354 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.356 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.425 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.426 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.449 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.472 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.603 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.606 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.606 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Creating image(s)#033[00m
Oct 11 05:17:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 0 B/s wr, 0 op/s
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.642 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.680 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.722 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.729 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.791 2 DEBUG nova.policy [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.796 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.796 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.840 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.841 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.842 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.843 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.877 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:30 np0005481065 nova_compute[260935]: 2025-10-11 09:17:30.882 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 25c0d908-6200-4e9e-8914-ed531abe14bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.251 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 25c0d908-6200-4e9e-8914-ed531abe14bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.334 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.450 2 DEBUG nova.objects.instance [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 25c0d908-6200-4e9e-8914-ed531abe14bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.469 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.469 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Ensure instance console log exists: /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.470 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.470 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.470 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.760 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174236.7596006, 8354434f-daa4-4745-9755-bd2465f5459b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.761 2 INFO nova.compute.manager [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.779 2 DEBUG nova.compute.manager [None req-c5c4e698-b6f5-456d-9a0c-9c5d2643dc9f - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:31 np0005481065 podman[381329]: 2025-10-11 09:17:31.800586254 +0000 UTC m=+0.092662266 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:17:31 np0005481065 nova_compute[260935]: 2025-10-11 09:17:31.837 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Successfully created port: 25dc7747-60d0-4a38-8938-dfa9f55068b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:17:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 435 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 1.0 MiB/s wr, 12 op/s
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.727 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.727 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.826 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Successfully updated port: 25dc7747-60d0-4a38-8938-dfa9f55068b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.841 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.841 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.842 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.939 2 DEBUG nova.compute.manager [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.940 2 DEBUG nova.compute.manager [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing instance network info cache due to event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.941 2 DEBUG oslo_concurrency.lockutils [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.969 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.969 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:32 np0005481065 nova_compute[260935]: 2025-10-11 09:17:32.969 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:17:33 np0005481065 nova_compute[260935]: 2025-10-11 09:17:33.350 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:17:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:17:34 np0005481065 nova_compute[260935]: 2025-10-11 09:17:34.979 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:34.999 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.000 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.001 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.001 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.034 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.035 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.035 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.035 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.036 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.385 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.409 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.410 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance network_info: |[{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.411 2 DEBUG oslo_concurrency.lockutils [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.412 2 DEBUG nova.network.neutron [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.419 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start _get_guest_xml network_info=[{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.428 2 WARNING nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.434 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.436 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.453 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.454 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.455 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.455 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.457 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.457 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.458 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.458 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.459 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.460 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.460 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.461 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.461 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.462 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.468 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:17:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159254145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.561 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.660 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.660 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.673 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.673 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.934 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.936 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2731MB free_disk=59.76434326171875GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.937 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.937 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:17:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3103790310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:17:35 np0005481065 nova_compute[260935]: 2025-10-11 09:17:35.982 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.016 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.021 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.125 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 187076f6-221b-4a35-a7a8-9ba7c2a546b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.127 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 25c0d908-6200-4e9e-8914-ed531abe14bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.127 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.127 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.272 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:17:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726868061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.528 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.531 2 DEBUG nova.virt.libvirt.vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-637088263',display_name='tempest-TestGettingAddress-server-637088263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-637088263',id=111,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-249bfkz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:30Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=25c0d908-6200-4e9e-8914-ed531abe14bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.532 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.533 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.536 2 DEBUG nova.objects.instance [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 25c0d908-6200-4e9e-8914-ed531abe14bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.555 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <uuid>25c0d908-6200-4e9e-8914-ed531abe14bf</uuid>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <name>instance-0000006f</name>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-637088263</nova:name>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:17:35</nova:creationTime>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <nova:port uuid="25dc7747-60d0-4a38-8938-dfa9f55068b3">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8e:fca5" ipVersion="6"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <entry name="serial">25c0d908-6200-4e9e-8914-ed531abe14bf</entry>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <entry name="uuid">25c0d908-6200-4e9e-8914-ed531abe14bf</entry>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/25c0d908-6200-4e9e-8914-ed531abe14bf_disk">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:8e:fc:a5"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <target dev="tap25dc7747-60"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/console.log" append="off"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:17:36 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:17:36 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:17:36 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:17:36 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.556 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Preparing to wait for external event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.556 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.557 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.557 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.558 2 DEBUG nova.virt.libvirt.vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-637088263',display_name='tempest-TestGettingAddress-server-637088263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-637088263',id=111,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-249bfkz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:30Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=25c0d908-6200-4e9e-8914-ed531abe14bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.559 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.560 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.560 2 DEBUG os_vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25dc7747-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.567 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25dc7747-60, col_values=(('external_ids', {'iface-id': '25dc7747-60d0-4a38-8938-dfa9f55068b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:fc:a5', 'vm-uuid': '25c0d908-6200-4e9e-8914-ed531abe14bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:36 np0005481065 NetworkManager[44960]: <info>  [1760174256.5709] manager: (tap25dc7747-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.580 2 INFO os_vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60')#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.639 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.640 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.640 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:8e:fc:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.641 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Using config drive#033[00m
Oct 11 05:17:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.674 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:17:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1308882637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.757 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.765 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:17:36 np0005481065 podman[381472]: 2025-10-11 09:17:36.781599874 +0000 UTC m=+0.086224338 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid)
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.785 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.819 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:17:36 np0005481065 nova_compute[260935]: 2025-10-11 09:17:36.819 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.258 2 DEBUG nova.network.neutron [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updated VIF entry in instance network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.259 2 DEBUG nova.network.neutron [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.275 2 DEBUG oslo_concurrency.lockutils [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.328 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Creating config drive at /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.337 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_1zzlzt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.499 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_1zzlzt" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.541 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.548 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.761 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.763 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deleting local config drive /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config because it was imported into RBD.#033[00m
Oct 11 05:17:37 np0005481065 kernel: tap25dc7747-60: entered promiscuous mode
Oct 11 05:17:37 np0005481065 NetworkManager[44960]: <info>  [1760174257.8427] manager: (tap25dc7747-60): new Tun device (/org/freedesktop/NetworkManager/Devices/447)
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:37Z|01091|binding|INFO|Claiming lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 for this chassis.
Oct 11 05:17:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:37Z|01092|binding|INFO|25dc7747-60d0-4a38-8938-dfa9f55068b3: Claiming fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5
Oct 11 05:17:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.858 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], port_security=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8e:fca5/64', 'neutron:device_id': '25c0d908-6200-4e9e-8914-ed531abe14bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25dc7747-60d0-4a38-8938-dfa9f55068b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:17:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.860 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25dc7747-60d0-4a38-8938-dfa9f55068b3 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 bound to our chassis#033[00m
Oct 11 05:17:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.863 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4ce403-2596-441d-805b-ba15e2f385a1#033[00m
Oct 11 05:17:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:37Z|01093|binding|INFO|Setting lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 ovn-installed in OVS
Oct 11 05:17:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:37Z|01094|binding|INFO|Setting lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 up in Southbound
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d338ee-653d-4a9d-b51b-fca6b7af64fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:37 np0005481065 systemd-machined[215705]: New machine qemu-134-instance-0000006f.
Oct 11 05:17:37 np0005481065 systemd[1]: Started Virtual Machine qemu-134-instance-0000006f.
Oct 11 05:17:37 np0005481065 systemd-udevd[381552]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:17:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.951 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[218c46ba-2485-4cc0-aee8-4e419f94f801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.955 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8805a5-754d-4af0-8ac0-2bb4ab54f918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:37 np0005481065 NetworkManager[44960]: <info>  [1760174257.9599] device (tap25dc7747-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:17:37 np0005481065 NetworkManager[44960]: <info>  [1760174257.9613] device (tap25dc7747-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.962 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.963 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:37 np0005481065 nova_compute[260935]: 2025-10-11 09:17:37.983 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.007 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3feaca5d-8bcf-441a-b374-d50fdc26a080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44a25d93-a323-496c-9b2e-8accb14509b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 19417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381562, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.062 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.063 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da76c7fa-3508-4370-824a-4fb9fee9d4a6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605487, 'tstamp': 605487}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381564, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605491, 'tstamp': 605491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381564, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.069 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.072 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4ce403-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.073 2 INFO nova.compute.claims [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4ce403-20, col_values=(('external_ids', {'iface-id': '722d4b4c-2d64-4e8c-b343-4ac25259f23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.334 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439670451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.849 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.857 2 DEBUG nova.compute.provider_tree [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.878 2 DEBUG nova.scheduler.client.report [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.910 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.911 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.914 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174258.911845, 25c0d908-6200-4e9e-8914-ed531abe14bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.915 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Started (Lifecycle Event)#033[00m
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.962747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174258962883, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 256, "total_data_size": 779644, "memory_usage": 792776, "flush_reason": "Manual Compaction"}
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.968 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174258973276, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 772323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47134, "largest_seqno": 47815, "table_properties": {"data_size": 768740, "index_size": 1427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8114, "raw_average_key_size": 18, "raw_value_size": 761500, "raw_average_value_size": 1775, "num_data_blocks": 63, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174209, "oldest_key_time": 1760174209, "file_creation_time": 1760174258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 10559 microseconds, and 6168 cpu microseconds.
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.974 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174258.914773, 25c0d908-6200-4e9e-8914-ed531abe14bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.974 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.973323) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 772323 bytes OK
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.973349) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975206) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975220) EVENT_LOG_v1 {"time_micros": 1760174258975216, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975242) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 776030, prev total WAL file size 776030, number of live WAL files 2.
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(754KB)], [107(10162KB)]
Oct 11 05:17:38 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174258975924, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11178451, "oldest_snapshot_seqno": -1}
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.986 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.986 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:17:38 np0005481065 nova_compute[260935]: 2025-10-11 09:17:38.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.002 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.007 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6860 keys, 11055345 bytes, temperature: kUnknown
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174259021343, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 11055345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11006807, "index_size": 30314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178275, "raw_average_key_size": 25, "raw_value_size": 10881192, "raw_average_value_size": 1586, "num_data_blocks": 1195, "num_entries": 6860, "num_filter_entries": 6860, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.021583) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 11055345 bytes
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.022675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.7 rd, 243.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.9 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(28.8) write-amplify(14.3) OK, records in: 7384, records dropped: 524 output_compression: NoCompression
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.022690) EVENT_LOG_v1 {"time_micros": 1760174259022683, "job": 64, "event": "compaction_finished", "compaction_time_micros": 45493, "compaction_time_cpu_micros": 24127, "output_level": 6, "num_output_files": 1, "total_output_size": 11055345, "num_input_records": 7384, "num_output_records": 6860, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174259022934, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174259024702, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:17:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.036 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.039 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.135 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.136 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.136 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Creating image(s)
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.161 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.193 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.216 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.221 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.284 2 DEBUG nova.policy [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.329 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.330 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.330 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.331 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.356 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.362 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.651 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.737 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.850 2 DEBUG nova.objects.instance [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.873 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.873 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Ensure instance console log exists: /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.874 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.874 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:17:39 np0005481065 nova_compute[260935]: 2025-10-11 09:17:39.875 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.582 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully created port: 2284d8f8-63ae-4cf4-b954-c08472abd3ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:17:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.977 2 DEBUG nova.compute.manager [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.978 2 DEBUG oslo_concurrency.lockutils [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.978 2 DEBUG oslo_concurrency.lockutils [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.979 2 DEBUG oslo_concurrency.lockutils [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.980 2 DEBUG nova.compute.manager [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Processing event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.981 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.986 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174260.9860694, 25c0d908-6200-4e9e-8914-ed531abe14bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.987 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Resumed (Lifecycle Event)
Oct 11 05:17:40 np0005481065 nova_compute[260935]: 2025-10-11 09:17:40.992 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.000 2 INFO nova.virt.libvirt.driver [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance spawned successfully.
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.001 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.029 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.038 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.044 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.044 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.045 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.046 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.047 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.048 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.109 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.163 2 INFO nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 10.56 seconds to spawn the instance on the hypervisor.
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.164 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.238 2 INFO nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 11.72 seconds to build instance.
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.262 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.434 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully updated port: 2284d8f8-63ae-4cf4-b954-c08472abd3ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.449 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.449 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.450 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:17:41 np0005481065 nova_compute[260935]: 2025-10-11 09:17:41.667 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:17:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 485 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.1 MiB/s wr, 96 op/s
Oct 11 05:17:42 np0005481065 podman[381797]: 2025-10-11 09:17:42.816786604 +0000 UTC m=+0.108253268 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 05:17:42 np0005481065 podman[381798]: 2025-10-11 09:17:42.855386412 +0000 UTC m=+0.145486698 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 11 05:17:42 np0005481065 nova_compute[260935]: 2025-10-11 09:17:42.976 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.009 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.010 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance network_info: |[{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.014 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start _get_guest_xml network_info=[{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.020 2 WARNING nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.027 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.028 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.031 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.032 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.032 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.032 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.033 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.033 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.033 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.035 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.035 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.035 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.038 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:17:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643253741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.538 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.571 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.576 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.634 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.635 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.635 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.636 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.636 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] No waiting events found dispatching network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.636 2 WARNING nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received unexpected event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.637 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.637 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.638 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.638 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:43 np0005481065 nova_compute[260935]: 2025-10-11 09:17:43.638 2 DEBUG nova.network.neutron [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:17:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:17:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99311287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.064 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.067 2 DEBUG nova.virt.libvirt.vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:39Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.068 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.069 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.072 2 DEBUG nova.objects.instance [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.095 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <name>instance-00000070</name>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:17:43</nova:creationTime>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <entry name="serial">0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <entry name="uuid">0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c9:58:d3"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <target dev="tap2284d8f8-63"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log" append="off"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:17:44 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:17:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:17:44 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:17:44 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.102 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Preparing to wait for external event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.102 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.102 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.103 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.103 2 DEBUG nova.virt.libvirt.vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:39Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.103 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.104 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.104 2 DEBUG os_vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2284d8f8-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2284d8f8-63, col_values=(('external_ids', {'iface-id': '2284d8f8-63ae-4cf4-b954-c08472abd3ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:58:d3', 'vm-uuid': '0c16a8df-379f-45ee-b8a2-930ab997e47b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:44 np0005481065 NetworkManager[44960]: <info>  [1760174264.1126] manager: (tap2284d8f8-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.119 2 INFO os_vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63')#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.179 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.179 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.179 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:c9:58:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.180 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Using config drive#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.201 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.591 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Creating config drive at /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.600 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j14a8af execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 116 op/s
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.772 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j14a8af" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.814 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.817 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.951 2 DEBUG nova.network.neutron [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.952 2 DEBUG nova.network.neutron [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:44 np0005481065 nova_compute[260935]: 2025-10-11 09:17:44.976 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.011 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.012 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deleting local config drive /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config because it was imported into RBD.#033[00m
Oct 11 05:17:45 np0005481065 kernel: tap2284d8f8-63: entered promiscuous mode
Oct 11 05:17:45 np0005481065 NetworkManager[44960]: <info>  [1760174265.0571] manager: (tap2284d8f8-63): new Tun device (/org/freedesktop/NetworkManager/Devices/449)
Oct 11 05:17:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:45Z|01095|binding|INFO|Claiming lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed for this chassis.
Oct 11 05:17:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:45Z|01096|binding|INFO|2284d8f8-63ae-4cf4-b954-c08472abd3ed: Claiming fa:16:3e:c9:58:d3 10.100.0.14
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.072 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:58:d3 10.100.0.14'], port_security=['fa:16:3e:c9:58:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f472aff-4044-47cd-b539-ffd0a15c2851', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19e2c24e-234b-4dae-9978-109acb79adf0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83048660-eea0-4997-8f0a-503095730c3f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2284d8f8-63ae-4cf4-b954-c08472abd3ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.073 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2284d8f8-63ae-4cf4-b954-c08472abd3ed in datapath 3f472aff-4044-47cd-b539-ffd0a15c2851 bound to our chassis#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.075 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f472aff-4044-47cd-b539-ffd0a15c2851#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.093 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abf070f0-3729-4078-bbe1-31bdb73ff364]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.094 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f472aff-41 in ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:17:45 np0005481065 systemd-machined[215705]: New machine qemu-135-instance-00000070.
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.098 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f472aff-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.098 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5643077-2188-4ec6-b411-81de4315f3e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.099 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf03568-91fb-4493-87bb-f5c8f42e5ba8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:45Z|01097|binding|INFO|Setting lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed ovn-installed in OVS
Oct 11 05:17:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:45Z|01098|binding|INFO|Setting lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed up in Southbound
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[51270ea0-0b54-470f-ab41-d4450042b9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:45 np0005481065 systemd[1]: Started Virtual Machine qemu-135-instance-00000070.
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:45 np0005481065 systemd-udevd[381980]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:17:45 np0005481065 NetworkManager[44960]: <info>  [1760174265.1377] device (tap2284d8f8-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:17:45 np0005481065 NetworkManager[44960]: <info>  [1760174265.1385] device (tap2284d8f8-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.156 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f674e60-5782-4ee2-bffc-67c055de3844]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.195 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c56fc655-4179-4ad3-bcfd-18999ae98886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.203 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c163da6-154c-4e5d-aae7-b7e4d09bc9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 systemd-udevd[381983]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:17:45 np0005481065 NetworkManager[44960]: <info>  [1760174265.2039] manager: (tap3f472aff-40): new Veth device (/org/freedesktop/NetworkManager/Devices/450)
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.227 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea70187-2079-4644-b80b-6029ad835dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.232 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[95d37ba9-e797-4842-a210-ff83561f6d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 NetworkManager[44960]: <info>  [1760174265.2579] device (tap3f472aff-40): carrier: link connected
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.269 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc32314-e130-43d7-ac79-fa18b55ede57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.293 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a230fcc4-78ef-4923-9501-ceaf115a7484]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f472aff-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:0e:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610996, 'reachable_time': 15084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382011, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.315 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb5bb51-2e18-4c4a-a2f3-000065e356b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:ec0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610996, 'tstamp': 610996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382012, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[054451a7-38d8-49d0-b4b5-11da65a4c8e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f472aff-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:0e:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610996, 'reachable_time': 15084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382013, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.391 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b8e300-1dba-4f54-952e-03a39f907a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.465 2 DEBUG nova.compute.manager [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.465 2 DEBUG oslo_concurrency.lockutils [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.466 2 DEBUG oslo_concurrency.lockutils [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.466 2 DEBUG oslo_concurrency.lockutils [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.466 2 DEBUG nova.compute.manager [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Processing event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.476 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0e301a-df5a-4951-badb-4bbb3a8c7617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f472aff-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f472aff-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:45 np0005481065 kernel: tap3f472aff-40: entered promiscuous mode
Oct 11 05:17:45 np0005481065 NetworkManager[44960]: <info>  [1760174265.4807] manager: (tap3f472aff-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.483 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f472aff-40, col_values=(('external_ids', {'iface-id': '117fa814-7b44-4449-adf6-8726d78cffe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.489 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f472aff-4044-47cd-b539-ffd0a15c2851.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f472aff-4044-47cd-b539-ffd0a15c2851.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.490 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[46ac8abd-2957-491b-8d5d-3855ee8fd40e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.491 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-3f472aff-4044-47cd-b539-ffd0a15c2851
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/3f472aff-4044-47cd-b539-ffd0a15c2851.pid.haproxy
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 3f472aff-4044-47cd-b539-ffd0a15c2851
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:17:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:45Z|01099|binding|INFO|Releasing lport 117fa814-7b44-4449-adf6-8726d78cffe9 from this chassis (sb_readonly=0)
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.491 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'env', 'PROCESS_TAG=haproxy-3f472aff-4044-47cd-b539-ffd0a15c2851', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f472aff-4044-47cd-b539-ffd0a15c2851.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.874 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174265.8736994, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.876 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Started (Lifecycle Event)#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.879 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.883 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.887 2 INFO nova.virt.libvirt.driver [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance spawned successfully.#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.888 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.914 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:45 np0005481065 podman[382085]: 2025-10-11 09:17:45.917992092 +0000 UTC m=+0.071681555 container create 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.932 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.933 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.934 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.935 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.936 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.937 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.949 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.950 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174265.8739605, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.950 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:17:45 np0005481065 systemd[1]: Started libpod-conmon-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390.scope.
Oct 11 05:17:45 np0005481065 podman[382085]: 2025-10-11 09:17:45.874868748 +0000 UTC m=+0.028558251 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.988 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.993 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174265.8822107, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:17:45 np0005481065 nova_compute[260935]: 2025-10-11 09:17:45.994 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:17:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:17:46 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7493b7450df163711f70cb19b9368d1526437ef35376e11493a1c6025bea22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:17:46 np0005481065 podman[382085]: 2025-10-11 09:17:46.028258104 +0000 UTC m=+0.181947677 container init 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.032 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:46 np0005481065 podman[382085]: 2025-10-11 09:17:46.035207006 +0000 UTC m=+0.188896469 container start 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.038 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.050 2 INFO nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 6.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.051 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.067 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:17:46 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : New worker (382106) forked
Oct 11 05:17:46 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : Loading success.
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.157 2 INFO nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 8.12 seconds to build instance.#033[00m
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.186 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:46 np0005481065 nova_compute[260935]: 2025-10-11 09:17:46.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:17:47 np0005481065 nova_compute[260935]: 2025-10-11 09:17:47.594 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:47 np0005481065 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing instance network info cache due to event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:17:47 np0005481065 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:47 np0005481065 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:47 np0005481065 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG nova.network.neutron [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:17:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct 11 05:17:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.584 2 DEBUG nova.network.neutron [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updated VIF entry in instance network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.584 2 DEBUG nova.network.neutron [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.603 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.604 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.604 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.605 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.605 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.605 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:17:49 np0005481065 nova_compute[260935]: 2025-10-11 09:17:49.606 2 WARNING nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed for instance with vm_state active and task_state None.#033[00m
Oct 11 05:17:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Oct 11 05:17:50 np0005481065 nova_compute[260935]: 2025-10-11 09:17:50.948 2 DEBUG nova.compute.manager [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:17:50 np0005481065 nova_compute[260935]: 2025-10-11 09:17:50.949 2 DEBUG nova.compute.manager [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:17:50 np0005481065 nova_compute[260935]: 2025-10-11 09:17:50.949 2 DEBUG oslo_concurrency.lockutils [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:17:50 np0005481065 nova_compute[260935]: 2025-10-11 09:17:50.949 2 DEBUG oslo_concurrency.lockutils [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:17:50 np0005481065 nova_compute[260935]: 2025-10-11 09:17:50.950 2 DEBUG nova.network.neutron [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:17:51 np0005481065 nova_compute[260935]: 2025-10-11 09:17:51.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:52 np0005481065 nova_compute[260935]: 2025-10-11 09:17:52.345 2 DEBUG nova.network.neutron [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:17:52 np0005481065 nova_compute[260935]: 2025-10-11 09:17:52.346 2 DEBUG nova.network.neutron [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:17:52 np0005481065 nova_compute[260935]: 2025-10-11 09:17:52.370 2 DEBUG oslo_concurrency.lockutils [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:17:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:52Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:fc:a5 10.100.0.3
Oct 11 05:17:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:52Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:fc:a5 10.100.0.3
Oct 11 05:17:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 519 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 198 op/s
Oct 11 05:17:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:54 np0005481065 nova_compute[260935]: 2025-10-11 09:17:54.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 521 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 151 op/s
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:17:54
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 11 05:17:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:17:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:17:56 np0005481065 nova_compute[260935]: 2025-10-11 09:17:56.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:17:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 521 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 05:17:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:57Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:58:d3 10.100.0.14
Oct 11 05:17:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:17:57Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:58:d3 10.100.0.14
Oct 11 05:17:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 554 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 11 05:17:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:17:59 np0005481065 nova_compute[260935]: 2025-10-11 09:17:59.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 554 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.2 MiB/s wr, 110 op/s
Oct 11 05:18:01 np0005481065 nova_compute[260935]: 2025-10-11 09:18:01.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 797 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 11 05:18:02 np0005481065 podman[382120]: 2025-10-11 09:18:02.790533427 +0000 UTC m=+0.089520719 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 11 05:18:03 np0005481065 nova_compute[260935]: 2025-10-11 09:18:03.852 2 INFO nova.compute.manager [None req-5fdfb1e5-c9d9-4152-aefb-49c027715127 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Get console output#033[00m
Oct 11 05:18:03 np0005481065 nova_compute[260935]: 2025-10-11 09:18:03.860 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:18:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.438 2 DEBUG nova.compute.manager [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.439 2 DEBUG nova.compute.manager [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing instance network info cache due to event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.439 2 DEBUG oslo_concurrency.lockutils [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.440 2 DEBUG oslo_concurrency.lockutils [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.440 2 DEBUG nova.network.neutron [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.488 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.489 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.490 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.490 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.490 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.492 2 INFO nova.compute.manager [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Terminating instance#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.493 2 DEBUG nova.compute.manager [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:18:04 np0005481065 kernel: tap25dc7747-60 (unregistering): left promiscuous mode
Oct 11 05:18:04 np0005481065 NetworkManager[44960]: <info>  [1760174284.5709] device (tap25dc7747-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:18:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:04Z|01100|binding|INFO|Releasing lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 from this chassis (sb_readonly=0)
Oct 11 05:18:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:04Z|01101|binding|INFO|Setting lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 down in Southbound
Oct 11 05:18:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:04Z|01102|binding|INFO|Removing iface tap25dc7747-60 ovn-installed in OVS
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 2.5 MiB/s wr, 105 op/s
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.659 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], port_security=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8e:fca5/64', 'neutron:device_id': '25c0d908-6200-4e9e-8914-ed531abe14bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25dc7747-60d0-4a38-8938-dfa9f55068b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.667 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25dc7747-60d0-4a38-8938-dfa9f55068b3 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 unbound from our chassis#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.670 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4ce403-2596-441d-805b-ba15e2f385a1#033[00m
Oct 11 05:18:04 np0005481065 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 11 05:18:04 np0005481065 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006f.scope: Consumed 13.062s CPU time.
Oct 11 05:18:04 np0005481065 systemd-machined[215705]: Machine qemu-134-instance-0000006f terminated.
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.697 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d98dd0da-13b6-4e36-8224-25dcea8cc4ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.747 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e089eb-16bb-4848-a275-511f1387df4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.748 2 INFO nova.virt.libvirt.driver [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance destroyed successfully.#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.749 2 DEBUG nova.objects.instance [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 25c0d908-6200-4e9e-8914-ed531abe14bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a922da3d-34b1-4e0c-874b-524c6d65218d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.768 2 DEBUG nova.virt.libvirt.vif [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-637088263',display_name='tempest-TestGettingAddress-server-637088263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-637088263',id=111,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-249bfkz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:41Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=25c0d908-6200-4e9e-8914-ed531abe14bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.769 2 DEBUG nova.network.os_vif_util [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.770 2 DEBUG nova.network.os_vif_util [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.771 2 DEBUG os_vif [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25dc7747-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.781 2 INFO os_vif [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60')#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.799 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18700001-b054-4283-bdd1-55b9a87c5673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.833 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f90e31f1-df8c-47d5-8f54-6f55ff702010]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 32877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382176, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[78b597dc-d76b-445d-8041-2f73869981d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605487, 'tstamp': 605487}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382180, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605491, 'tstamp': 605491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382180, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.873 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.878 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4ce403-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:04 np0005481065 nova_compute[260935]: 2025-10-11 09:18:04.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.878 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.879 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4ce403-20, col_values=(('external_ids', {'iface-id': '722d4b4c-2d64-4e8c-b343-4ac25259f23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.880 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0049026528008400076 of space, bias 1.0, pg target 1.4707958402520023 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:18:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:18:05 np0005481065 nova_compute[260935]: 2025-10-11 09:18:05.242 2 INFO nova.virt.libvirt.driver [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deleting instance files /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf_del#033[00m
Oct 11 05:18:05 np0005481065 nova_compute[260935]: 2025-10-11 09:18:05.243 2 INFO nova.virt.libvirt.driver [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deletion of /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf_del complete#033[00m
Oct 11 05:18:05 np0005481065 nova_compute[260935]: 2025-10-11 09:18:05.301 2 INFO nova.compute.manager [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:18:05 np0005481065 nova_compute[260935]: 2025-10-11 09:18:05.302 2 DEBUG oslo.service.loopingcall [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:18:05 np0005481065 nova_compute[260935]: 2025-10-11 09:18:05.302 2 DEBUG nova.compute.manager [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:18:05 np0005481065 nova_compute[260935]: 2025-10-11 09:18:05.302 2 DEBUG nova.network.neutron [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:18:06 np0005481065 nova_compute[260935]: 2025-10-11 09:18:06.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.237 2 DEBUG nova.network.neutron [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.278 2 INFO nova.compute.manager [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 1.98 seconds to deallocate network for instance.#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.311 2 DEBUG nova.compute.manager [req-35afe52e-f59e-4fd6-b887-1e801a137168 req-e70de97c-6723-4bbe-8c4a-6ec0106bd353 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-deleted-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.360 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.361 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.396 2 DEBUG nova.compute.manager [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.397 2 DEBUG oslo_concurrency.lockutils [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.398 2 DEBUG oslo_concurrency.lockutils [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.398 2 DEBUG oslo_concurrency.lockutils [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.399 2 DEBUG nova.compute.manager [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] No waiting events found dispatching network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.399 2 WARNING nova.compute.manager [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received unexpected event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.556 2 DEBUG oslo_concurrency.processutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:07 np0005481065 podman[382183]: 2025-10-11 09:18:07.791038935 +0000 UTC m=+0.078277018 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.866 2 DEBUG nova.network.neutron [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updated VIF entry in instance network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.867 2 DEBUG nova.network.neutron [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:07 np0005481065 nova_compute[260935]: 2025-10-11 09:18:07.889 2 DEBUG oslo_concurrency.lockutils [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119341820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.048 2 DEBUG oslo_concurrency.processutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.055 2 DEBUG nova.compute.provider_tree [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.083 2 DEBUG nova.scheduler.client.report [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.106 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.155 2 INFO nova.scheduler.client.report [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 25c0d908-6200-4e9e-8914-ed531abe14bf#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.259 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.317 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.318 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.319 2 DEBUG nova.objects.instance [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.734 2 DEBUG nova.objects.instance [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:08 np0005481065 nova_compute[260935]: 2025-10-11 09:18:08.754 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:18:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.011 2 DEBUG nova.policy [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.312 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.313 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.314 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.314 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.315 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.316 2 INFO nova.compute.manager [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Terminating instance#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.318 2 DEBUG nova.compute.manager [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:18:09 np0005481065 kernel: tap1f9dc2d8-70 (unregistering): left promiscuous mode
Oct 11 05:18:09 np0005481065 NetworkManager[44960]: <info>  [1760174289.3866] device (tap1f9dc2d8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:18:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:09Z|01103|binding|INFO|Releasing lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 from this chassis (sb_readonly=0)
Oct 11 05:18:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:09Z|01104|binding|INFO|Setting lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 down in Southbound
Oct 11 05:18:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:09Z|01105|binding|INFO|Removing iface tap1f9dc2d8-70 ovn-installed in OVS
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.473 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], port_security=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe4b:3eb6/64', 'neutron:device_id': '187076f6-221b-4a35-a7a8-9ba7c2a546b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.475 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 unbound from our chassis#033[00m
Oct 11 05:18:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.477 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f4ce403-2596-441d-805b-ba15e2f385a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:18:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3383f22f-8e04-407e-87ae-904207914853]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.478 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 namespace which is not needed anymore#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:09 np0005481065 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 11 05:18:09 np0005481065 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006e.scope: Consumed 17.414s CPU time.
Oct 11 05:18:09 np0005481065 systemd-machined[215705]: Machine qemu-133-instance-0000006e terminated.
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.560 2 INFO nova.virt.libvirt.driver [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance destroyed successfully.#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.561 2 DEBUG nova.objects.instance [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 187076f6-221b-4a35-a7a8-9ba7c2a546b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.576 2 DEBUG nova.virt.libvirt.vif [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-354741605',display_name='tempest-TestGettingAddress-server-354741605',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-354741605',id=110,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-f6psl0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:16:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=187076f6-221b-4a35-a7a8-9ba7c2a546b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.577 2 DEBUG nova.network.os_vif_util [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.579 2 DEBUG nova.network.os_vif_util [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.580 2 DEBUG os_vif [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f9dc2d8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:09 np0005481065 nova_compute[260935]: 2025-10-11 09:18:09.594 2 INFO os_vif [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70')#033[00m
Oct 11 05:18:09 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : haproxy version is 2.8.14-c23fe91
Oct 11 05:18:09 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : path to executable is /usr/sbin/haproxy
Oct 11 05:18:09 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [WARNING]  (379915) : Exiting Master process...
Oct 11 05:18:09 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [ALERT]    (379915) : Current worker (379917) exited with code 143 (Terminated)
Oct 11 05:18:09 np0005481065 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [WARNING]  (379915) : All workers exited. Exiting... (0)
Oct 11 05:18:09 np0005481065 systemd[1]: libpod-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a.scope: Deactivated successfully.
Oct 11 05:18:09 np0005481065 podman[382258]: 2025-10-11 09:18:09.684082973 +0000 UTC m=+0.076328034 container died 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:18:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a-userdata-shm.mount: Deactivated successfully.
Oct 11 05:18:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-425c0630029d1aed8583bcc3941c7da79348c9c8a0eadb7928e597fdeee04cbf-merged.mount: Deactivated successfully.
Oct 11 05:18:09 np0005481065 podman[382258]: 2025-10-11 09:18:09.88848667 +0000 UTC m=+0.280731751 container cleanup 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:18:09 np0005481065 systemd[1]: libpod-conmon-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a.scope: Deactivated successfully.
Oct 11 05:18:09 np0005481065 podman[382307]: 2025-10-11 09:18:09.994103414 +0000 UTC m=+0.065687979 container remove 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.007 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[183a8375-9a8f-4bd1-a97e-b0019e4fc920]: (4, ('Sat Oct 11 09:18:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 (3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a)\n3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a\nSat Oct 11 09:18:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 (3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a)\n3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c16ad71-6a14-47b8-bf13-452e312fe1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.012 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:10 np0005481065 kernel: tap2f4ce403-20: left promiscuous mode
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ada9bd3f-0d55-4b59-a86f-39bc81888329]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.073 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83a299c8-55d4-44d9-b231-5dd6354c5050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.074 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[772593f4-0c5c-4fb7-b097-59353cce083a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.103 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9901b9cd-093c-4a2f-b8b4-59c07577e6cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605460, 'reachable_time': 42176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382323, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 systemd[1]: run-netns-ovnmeta\x2d2f4ce403\x2d2596\x2d441d\x2d805b\x2dba15e2f385a1.mount: Deactivated successfully.
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.110 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:18:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.110 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[139902fb-1aae-4841-850e-65c35ef22fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.114 2 DEBUG nova.compute.manager [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.115 2 DEBUG nova.compute.manager [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing instance network info cache due to event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.116 2 DEBUG oslo_concurrency.lockutils [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.116 2 DEBUG oslo_concurrency.lockutils [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.117 2 DEBUG nova.network.neutron [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.141 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully created port: 442b50ff-f920-42c8-b400-ec09aade7cd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.254 2 INFO nova.virt.libvirt.driver [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deleting instance files /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5_del#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.256 2 INFO nova.virt.libvirt.driver [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deletion of /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5_del complete#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.314 2 INFO nova.compute.manager [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.315 2 DEBUG oslo.service.loopingcall [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.315 2 DEBUG nova.compute.manager [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:18:10 np0005481065 nova_compute[260935]: 2025-10-11 09:18:10.316 2 DEBUG nova.network.neutron [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:18:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 121 KiB/s wr, 53 op/s
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.687 2 DEBUG nova.network.neutron [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.705 2 INFO nova.compute.manager [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.782 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully updated port: 442b50ff-f920-42c8-b400-ec09aade7cd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.785 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.786 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.799 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.799 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.800 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.959 2 DEBUG nova.compute.manager [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.960 2 DEBUG nova.compute.manager [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-442b50ff-f920-42c8-b400-ec09aade7cd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.962 2 DEBUG oslo_concurrency.lockutils [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:11 np0005481065 nova_compute[260935]: 2025-10-11 09:18:11.997 2 DEBUG oslo_concurrency.processutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.218 2 DEBUG nova.compute.manager [req-3ae73e13-5de6-4a21-a578-5a339e8f70f0 req-55798aa1-35fb-445a-bf6a-d7393139033c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-vif-deleted-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987145678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.501 2 DEBUG oslo_concurrency.processutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.507 2 DEBUG nova.compute.provider_tree [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.527 2 DEBUG nova.scheduler.client.report [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.554 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.599 2 INFO nova.scheduler.client.report [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 187076f6-221b-4a35-a7a8-9ba7c2a546b5#033[00m
Oct 11 05:18:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 434 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 124 KiB/s wr, 79 op/s
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.727 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.883 2 DEBUG nova.network.neutron [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updated VIF entry in instance network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.883 2 DEBUG nova.network.neutron [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:12 np0005481065 nova_compute[260935]: 2025-10-11 09:18:12.902 2 DEBUG oslo_concurrency.lockutils [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:13 np0005481065 podman[382346]: 2025-10-11 09:18:13.819382253 +0000 UTC m=+0.102770755 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:18:13 np0005481065 podman[382347]: 2025-10-11 09:18:13.855572832 +0000 UTC m=+0.135606488 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.974496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174293974569, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 250, "total_data_size": 494195, "memory_usage": 503312, "flush_reason": "Manual Compaction"}
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174293980925, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 344518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47816, "largest_seqno": 48331, "table_properties": {"data_size": 341944, "index_size": 610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6959, "raw_average_key_size": 20, "raw_value_size": 336713, "raw_average_value_size": 981, "num_data_blocks": 28, "num_entries": 343, "num_filter_entries": 343, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174259, "oldest_key_time": 1760174259, "file_creation_time": 1760174293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 6498 microseconds, and 3126 cpu microseconds.
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.980997) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 344518 bytes OK
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.981020) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.982885) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.982907) EVENT_LOG_v1 {"time_micros": 1760174293982900, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.982927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 491245, prev total WAL file size 491245, number of live WAL files 2.
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.983562) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(336KB)], [110(10MB)]
Oct 11 05:18:13 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174293983616, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11399863, "oldest_snapshot_seqno": -1}
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6707 keys, 8256527 bytes, temperature: kUnknown
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174294036110, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8256527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8213455, "index_size": 25232, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 175269, "raw_average_key_size": 26, "raw_value_size": 8094881, "raw_average_value_size": 1206, "num_data_blocks": 983, "num_entries": 6707, "num_filter_entries": 6707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.036504) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8256527 bytes
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.038871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.7 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(57.1) write-amplify(24.0) OK, records in: 7203, records dropped: 496 output_compression: NoCompression
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.038903) EVENT_LOG_v1 {"time_micros": 1760174294038889, "job": 66, "event": "compaction_finished", "compaction_time_micros": 52614, "compaction_time_cpu_micros": 39225, "output_level": 6, "num_output_files": 1, "total_output_size": 8256527, "num_input_records": 7203, "num_output_records": 6707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174294039159, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174294044125, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.983475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:18:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.103 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.133 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.134 2 DEBUG oslo_concurrency.lockutils [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.134 2 DEBUG nova.network.neutron [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 442b50ff-f920-42c8-b400-ec09aade7cd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.138 2 DEBUG nova.virt.libvirt.vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.138 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.139 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.139 2 DEBUG os_vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.144 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap442b50ff-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap442b50ff-f9, col_values=(('external_ids', {'iface-id': '442b50ff-f920-42c8-b400-ec09aade7cd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:8d:be', 'vm-uuid': '0c16a8df-379f-45ee-b8a2-930ab997e47b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.1482] manager: (tap442b50ff-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.161 2 INFO os_vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9')#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.163 2 DEBUG nova.virt.libvirt.vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.163 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.165 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.169 2 DEBUG nova.virt.libvirt.guest [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] attach device xml: <interface type="ethernet">
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:4f:8d:be"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <target dev="tap442b50ff-f9"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:18:14 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 05:18:14 np0005481065 kernel: tap442b50ff-f9: entered promiscuous mode
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.1900] manager: (tap442b50ff-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/453)
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:14Z|01106|binding|INFO|Claiming lport 442b50ff-f920-42c8-b400-ec09aade7cd6 for this chassis.
Oct 11 05:18:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:14Z|01107|binding|INFO|442b50ff-f920-42c8-b400-ec09aade7cd6: Claiming fa:16:3e:4f:8d:be 10.100.0.27
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.209 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:8d:be 10.100.0.27'], port_security=['fa:16:3e:4f:8d:be 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea2d228-974d-4a28-901f-89593554c6f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a1ce1a9-36d9-465e-a0cc-5bbbfcd5b496, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=442b50ff-f920-42c8-b400-ec09aade7cd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.210 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 442b50ff-f920-42c8-b400-ec09aade7cd6 in datapath 1ea2d228-974d-4a28-901f-89593554c6f8 bound to our chassis#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.212 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ea2d228-974d-4a28-901f-89593554c6f8#033[00m
Oct 11 05:18:14 np0005481065 systemd-udevd[382398]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caac18a9-1acd-4d4a-a43e-fb0bd78ba30a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.231 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ea2d228-91 in ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.233 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ea2d228-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf608b37-c271-49a1-9c3c-5ef97651d9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.234 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5958c52c-93b3-4ed7-9339-27159c2b3003]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.2443] device (tap442b50ff-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.2459] device (tap442b50ff-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.251 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf2a29e-886c-4d88-bff8-300fc13e0f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed210e9-2358-472f-9a8e-faa8f39e399c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:14Z|01108|binding|INFO|Setting lport 442b50ff-f920-42c8-b400-ec09aade7cd6 ovn-installed in OVS
Oct 11 05:18:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:14Z|01109|binding|INFO|Setting lport 442b50ff-f920-42c8-b400-ec09aade7cd6 up in Southbound
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.291 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.291 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.292 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:c9:58:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.292 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:4f:8d:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.310 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[59e7f0ba-ab6c-4221-ae62-8c014729fab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbda176-d1d7-4c72-b6aa-e89883b1804e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.3182] manager: (tap1ea2d228-90): new Veth device (/org/freedesktop/NetworkManager/Devices/454)
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.320 2 DEBUG nova.virt.libvirt.guest [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:14</nova:creationTime>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    <nova:port uuid="442b50ff-f920-42c8-b400-ec09aade7cd6">
Oct 11 05:18:14 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:14 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:14 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:14 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.352 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.369 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c15fc9-3a7f-45f5-8111-d9c5f3c77edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.374 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa72ba2-5fb2-4c93-b119-e6fd9bfd5f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.4036] device (tap1ea2d228-90): carrier: link connected
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.412 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8f52e6a0-fca5-4f70-a77c-e285837bcd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.432 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4544124-0fff-4bd6-8dea-cf2179563969]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea2d228-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:88:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613910, 'reachable_time': 19837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382424, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a062629d-2c16-472f-ad08-7ac0e69d3f79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:8822'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613910, 'tstamp': 613910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382425, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a383efe5-91b8-481b-8834-29af85ecc16d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea2d228-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:88:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613910, 'reachable_time': 19837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382426, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.522 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02468298-e1dd-4055-8e9f-349b818039dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.609 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10d4c9fa-46c9-4a83-8203-890e64d5ebe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.610 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea2d228-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.611 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.611 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ea2d228-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 kernel: tap1ea2d228-90: entered promiscuous mode
Oct 11 05:18:14 np0005481065 NetworkManager[44960]: <info>  [1760174294.6581] manager: (tap1ea2d228-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.661 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ea2d228-90, col_values=(('external_ids', {'iface-id': '3fc4d2ee-2b2f-40fb-b79c-5db1c1c53bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:14Z|01110|binding|INFO|Releasing lport 3fc4d2ee-2b2f-40fb-b79c-5db1c1c53bb8 from this chassis (sb_readonly=0)
Oct 11 05:18:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 407 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 46 KiB/s wr, 65 op/s
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.696 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ea2d228-974d-4a28-901f-89593554c6f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ea2d228-974d-4a28-901f-89593554c6f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a48519e5-2ca4-43a9-ac3d-b37bc2416a93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.708 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-1ea2d228-974d-4a28-901f-89593554c6f8
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/1ea2d228-974d-4a28-901f-89593554c6f8.pid.haproxy
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 1ea2d228-974d-4a28-901f-89593554c6f8
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:18:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.709 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'env', 'PROCESS_TAG=haproxy-1ea2d228-974d-4a28-901f-89593554c6f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ea2d228-974d-4a28-901f-89593554c6f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.812 2 DEBUG nova.compute.manager [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.812 2 DEBUG oslo_concurrency.lockutils [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.813 2 DEBUG oslo_concurrency.lockutils [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.813 2 DEBUG oslo_concurrency.lockutils [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.814 2 DEBUG nova.compute.manager [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:18:14 np0005481065 nova_compute[260935]: 2025-10-11 09:18:14.814 2 WARNING nova.compute.manager [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:18:15 np0005481065 podman[382459]: 2025-10-11 09:18:15.203738793 +0000 UTC m=+0.084002581 container create 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:18:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:15.216 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:15.218 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:15 np0005481065 podman[382459]: 2025-10-11 09:18:15.163965412 +0000 UTC m=+0.044229260 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:18:15 np0005481065 systemd[1]: Started libpod-conmon-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e.scope.
Oct 11 05:18:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4a08369183d1c19d0437930f9c9a68155223cbc285edf48d39b1d8e5c0ceb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:15 np0005481065 podman[382459]: 2025-10-11 09:18:15.330398697 +0000 UTC m=+0.210662565 container init 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:18:15 np0005481065 podman[382459]: 2025-10-11 09:18:15.340702221 +0000 UTC m=+0.220966009 container start 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:18:15 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : New worker (382481) forked
Oct 11 05:18:15 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : Loading success.
Oct 11 05:18:15 np0005481065 nova_compute[260935]: 2025-10-11 09:18:15.521 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:15 np0005481065 nova_compute[260935]: 2025-10-11 09:18:15.702 2 DEBUG nova.network.neutron [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 442b50ff-f920-42c8-b400-ec09aade7cd6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:18:15 np0005481065 nova_compute[260935]: 2025-10-11 09:18:15.703 2 DEBUG nova.network.neutron [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:15 np0005481065 nova_compute[260935]: 2025-10-11 09:18:15.728 2 DEBUG oslo_concurrency.lockutils [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:15Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:8d:be 10.100.0.27
Oct 11 05:18:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:15Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:8d:be 10.100.0.27
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.004 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-442b50ff-f920-42c8-b400-ec09aade7cd6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.004 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-442b50ff-f920-42c8-b400-ec09aade7cd6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.030 2 DEBUG nova.objects.instance [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.057 2 DEBUG nova.virt.libvirt.vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.057 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.058 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.063 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.067 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.071 2 DEBUG nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Attempting to detach device tap442b50ff-f9 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.072 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:4f:8d:be"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <target dev="tap442b50ff-f9"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.081 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.085 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <name>instance-00000070</name>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:14</nova:creationTime>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:port uuid="442b50ff-f920-42c8-b400-ec09aade7cd6">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='tap2284d8f8-63'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:4f:8d:be'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='tap442b50ff-f9'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='net1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/4'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.085 2 INFO nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap442b50ff-f9 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the persistent domain config.
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.086 2 DEBUG nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] (1/8): Attempting to detach device tap442b50ff-f9 with device alias net1 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.086 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:4f:8d:be"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <target dev="tap442b50ff-f9"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 05:18:16 np0005481065 kernel: tap442b50ff-f9 (unregistering): left promiscuous mode
Oct 11 05:18:16 np0005481065 NetworkManager[44960]: <info>  [1760174296.2062] device (tap442b50ff-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:18:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:16Z|01111|binding|INFO|Releasing lport 442b50ff-f920-42c8-b400-ec09aade7cd6 from this chassis (sb_readonly=0)
Oct 11 05:18:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:16Z|01112|binding|INFO|Setting lport 442b50ff-f920-42c8-b400-ec09aade7cd6 down in Southbound
Oct 11 05:18:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:16Z|01113|binding|INFO|Removing iface tap442b50ff-f9 ovn-installed in OVS
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.225 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760174296.2250836, 0c16a8df-379f-45ee-b8a2-930ab997e47b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.226 2 DEBUG nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Start waiting for the detach event from libvirt for device tap442b50ff-f9 with device alias net1 for instance 0c16a8df-379f-45ee-b8a2-930ab997e47b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.227 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.231 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <name>instance-00000070</name>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:14</nova:creationTime>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:port uuid="442b50ff-f920-42c8-b400-ec09aade7cd6">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target dev='tap2284d8f8-63'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/4'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.231 2 INFO nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap442b50ff-f9 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the live domain config.#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.232 2 DEBUG nova.virt.libvirt.vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.233 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.233 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.234 2 DEBUG os_vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap442b50ff-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.237 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:8d:be 10.100.0.27'], port_security=['fa:16:3e:4f:8d:be 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea2d228-974d-4a28-901f-89593554c6f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a1ce1a9-36d9-465e-a0cc-5bbbfcd5b496, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=442b50ff-f920-42c8-b400-ec09aade7cd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.239 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 442b50ff-f920-42c8-b400-ec09aade7cd6 in datapath 1ea2d228-974d-4a28-901f-89593554c6f8 unbound from our chassis#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.242 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ea2d228-974d-4a28-901f-89593554c6f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.243 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53aa2d22-97da-482f-a089-181b96546bdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 namespace which is not needed anymore#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.257 2 INFO os_vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9')#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.258 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:16</nova:creationTime>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:16 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:16 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:16 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : haproxy version is 2.8.14-c23fe91
Oct 11 05:18:16 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : path to executable is /usr/sbin/haproxy
Oct 11 05:18:16 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [WARNING]  (382479) : Exiting Master process...
Oct 11 05:18:16 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [ALERT]    (382479) : Current worker (382481) exited with code 143 (Terminated)
Oct 11 05:18:16 np0005481065 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [WARNING]  (382479) : All workers exited. Exiting... (0)
Oct 11 05:18:16 np0005481065 systemd[1]: libpod-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e.scope: Deactivated successfully.
Oct 11 05:18:16 np0005481065 podman[382511]: 2025-10-11 09:18:16.472617408 +0000 UTC m=+0.073250915 container died 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:18:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e-userdata-shm.mount: Deactivated successfully.
Oct 11 05:18:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dd4a08369183d1c19d0437930f9c9a68155223cbc285edf48d39b1d8e5c0ceb4-merged.mount: Deactivated successfully.
Oct 11 05:18:16 np0005481065 podman[382511]: 2025-10-11 09:18:16.530438473 +0000 UTC m=+0.131071980 container cleanup 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:18:16 np0005481065 systemd[1]: libpod-conmon-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e.scope: Deactivated successfully.
Oct 11 05:18:16 np0005481065 podman[382542]: 2025-10-11 09:18:16.626197288 +0000 UTC m=+0.061836081 container remove 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.633 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9564ab3-0d49-461b-abf2-06bbfe8792ad]: (4, ('Sat Oct 11 09:18:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 (587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e)\n587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e\nSat Oct 11 09:18:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 (587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e)\n587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.635 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05af4713-6dc5-4dc4-9d5b-7311822649f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.637 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea2d228-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 kernel: tap1ea2d228-90: left promiscuous mode
Oct 11 05:18:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 407 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.675 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4fec08-3fff-4b30-b271-d7872517c579]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.708 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[062eb7dc-62bd-416d-86e7-630fbece3101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.709 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[374abceb-c7e7-4b6f-bc10-8770ec2f87aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.736 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31b6cea8-deeb-4c11-9adf-63edc66c9f35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613900, 'reachable_time': 41736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382557, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.739 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:18:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.739 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[83b3a893-4a70-4b4a-8cb4-db4c484852eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:16 np0005481065 systemd[1]: run-netns-ovnmeta\x2d1ea2d228\x2d974d\x2d4a28\x2d901f\x2d89593554c6f8.mount: Deactivated successfully.
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.937 2 DEBUG nova.compute.manager [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.938 2 DEBUG oslo_concurrency.lockutils [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.939 2 DEBUG oslo_concurrency.lockutils [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.940 2 DEBUG oslo_concurrency.lockutils [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.940 2 DEBUG nova.compute.manager [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:18:16 np0005481065 nova_compute[260935]: 2025-10-11 09:18:16.941 2 WARNING nova.compute.manager [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:18:17 np0005481065 nova_compute[260935]: 2025-10-11 09:18:17.372 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:17 np0005481065 nova_compute[260935]: 2025-10-11 09:18:17.374 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:17 np0005481065 nova_compute[260935]: 2025-10-11 09:18:17.374 2 DEBUG nova.network.neutron [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:18:17 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:17Z|01114|binding|INFO|Releasing lport 117fa814-7b44-4449-adf6-8726d78cffe9 from this chassis (sb_readonly=0)
Oct 11 05:18:17 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:17Z|01115|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:18:17 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:17Z|01116|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:18:18 np0005481065 nova_compute[260935]: 2025-10-11 09:18:18.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 25 KiB/s wr, 58 op/s
Oct 11 05:18:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.040 2 DEBUG nova.compute.manager [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-deleted-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.041 2 INFO nova.compute.manager [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Neutron deleted interface 442b50ff-f920-42c8-b400-ec09aade7cd6; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.041 2 DEBUG nova.network.neutron [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.071 2 DEBUG nova.objects.instance [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.098 2 DEBUG nova.objects.instance [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.123 2 DEBUG nova.virt.libvirt.vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.123 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.124 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.129 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.134 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <name>instance-00000070</name>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:16</nova:creationTime>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target dev='tap2284d8f8-63'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/4'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.135 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.140 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> not found in domain: <domain type='kvm' id='135'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <name>instance-00000070</name>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:16</nova:creationTime>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target dev='tap2284d8f8-63'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/4'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <source path='/dev/pts/4'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.141 2 WARNING nova.virt.libvirt.driver [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Detaching interface fa:16:3e:4f:8d:be failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap442b50ff-f9' not found.
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.143 2 DEBUG nova.virt.libvirt.vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.143 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.145 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.146 2 DEBUG os_vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap442b50ff-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.155 2 INFO os_vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9')#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.156 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:18:19</nova:creationTime>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 05:18:19 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:18:19 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:18:19 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.249 2 INFO nova.network.neutron [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Port 442b50ff-f920-42c8-b400-ec09aade7cd6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.250 2 DEBUG nova.network.neutron [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.272 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.295 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-442b50ff-f920-42c8-b400-ec09aade7cd6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.746 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174284.745655, 25c0d908-6200-4e9e-8914-ed531abe14bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.747 2 INFO nova.compute.manager [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:18:19 np0005481065 nova_compute[260935]: 2025-10-11 09:18:19.783 2 DEBUG nova.compute.manager [None req-76a6863e-dfa6-4dc8-85ce-b7fdcf9e2c69 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.589 2 DEBUG nova.compute.manager [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.589 2 DEBUG nova.compute.manager [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.590 2 DEBUG oslo_concurrency.lockutils [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.590 2 DEBUG oslo_concurrency.lockutils [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.591 2 DEBUG nova.network.neutron [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.617 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.617 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.618 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.618 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.618 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.620 2 INFO nova.compute.manager [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Terminating instance#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.621 2 DEBUG nova.compute.manager [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:18:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 11 05:18:20 np0005481065 kernel: tap2284d8f8-63 (unregistering): left promiscuous mode
Oct 11 05:18:20 np0005481065 NetworkManager[44960]: <info>  [1760174300.6793] device (tap2284d8f8-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:18:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:20Z|01117|binding|INFO|Releasing lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed from this chassis (sb_readonly=0)
Oct 11 05:18:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:20Z|01118|binding|INFO|Setting lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed down in Southbound
Oct 11 05:18:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:20Z|01119|binding|INFO|Removing iface tap2284d8f8-63 ovn-installed in OVS
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.760 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:58:d3 10.100.0.14'], port_security=['fa:16:3e:c9:58:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f472aff-4044-47cd-b539-ffd0a15c2851', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19e2c24e-234b-4dae-9978-109acb79adf0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83048660-eea0-4997-8f0a-503095730c3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2284d8f8-63ae-4cf4-b954-c08472abd3ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.762 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2284d8f8-63ae-4cf4-b954-c08472abd3ed in datapath 3f472aff-4044-47cd-b539-ffd0a15c2851 unbound from our chassis#033[00m
Oct 11 05:18:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.764 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f472aff-4044-47cd-b539-ffd0a15c2851, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:18:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.765 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[069888c8-8433-4788-87e4-f77130fec871]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.766 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 namespace which is not needed anymore#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:20 np0005481065 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 11 05:18:20 np0005481065 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000070.scope: Consumed 14.567s CPU time.
Oct 11 05:18:20 np0005481065 systemd-machined[215705]: Machine qemu-135-instance-00000070 terminated.
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.865 2 INFO nova.virt.libvirt.driver [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance destroyed successfully.#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.866 2 DEBUG nova.objects.instance [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.883 2 DEBUG nova.virt.libvirt.vif [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.884 2 DEBUG nova.network.os_vif_util [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.884 2 DEBUG nova.network.os_vif_util [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.885 2 DEBUG os_vif [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2284d8f8-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:20 np0005481065 nova_compute[260935]: 2025-10-11 09:18:20.895 2 INFO os_vif [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63')#033[00m
Oct 11 05:18:20 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : haproxy version is 2.8.14-c23fe91
Oct 11 05:18:20 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : path to executable is /usr/sbin/haproxy
Oct 11 05:18:20 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [WARNING]  (382104) : Exiting Master process...
Oct 11 05:18:20 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [ALERT]    (382104) : Current worker (382106) exited with code 143 (Terminated)
Oct 11 05:18:20 np0005481065 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [WARNING]  (382104) : All workers exited. Exiting... (0)
Oct 11 05:18:20 np0005481065 systemd[1]: libpod-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390.scope: Deactivated successfully.
Oct 11 05:18:20 np0005481065 podman[382592]: 2025-10-11 09:18:20.984615652 +0000 UTC m=+0.085981857 container died 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:18:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390-userdata-shm.mount: Deactivated successfully.
Oct 11 05:18:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2d7493b7450df163711f70cb19b9368d1526437ef35376e11493a1c6025bea22-merged.mount: Deactivated successfully.
Oct 11 05:18:21 np0005481065 podman[382592]: 2025-10-11 09:18:21.037713832 +0000 UTC m=+0.139080027 container cleanup 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:18:21 np0005481065 systemd[1]: libpod-conmon-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390.scope: Deactivated successfully.
Oct 11 05:18:21 np0005481065 podman[382641]: 2025-10-11 09:18:21.158472399 +0000 UTC m=+0.087849791 container remove 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.172 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ab189a-1645-48b4-9bfb-340a3381ce35]: (4, ('Sat Oct 11 09:18:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 (1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390)\n1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390\nSat Oct 11 09:18:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 (1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390)\n1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4763c4-6149-42c0-a3cc-b587b9605893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.176 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f472aff-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:21 np0005481065 kernel: tap3f472aff-40: left promiscuous mode
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27f5b310-6bb1-49cc-aa69-c8744432f508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b939e357-a2c2-4847-b2e7-44f0f7c5fb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.243 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76b42fa9-9c4d-4395-b5cd-d4ac9636d864]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fecd17-307d-4147-b71d-5b2724a3a065]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610989, 'reachable_time': 43254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382657, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.266 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:18:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.266 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[21796f90-73d2-4a78-b6ea-aea8c6f70977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:21 np0005481065 systemd[1]: run-netns-ovnmeta\x2d3f472aff\x2d4044\x2d47cd\x2db539\x2dffd0a15c2851.mount: Deactivated successfully.
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.387 2 INFO nova.virt.libvirt.driver [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deleting instance files /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b_del#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.388 2 INFO nova.virt.libvirt.driver [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deletion of /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b_del complete#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.462 2 INFO nova.compute.manager [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.463 2 DEBUG oslo.service.loopingcall [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.463 2 DEBUG nova.compute.manager [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:18:21 np0005481065 nova_compute[260935]: 2025-10-11 09:18:21.464 2 DEBUG nova.network.neutron [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.351 2 DEBUG nova.network.neutron [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.352 2 DEBUG nova.network.neutron [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.384 2 DEBUG oslo_concurrency.lockutils [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.471 2 DEBUG nova.network.neutron [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.501 2 INFO nova.compute.manager [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 1.04 seconds to deallocate network for instance.#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.550 2 DEBUG nova.compute.manager [req-8bc39084-ece2-45f1-b149-ac4b50b13b08 req-3222e011-1176-45d7-892e-66a712a90587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-deleted-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.559 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.560 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 349 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 52 op/s
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.679 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-unplugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-unplugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.681 2 WARNING nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-unplugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.681 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.681 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.681 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.682 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.682 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.682 2 WARNING nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:18:22 np0005481065 nova_compute[260935]: 2025-10-11 09:18:22.727 2 DEBUG oslo_concurrency.processutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623176638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:23 np0005481065 nova_compute[260935]: 2025-10-11 09:18:23.169 2 DEBUG oslo_concurrency.processutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:23 np0005481065 nova_compute[260935]: 2025-10-11 09:18:23.178 2 DEBUG nova.compute.provider_tree [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:18:23 np0005481065 nova_compute[260935]: 2025-10-11 09:18:23.199 2 DEBUG nova.scheduler.client.report [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:18:23 np0005481065 nova_compute[260935]: 2025-10-11 09:18:23.229 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:23 np0005481065 nova_compute[260935]: 2025-10-11 09:18:23.256 2 INFO nova.scheduler.client.report [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 0c16a8df-379f-45ee-b8a2-930ab997e47b#033[00m
Oct 11 05:18:23 np0005481065 nova_compute[260935]: 2025-10-11 09:18:23.324 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:24 np0005481065 nova_compute[260935]: 2025-10-11 09:18:24.557 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174289.5557923, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:18:24 np0005481065 nova_compute[260935]: 2025-10-11 09:18:24.558 2 INFO nova.compute.manager [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:18:24 np0005481065 nova_compute[260935]: 2025-10-11 09:18:24.581 2 DEBUG nova.compute.manager [None req-0c6a74ec-dcf1-41ed-88f5-59104dd262d5 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 9.2 KiB/s wr, 30 op/s
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:18:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:18:25 np0005481065 nova_compute[260935]: 2025-10-11 09:18:25.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:26 np0005481065 nova_compute[260935]: 2025-10-11 09:18:26.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:18:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186847443' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:18:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:18:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186847443' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:18:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Oct 11 05:18:27 np0005481065 nova_compute[260935]: 2025-10-11 09:18:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:27Z|01120|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:18:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:27Z|01121|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:18:27 np0005481065 nova_compute[260935]: 2025-10-11 09:18:27.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:27.782 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:27.783 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:18:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:27.784 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:27Z|01122|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:18:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:27Z|01123|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:18:27 np0005481065 nova_compute[260935]: 2025-10-11 09:18:27.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:18:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c8b1aa77-283d-4bde-bcbf-17a7fb258e6c does not exist
Oct 11 05:18:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e02b600c-3fa0-4a86-bdd1-28555387a6bc does not exist
Oct 11 05:18:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ebfc13ca-0c1e-42a5-9047-46a0aef5605a does not exist
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Oct 11 05:18:28 np0005481065 nova_compute[260935]: 2025-10-11 09:18:28.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:18:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.144315248 +0000 UTC m=+0.073082241 container create 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.114756817 +0000 UTC m=+0.043523900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:18:29 np0005481065 systemd[1]: Started libpod-conmon-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope.
Oct 11 05:18:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.292864374 +0000 UTC m=+0.221631407 container init 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.299462672 +0000 UTC m=+0.228229675 container start 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.303702453 +0000 UTC m=+0.232469456 container attach 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:18:29 np0005481065 modest_keldysh[382969]: 167 167
Oct 11 05:18:29 np0005481065 systemd[1]: libpod-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope: Deactivated successfully.
Oct 11 05:18:29 np0005481065 conmon[382969]: conmon 65df244736213d402273 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope/container/memory.events
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.312167354 +0000 UTC m=+0.240934387 container died 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:18:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-84a1bcea237e3d130ca9f03135ee76398edbc63ec558a302a79eacc28f08660a-merged.mount: Deactivated successfully.
Oct 11 05:18:29 np0005481065 podman[382952]: 2025-10-11 09:18:29.366072478 +0000 UTC m=+0.294839521 container remove 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:29 np0005481065 systemd[1]: libpod-conmon-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope: Deactivated successfully.
Oct 11 05:18:29 np0005481065 podman[382995]: 2025-10-11 09:18:29.623382689 +0000 UTC m=+0.071223147 container create f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 11 05:18:29 np0005481065 systemd[1]: Started libpod-conmon-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope.
Oct 11 05:18:29 np0005481065 podman[382995]: 2025-10-11 09:18:29.594270241 +0000 UTC m=+0.042110749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:18:29 np0005481065 nova_compute[260935]: 2025-10-11 09:18:29.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:29 np0005481065 nova_compute[260935]: 2025-10-11 09:18:29.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:29 np0005481065 podman[382995]: 2025-10-11 09:18:29.738919507 +0000 UTC m=+0.186760005 container init f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 05:18:29 np0005481065 podman[382995]: 2025-10-11 09:18:29.751247227 +0000 UTC m=+0.199087655 container start f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:18:29 np0005481065 podman[382995]: 2025-10-11 09:18:29.754631944 +0000 UTC m=+0.202472472 container attach f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:18:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:18:30 np0005481065 nova_compute[260935]: 2025-10-11 09:18:30.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:30 np0005481065 dreamy_ishizaka[383011]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:18:30 np0005481065 dreamy_ishizaka[383011]: --> relative data size: 1.0
Oct 11 05:18:30 np0005481065 dreamy_ishizaka[383011]: --> All data devices are unavailable
Oct 11 05:18:30 np0005481065 systemd[1]: libpod-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope: Deactivated successfully.
Oct 11 05:18:30 np0005481065 systemd[1]: libpod-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope: Consumed 1.155s CPU time.
Oct 11 05:18:30 np0005481065 podman[382995]: 2025-10-11 09:18:30.974778162 +0000 UTC m=+1.422618660 container died f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:18:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b-merged.mount: Deactivated successfully.
Oct 11 05:18:31 np0005481065 podman[382995]: 2025-10-11 09:18:31.052213156 +0000 UTC m=+1.500053604 container remove f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:18:31 np0005481065 systemd[1]: libpod-conmon-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope: Deactivated successfully.
Oct 11 05:18:31 np0005481065 nova_compute[260935]: 2025-10-11 09:18:31.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:31 np0005481065 podman[383195]: 2025-10-11 09:18:31.958610266 +0000 UTC m=+0.063901520 container create f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:32 np0005481065 systemd[1]: Started libpod-conmon-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope.
Oct 11 05:18:32 np0005481065 podman[383195]: 2025-10-11 09:18:31.932191574 +0000 UTC m=+0.037482878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:18:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:32 np0005481065 podman[383195]: 2025-10-11 09:18:32.083999674 +0000 UTC m=+0.189290958 container init f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:18:32 np0005481065 podman[383195]: 2025-10-11 09:18:32.097274701 +0000 UTC m=+0.202565925 container start f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:32 np0005481065 podman[383195]: 2025-10-11 09:18:32.101645736 +0000 UTC m=+0.206936990 container attach f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:18:32 np0005481065 optimistic_hoover[383211]: 167 167
Oct 11 05:18:32 np0005481065 systemd[1]: libpod-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope: Deactivated successfully.
Oct 11 05:18:32 np0005481065 conmon[383211]: conmon f67302f0dec71d9cfe90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope/container/memory.events
Oct 11 05:18:32 np0005481065 podman[383195]: 2025-10-11 09:18:32.104374913 +0000 UTC m=+0.209666197 container died f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:18:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-524b74e54162358b96dcb2845f5e643925c96004f677c1d8ad027e3ae70ac886-merged.mount: Deactivated successfully.
Oct 11 05:18:32 np0005481065 podman[383195]: 2025-10-11 09:18:32.158341459 +0000 UTC m=+0.263632723 container remove f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:18:32 np0005481065 systemd[1]: libpod-conmon-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope: Deactivated successfully.
Oct 11 05:18:32 np0005481065 podman[383235]: 2025-10-11 09:18:32.42266641 +0000 UTC m=+0.070247180 container create d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:18:32 np0005481065 systemd[1]: Started libpod-conmon-d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b.scope.
Oct 11 05:18:32 np0005481065 podman[383235]: 2025-10-11 09:18:32.393523631 +0000 UTC m=+0.041104461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:18:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:32 np0005481065 podman[383235]: 2025-10-11 09:18:32.530267732 +0000 UTC m=+0.177848542 container init d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:32 np0005481065 podman[383235]: 2025-10-11 09:18:32.544141237 +0000 UTC m=+0.191722027 container start d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:18:32 np0005481065 podman[383235]: 2025-10-11 09:18:32.548249193 +0000 UTC m=+0.195830003 container attach d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:18:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:18:32 np0005481065 nova_compute[260935]: 2025-10-11 09:18:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:32 np0005481065 nova_compute[260935]: 2025-10-11 09:18:32.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:18:32 np0005481065 nova_compute[260935]: 2025-10-11 09:18:32.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:32 np0005481065 nova_compute[260935]: 2025-10-11 09:18:32.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:18:33 np0005481065 musing_kirch[383251]: {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:    "0": [
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:        {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "devices": [
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "/dev/loop3"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            ],
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_name": "ceph_lv0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_size": "21470642176",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "name": "ceph_lv0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "tags": {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cluster_name": "ceph",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.crush_device_class": "",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.encrypted": "0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osd_id": "0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.type": "block",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.vdo": "0"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            },
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "type": "block",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "vg_name": "ceph_vg0"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:        }
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:    ],
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:    "1": [
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:        {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "devices": [
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "/dev/loop4"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            ],
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_name": "ceph_lv1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_size": "21470642176",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "name": "ceph_lv1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "tags": {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cluster_name": "ceph",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.crush_device_class": "",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.encrypted": "0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osd_id": "1",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.type": "block",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.vdo": "0"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            },
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "type": "block",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "vg_name": "ceph_vg1"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:        }
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:    ],
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:    "2": [
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:        {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "devices": [
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "/dev/loop5"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            ],
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_name": "ceph_lv2",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_size": "21470642176",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "name": "ceph_lv2",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "tags": {
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.cluster_name": "ceph",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.crush_device_class": "",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.encrypted": "0",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osd_id": "2",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.type": "block",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:                "ceph.vdo": "0"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            },
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "type": "block",
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:            "vg_name": "ceph_vg2"
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:        }
Oct 11 05:18:33 np0005481065 musing_kirch[383251]:    ]
Oct 11 05:18:33 np0005481065 musing_kirch[383251]: }
Oct 11 05:18:33 np0005481065 systemd[1]: libpod-d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b.scope: Deactivated successfully.
Oct 11 05:18:33 np0005481065 podman[383235]: 2025-10-11 09:18:33.407535984 +0000 UTC m=+1.055116774 container died d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549-merged.mount: Deactivated successfully.
Oct 11 05:18:33 np0005481065 podman[383235]: 2025-10-11 09:18:33.495026394 +0000 UTC m=+1.142607194 container remove d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:33 np0005481065 systemd[1]: libpod-conmon-d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b.scope: Deactivated successfully.
Oct 11 05:18:33 np0005481065 podman[383260]: 2025-10-11 09:18:33.517773721 +0000 UTC m=+0.075094368 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 05:18:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.348455137 +0000 UTC m=+0.072094532 container create 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:18:34 np0005481065 systemd[1]: Started libpod-conmon-5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819.scope.
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.31656375 +0000 UTC m=+0.040203205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:18:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.458315823 +0000 UTC m=+0.181955238 container init 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.466838966 +0000 UTC m=+0.190478371 container start 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.471268592 +0000 UTC m=+0.194908057 container attach 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:18:34 np0005481065 blissful_nash[383448]: 167 167
Oct 11 05:18:34 np0005481065 systemd[1]: libpod-5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819.scope: Deactivated successfully.
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.476164361 +0000 UTC m=+0.199803756 container died 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:18:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-119e11540d14b2a2aa4a9553259a5d4971cbcaa6bc8d91b2b6e5036c364c30d0-merged.mount: Deactivated successfully.
Oct 11 05:18:34 np0005481065 podman[383431]: 2025-10-11 09:18:34.530524168 +0000 UTC m=+0.254163573 container remove 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:18:34 np0005481065 systemd[1]: libpod-conmon-5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819.scope: Deactivated successfully.
Oct 11 05:18:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 05:18:34 np0005481065 nova_compute[260935]: 2025-10-11 09:18:34.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:34 np0005481065 nova_compute[260935]: 2025-10-11 09:18:34.723 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:18:34 np0005481065 podman[383472]: 2025-10-11 09:18:34.748438869 +0000 UTC m=+0.046504225 container create 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:18:34 np0005481065 systemd[1]: Started libpod-conmon-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope.
Oct 11 05:18:34 np0005481065 podman[383472]: 2025-10-11 09:18:34.724031684 +0000 UTC m=+0.022097030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:18:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:34 np0005481065 podman[383472]: 2025-10-11 09:18:34.870272985 +0000 UTC m=+0.168338421 container init 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:18:34 np0005481065 podman[383472]: 2025-10-11 09:18:34.88345037 +0000 UTC m=+0.181515756 container start 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:18:34 np0005481065 podman[383472]: 2025-10-11 09:18:34.888230486 +0000 UTC m=+0.186295872 container attach 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:18:34 np0005481065 nova_compute[260935]: 2025-10-11 09:18:34.941 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:34 np0005481065 nova_compute[260935]: 2025-10-11 09:18:34.942 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:34 np0005481065 nova_compute[260935]: 2025-10-11 09:18:34.942 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:18:35 np0005481065 nova_compute[260935]: 2025-10-11 09:18:35.864 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174300.8629994, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:18:35 np0005481065 nova_compute[260935]: 2025-10-11 09:18:35.864 2 INFO nova.compute.manager [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:18:35 np0005481065 nova_compute[260935]: 2025-10-11 09:18:35.890 2 DEBUG nova.compute.manager [None req-28bd4e1d-5078-4fd8-adfd-73f10a398d97 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:35 np0005481065 nova_compute[260935]: 2025-10-11 09:18:35.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]: {
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "osd_id": 2,
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "type": "bluestore"
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:    },
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "osd_id": 0,
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "type": "bluestore"
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:    },
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "osd_id": 1,
Oct 11 05:18:35 np0005481065 flamboyant_poincare[383489]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:18:36 np0005481065 flamboyant_poincare[383489]:        "type": "bluestore"
Oct 11 05:18:36 np0005481065 flamboyant_poincare[383489]:    }
Oct 11 05:18:36 np0005481065 flamboyant_poincare[383489]: }
Oct 11 05:18:36 np0005481065 systemd[1]: libpod-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope: Deactivated successfully.
Oct 11 05:18:36 np0005481065 podman[383472]: 2025-10-11 09:18:36.052648318 +0000 UTC m=+1.350713674 container died 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:18:36 np0005481065 systemd[1]: libpod-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope: Consumed 1.165s CPU time.
Oct 11 05:18:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60-merged.mount: Deactivated successfully.
Oct 11 05:18:36 np0005481065 podman[383472]: 2025-10-11 09:18:36.120726475 +0000 UTC m=+1.418791831 container remove 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:18:36 np0005481065 systemd[1]: libpod-conmon-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope: Deactivated successfully.
Oct 11 05:18:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:18:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:18:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:18:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:18:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 922bdb0b-234e-4b2c-8527-2140d17f4ade does not exist
Oct 11 05:18:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2eeab56b-9d81-4e00-87f4-56da59487b99 does not exist
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.232 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.265 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.265 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.266 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.266 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.289 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.290 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.290 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.290 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.291 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69984512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.808 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.925 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.927 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.928 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.934 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.935 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.941 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:36 np0005481065 nova_compute[260935]: 2025-10-11 09:18:36.941 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:18:37 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:18:37 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.233 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.235 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2937MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.235 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.236 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.363 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.539 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24222150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:37 np0005481065 nova_compute[260935]: 2025-10-11 09:18:37.994 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:38 np0005481065 nova_compute[260935]: 2025-10-11 09:18:38.000 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:18:38 np0005481065 nova_compute[260935]: 2025-10-11 09:18:38.018 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:18:38 np0005481065 nova_compute[260935]: 2025-10-11 09:18:38.051 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:18:38 np0005481065 nova_compute[260935]: 2025-10-11 09:18:38.051 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:38 np0005481065 podman[383634]: 2025-10-11 09:18:38.817206482 +0000 UTC m=+0.104550246 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:18:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:40 np0005481065 nova_compute[260935]: 2025-10-11 09:18:40.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:41 np0005481065 nova_compute[260935]: 2025-10-11 09:18:41.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:44 np0005481065 podman[383656]: 2025-10-11 09:18:44.805578775 +0000 UTC m=+0.097469864 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:18:44 np0005481065 podman[383657]: 2025-10-11 09:18:44.835806775 +0000 UTC m=+0.127829768 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 05:18:45 np0005481065 nova_compute[260935]: 2025-10-11 09:18:45.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:46 np0005481065 nova_compute[260935]: 2025-10-11 09:18:46.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:47 np0005481065 nova_compute[260935]: 2025-10-11 09:18:47.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:18:47 np0005481065 nova_compute[260935]: 2025-10-11 09:18:47.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:18:47 np0005481065 nova_compute[260935]: 2025-10-11 09:18:47.726 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:18:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.360 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.361 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.387 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.487 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.488 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.500 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.501 2 INFO nova.compute.claims [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:18:49 np0005481065 nova_compute[260935]: 2025-10-11 09:18:49.700 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322561464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.223 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.232 2 DEBUG nova.compute.provider_tree [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.258 2 DEBUG nova.scheduler.client.report [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.280 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.281 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.329 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.330 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.361 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.392 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.423 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.424 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.455 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.572 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.574 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.575 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Creating image(s)#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.606 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.640 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.670 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.674 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.725 2 DEBUG nova.policy [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.757 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.757 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.765 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.766 2 INFO nova.compute.claims [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.773 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.774 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.774 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.775 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.802 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.807 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 69860e17-caac-461a-a4a5-34ca72c0ee09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:50 np0005481065 nova_compute[260935]: 2025-10-11 09:18:50.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.005 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.185 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 69860e17-caac-461a-a4a5-34ca72c0ee09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.284 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.419 2 DEBUG nova.objects.instance [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 69860e17-caac-461a-a4a5-34ca72c0ee09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.443 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.444 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Ensure instance console log exists: /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.445 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.445 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.446 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:18:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131644788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.557 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.565 2 DEBUG nova.compute.provider_tree [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.586 2 DEBUG nova.scheduler.client.report [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.612 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.613 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.669 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.669 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.693 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.714 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.835 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.837 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.838 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Creating image(s)#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.874 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.940 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.971 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:51 np0005481065 nova_compute[260935]: 2025-10-11 09:18:51.975 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.022 2 DEBUG nova.policy [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.069 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.070 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.071 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.071 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.104 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.111 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 564a3027-0f98-40fd-a495-1c13a103ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.172 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Successfully created port: a1864eda-bf8d-42a1-b315-967521604391 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.482 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 564a3027-0f98-40fd-a495-1c13a103ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.551 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.670 2 DEBUG nova.objects.instance [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 564a3027-0f98-40fd-a495-1c13a103ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 374 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.690 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.691 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Ensure instance console log exists: /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.691 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.692 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:52 np0005481065 nova_compute[260935]: 2025-10-11 09:18:52.692 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:53 np0005481065 nova_compute[260935]: 2025-10-11 09:18:53.396 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully created port: dea05614-b04a-4078-a98e-428065514f37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:18:53 np0005481065 nova_compute[260935]: 2025-10-11 09:18:53.400 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Successfully updated port: a1864eda-bf8d-42a1-b315-967521604391 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:18:53 np0005481065 nova_compute[260935]: 2025-10-11 09:18:53.422 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:53 np0005481065 nova_compute[260935]: 2025-10-11 09:18:53.422 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:53 np0005481065 nova_compute[260935]: 2025-10-11 09:18:53.423 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:18:53 np0005481065 nova_compute[260935]: 2025-10-11 09:18:53.964 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:18:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:54 np0005481065 nova_compute[260935]: 2025-10-11 09:18:54.104 2 DEBUG nova.compute.manager [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-changed-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:54 np0005481065 nova_compute[260935]: 2025-10-11 09:18:54.105 2 DEBUG nova.compute.manager [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing instance network info cache due to event network-changed-a1864eda-bf8d-42a1-b315-967521604391. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:54 np0005481065 nova_compute[260935]: 2025-10-11 09:18:54.105 2 DEBUG oslo_concurrency.lockutils [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:54 np0005481065 nova_compute[260935]: 2025-10-11 09:18:54.399 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully created port: c8bc0542-b12a-4eb6-898d-5fd663184c82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 399 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 40 op/s
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:18:54
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'vms']
Oct 11 05:18:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:18:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.505 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.523 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.524 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance network_info: |[{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.525 2 DEBUG oslo_concurrency.lockutils [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.525 2 DEBUG nova.network.neutron [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing network info cache for port a1864eda-bf8d-42a1-b315-967521604391 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.531 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start _get_guest_xml network_info=[{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.538 2 WARNING nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.549 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.550 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.555 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.556 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.557 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.557 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.558 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.559 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.559 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.559 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.560 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.560 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.561 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.561 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.562 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.562 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.567 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:55 np0005481065 nova_compute[260935]: 2025-10-11 09:18:55.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:18:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495784309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.123 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.147 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.152 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.198 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully updated port: dea05614-b04a-4078-a98e-428065514f37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.250 2 DEBUG nova.compute.manager [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.250 2 DEBUG nova.compute.manager [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-dea05614-b04a-4078-a98e-428065514f37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.252 2 DEBUG oslo_concurrency.lockutils [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.252 2 DEBUG oslo_concurrency.lockutils [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.252 2 DEBUG nova.network.neutron [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port dea05614-b04a-4078-a98e-428065514f37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:18:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934886053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.614 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.616 2 DEBUG nova.virt.libvirt.vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-372874397',display_name='tempest-TestNetworkBasicOps-server-372874397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-372874397',id=113,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIx2RsPvJwPys7fB6mymA8gM4JxRMjGXoxlum0FnOgNOoalsj2xjCE+J+1HgjSPsznufYemTb9pcBC69nUdop6linXie2N/WBdlfI1xGC4f2xXUMk1ZGsW/ToHE3DY3muA==',key_name='tempest-TestNetworkBasicOps-55635867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-vrz2alhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:50Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=69860e17-caac-461a-a4a5-34ca72c0ee09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.616 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.617 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.618 2 DEBUG nova.objects.instance [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69860e17-caac-461a-a4a5-34ca72c0ee09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.656 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <uuid>69860e17-caac-461a-a4a5-34ca72c0ee09</uuid>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <name>instance-00000071</name>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-372874397</nova:name>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:18:55</nova:creationTime>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <nova:port uuid="a1864eda-bf8d-42a1-b315-967521604391">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <entry name="serial">69860e17-caac-461a-a4a5-34ca72c0ee09</entry>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <entry name="uuid">69860e17-caac-461a-a4a5-34ca72c0ee09</entry>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/69860e17-caac-461a-a4a5-34ca72c0ee09_disk">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:8e:a2:5e"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <target dev="tapa1864eda-bf"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/console.log" append="off"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:18:56 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:18:56 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:18:56 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:18:56 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.658 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Preparing to wait for external event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.659 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.659 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.660 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.661 2 DEBUG nova.virt.libvirt.vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-372874397',display_name='tempest-TestNetworkBasicOps-server-372874397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-372874397',id=113,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIx2RsPvJwPys7fB6mymA8gM4JxRMjGXoxlum0FnOgNOoalsj2xjCE+J+1HgjSPsznufYemTb9pcBC69nUdop6linXie2N/WBdlfI1xGC4f2xXUMk1ZGsW/ToHE3DY3muA==',key_name='tempest-TestNetworkBasicOps-55635867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-vrz2alhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:50Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=69860e17-caac-461a-a4a5-34ca72c0ee09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.662 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.663 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.663 2 DEBUG os_vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1864eda-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1864eda-bf, col_values=(('external_ids', {'iface-id': 'a1864eda-bf8d-42a1-b315-967521604391', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:a2:5e', 'vm-uuid': '69860e17-caac-461a-a4a5-34ca72c0ee09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:56 np0005481065 NetworkManager[44960]: <info>  [1760174336.6821] manager: (tapa1864eda-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct 11 05:18:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 399 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 40 op/s
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.695 2 INFO os_vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf')#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.771 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.772 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.772 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:8e:a2:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.773 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Using config drive#033[00m
Oct 11 05:18:56 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:18:56 np0005481065 nova_compute[260935]: 2025-10-11 09:18:56.801 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.043 2 DEBUG nova.network.neutron [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.469 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Creating config drive at /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.474 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wf5x582 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.614 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wf5x582" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.659 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.664 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.720 2 DEBUG nova.network.neutron [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.726 2 DEBUG nova.network.neutron [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updated VIF entry in instance network info cache for port a1864eda-bf8d-42a1-b315-967521604391. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.727 2 DEBUG nova.network.neutron [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.753 2 DEBUG oslo_concurrency.lockutils [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.755 2 DEBUG oslo_concurrency.lockutils [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.873 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.873 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deleting local config drive /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config because it was imported into RBD.#033[00m
Oct 11 05:18:57 np0005481065 kernel: tapa1864eda-bf: entered promiscuous mode
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:57Z|01124|binding|INFO|Claiming lport a1864eda-bf8d-42a1-b315-967521604391 for this chassis.
Oct 11 05:18:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:57Z|01125|binding|INFO|a1864eda-bf8d-42a1-b315-967521604391: Claiming fa:16:3e:8e:a2:5e 10.100.0.3
Oct 11 05:18:57 np0005481065 NetworkManager[44960]: <info>  [1760174337.9455] manager: (tapa1864eda-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.963 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a2:5e 10.100.0.3'], port_security=['fa:16:3e:8e:a2:5e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '69860e17-caac-461a-a4a5-34ca72c0ee09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04903712-3ca4-4ffc-b1ee-8e3bb7ff59e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a1864eda-bf8d-42a1-b315-967521604391) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.964 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a1864eda-bf8d-42a1-b315-967521604391 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 bound to our chassis#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.967 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448#033[00m
Oct 11 05:18:57 np0005481065 nova_compute[260935]: 2025-10-11 09:18:57.973 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully updated port: c8bc0542-b12a-4eb6-898d-5fd663184c82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.984 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f917d39-d343-4919-9ae4-fead3d7dc066]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.985 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbf3e0c3-41 in ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:18:57 np0005481065 systemd-udevd[384217]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.987 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbf3e0c3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[beba6bcc-59e3-4d42-a2b7-9e4802f9d0a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d07ccf06-c550-4d73-ae60-3f38701d88e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 systemd-machined[215705]: New machine qemu-136-instance-00000071.
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.005 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5049b7-70d7-4adb-9ca5-d034c571fc2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 NetworkManager[44960]: <info>  [1760174338.0129] device (tapa1864eda-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:18:58 np0005481065 NetworkManager[44960]: <info>  [1760174338.0147] device (tapa1864eda-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.024 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.024 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.024 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1f374e-2ce8-44bc-942d-ca979c6184cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 systemd[1]: Started Virtual Machine qemu-136-instance-00000071.
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:58Z|01126|binding|INFO|Setting lport a1864eda-bf8d-42a1-b315-967521604391 ovn-installed in OVS
Oct 11 05:18:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:58Z|01127|binding|INFO|Setting lport a1864eda-bf8d-42a1-b315-967521604391 up in Southbound
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.080 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8168db52-e3a6-43bc-982d-de3d7e218e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.086 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b92ad869-a3a2-46a0-9d46-61be8b2ebe3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 NetworkManager[44960]: <info>  [1760174338.0871] manager: (tapfbf3e0c3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.123 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9734d3-9a5c-4d7d-a56e-12c6da115851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.129 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[96fa9c90-6592-4e93-aa08-e73972b0d76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 NetworkManager[44960]: <info>  [1760174338.1570] device (tapfbf3e0c3-40): carrier: link connected
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.167 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f35f6398-1c98-48ca-aa02-b7dbf7bcbe22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.185 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee71ddc-631c-4d1e-bfcd-9f2005eb7229]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384250, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.198 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b4c5a8-7594-46ea-ba02-479920437e61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:5d03'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618285, 'tstamp': 618285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384251, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2086022-036d-4629-a0c8-4b0a9014fbd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384252, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.243 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2415060-e577-4600-ad7e-4b27e9e6f228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.278 2 DEBUG nova.compute.manager [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.278 2 DEBUG oslo_concurrency.lockutils [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.279 2 DEBUG oslo_concurrency.lockutils [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.279 2 DEBUG oslo_concurrency.lockutils [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.279 2 DEBUG nova.compute.manager [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Processing event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.334 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0858b0cc-8c75-472d-904b-55b9171fd625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.336 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.336 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.337 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3e0c3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 kernel: tapfbf3e0c3-40: entered promiscuous mode
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 NetworkManager[44960]: <info>  [1760174338.3417] manager: (tapfbf3e0c3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.344 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf3e0c3-40, col_values=(('external_ids', {'iface-id': 'fb1da81b-c31d-4f26-a595-cb8eaad2e189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:18:58Z|01128|binding|INFO|Releasing lport fb1da81b-c31d-4f26-a595-cb8eaad2e189 from this chassis (sb_readonly=0)
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.349 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.350 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbd9c9f-e8a9-45b8-963d-0e0634786f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.351 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.pid.haproxy
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:18:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.353 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'env', 'PROCESS_TAG=haproxy-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.374 2 DEBUG nova.compute.manager [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.374 2 DEBUG nova.compute.manager [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-c8bc0542-b12a-4eb6-898d-5fd663184c82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.374 2 DEBUG oslo_concurrency.lockutils [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:18:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 05:18:58 np0005481065 podman[384326]: 2025-10-11 09:18:58.80876543 +0000 UTC m=+0.083359733 container create 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:18:58 np0005481065 systemd[1]: Started libpod-conmon-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1.scope.
Oct 11 05:18:58 np0005481065 podman[384326]: 2025-10-11 09:18:58.763378249 +0000 UTC m=+0.037972652 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:18:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:18:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06fe5bbac5efd6711a8c9e5d7b4f200ffa3fc048d59e854178cb58c74576a86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:18:58 np0005481065 podman[384326]: 2025-10-11 09:18:58.907179801 +0000 UTC m=+0.181774204 container init 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:18:58 np0005481065 podman[384326]: 2025-10-11 09:18:58.916426904 +0000 UTC m=+0.191021247 container start 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.942 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.944 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174338.9445179, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.945 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Started (Lifecycle Event)#033[00m
Oct 11 05:18:58 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : New worker (384348) forked
Oct 11 05:18:58 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : Loading success.
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.954 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.959 2 INFO nova.virt.libvirt.driver [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance spawned successfully.#033[00m
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.959 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:18:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:18:58 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.989 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.997 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.997 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.998 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.998 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.999 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:58.999 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.034 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.062 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.063 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174338.9451833, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.074 2 INFO nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 8.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.075 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.081 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.089 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174338.9536877, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.089 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.111 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.115 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.149 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.152 2 INFO nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 9.70 seconds to build instance.#033[00m
Oct 11 05:18:59 np0005481065 nova_compute[260935]: 2025-10-11 09:18:59.179 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.196 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.221 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.222 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance network_info: |[{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.223 2 DEBUG oslo_concurrency.lockutils [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.223 2 DEBUG nova.network.neutron [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port c8bc0542-b12a-4eb6-898d-5fd663184c82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.227 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start _get_guest_xml network_info=[{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.234 2 WARNING nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.246 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.247 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.252 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.253 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.254 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.254 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.255 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.255 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.255 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.256 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.256 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.256 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.257 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.257 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.258 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.258 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.261 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.411 2 DEBUG nova.compute.manager [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.412 2 DEBUG oslo_concurrency.lockutils [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.413 2 DEBUG oslo_concurrency.lockutils [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.414 2 DEBUG oslo_concurrency.lockutils [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.414 2 DEBUG nova.compute.manager [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] No waiting events found dispatching network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.414 2 WARNING nova.compute.manager [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received unexpected event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:19:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:19:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683852569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.757 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.796 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:00 np0005481065 nova_compute[260935]: 2025-10-11 09:19:00.802 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:19:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544037921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.361 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.364 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.365 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.367 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.369 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.369 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.371 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.374 2 DEBUG nova.objects.instance [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 564a3027-0f98-40fd-a495-1c13a103ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.394 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <uuid>564a3027-0f98-40fd-a495-1c13a103ea39</uuid>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <name>instance-00000072</name>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1664284742</nova:name>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:19:00</nova:creationTime>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:port uuid="dea05614-b04a-4078-a98e-428065514f37">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <nova:port uuid="c8bc0542-b12a-4eb6-898d-5fd663184c82">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe78:b91a" ipVersion="6"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <entry name="serial">564a3027-0f98-40fd-a495-1c13a103ea39</entry>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <entry name="uuid">564a3027-0f98-40fd-a495-1c13a103ea39</entry>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/564a3027-0f98-40fd-a495-1c13a103ea39_disk">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/564a3027-0f98-40fd-a495-1c13a103ea39_disk.config">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:00:3b:17"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <target dev="tapdea05614-b0"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:78:b9:1a"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <target dev="tapc8bc0542-b1"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/console.log" append="off"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:19:01 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:19:01 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:19:01 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:19:01 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.397 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Preparing to wait for external event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.397 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.399 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Preparing to wait for external event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.399 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.400 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.401 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.402 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.403 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.404 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.405 2 DEBUG os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdea05614-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdea05614-b0, col_values=(('external_ids', {'iface-id': 'dea05614-b04a-4078-a98e-428065514f37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:3b:17', 'vm-uuid': '564a3027-0f98-40fd-a495-1c13a103ea39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 NetworkManager[44960]: <info>  [1760174341.4189] manager: (tapdea05614-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.427 2 INFO os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0')#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.429 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.430 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.431 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.432 2 DEBUG os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8bc0542-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8bc0542-b1, col_values=(('external_ids', {'iface-id': 'c8bc0542-b12a-4eb6-898d-5fd663184c82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:b9:1a', 'vm-uuid': '564a3027-0f98-40fd-a495-1c13a103ea39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 NetworkManager[44960]: <info>  [1760174341.4472] manager: (tapc8bc0542-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.456 2 INFO os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1')#033[00m
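Editor's note on the two plug sequences above: each `DbSetCommand` tags the new tap interface with an `external_ids` mapping, and it is the `iface-id` key (the Neutron port UUID) that ovn-controller later matches when it logs `Claiming lport …` further down. A minimal sketch of that mapping, using values copied from the first transaction; the helper name is hypothetical, the keys are verbatim from the log:

```python
def build_external_ids(port_id, mac, vm_uuid):
    # Hypothetical helper; the keys mirror the DbSetCommand in the log above.
    # ovn-controller matches 'iface-id' against the logical port name in the
    # OVN southbound Port_Binding table to claim the port for this chassis.
    return {
        "iface-id": port_id,       # Neutron port UUID
        "iface-status": "active",
        "attached-mac": mac,       # must match the port's fixed MAC
        "vm-uuid": vm_uuid,        # Nova instance UUID
    }

ids = build_external_ids(
    "dea05614-b04a-4078-a98e-428065514f37",  # first VIF in this log
    "fa:16:3e:00:3b:17",
    "564a3027-0f98-40fd-a495-1c13a103ea39",
)
```

The same shape is written for the second interface (`tapc8bc0542-b1`) with its own port UUID and MAC.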
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.525 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.526 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.526 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:00:3b:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.526 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:78:b9:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.527 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Using config drive#033[00m
Oct 11 05:19:01 np0005481065 nova_compute[260935]: 2025-10-11 09:19:01.556 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 103 op/s
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.000 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Creating config drive at /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.006 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8k2mfp3n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.181 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8k2mfp3n" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.216 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.221 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.440 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.442 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deleting local config drive /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config because it was imported into RBD.#033[00m
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.5030] manager: (tapdea05614-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct 11 05:19:03 np0005481065 kernel: tapdea05614-b0: entered promiscuous mode
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01129|binding|INFO|Claiming lport dea05614-b04a-4078-a98e-428065514f37 for this chassis.
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01130|binding|INFO|dea05614-b04a-4078-a98e-428065514f37: Claiming fa:16:3e:00:3b:17 10.100.0.3
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.5288] manager: (tapc8bc0542-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.546 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:3b:17 10.100.0.3'], port_security=['fa:16:3e:00:3b:17 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=dea05614-b04a-4078-a98e-428065514f37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.551 162815 INFO neutron.agent.ovn.metadata.agent [-] Port dea05614-b04a-4078-a98e-428065514f37 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 bound to our chassis#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.555 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6346ea52-07fc-49ad-8f2d-fcfed9769241#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.574 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fae0f0aa-f24c-49c3-b6c7-3cd824a83093]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.575 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6346ea52-01 in ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.579 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6346ea52-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.580 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd495c2-7c93-4444-b2af-c69b2e9790c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e86caa39-98a6-46a6-8843-ac81e2be6305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.603 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[041f7cd5-1aa5-4ee4-9e38-68c3c134efec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 systemd-machined[215705]: New machine qemu-137-instance-00000072.
Oct 11 05:19:03 np0005481065 systemd[1]: Started Virtual Machine qemu-137-instance-00000072.
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.628 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63df9585-939d-432d-9fdc-4b893c04fa3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 kernel: tapc8bc0542-b1: entered promiscuous mode
Oct 11 05:19:03 np0005481065 systemd-udevd[384512]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:19:03 np0005481065 systemd-udevd[384511]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01131|binding|INFO|Claiming lport c8bc0542-b12a-4eb6-898d-5fd663184c82 for this chassis.
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01132|binding|INFO|c8bc0542-b12a-4eb6-898d-5fd663184c82: Claiming fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.645 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], port_security=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe78:b91a/64', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c8bc0542-b12a-4eb6-898d-5fd663184c82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01133|binding|INFO|Setting lport dea05614-b04a-4078-a98e-428065514f37 ovn-installed in OVS
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01134|binding|INFO|Setting lport dea05614-b04a-4078-a98e-428065514f37 up in Southbound
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01135|binding|INFO|Setting lport c8bc0542-b12a-4eb6-898d-5fd663184c82 ovn-installed in OVS
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01136|binding|INFO|Setting lport c8bc0542-b12a-4eb6-898d-5fd663184c82 up in Southbound
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.6608] device (tapc8bc0542-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.6626] device (tapc8bc0542-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.6696] device (tapdea05614-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.669 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2f56af59-1edf-4066-a74e-df2b795af6b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.6705] device (tapdea05614-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.6757] manager: (tap6346ea52-00): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.675 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fade1765-0c56-4ae6-9360-ac464d73cb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 podman[384497]: 2025-10-11 09:19:03.698206855 +0000 UTC m=+0.107481879 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.721 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b2b21f-bbfb-4ce2-88dc-67800dfb7f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cca34fe9-f86f-47af-a938-c81b83b43687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.7445] device (tap6346ea52-00): carrier: link connected
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.749 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6a6128-e53d-420e-b851-e9e7ba1f3129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.766 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[758262ea-a0ad-4be6-b026-c137d34c33c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384550, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.782 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b32d6a8f-8499-4319-af93-5072a92e5f8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:67fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618844, 'tstamp': 618844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384551, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.798 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da43e569-4257-40ed-8979-68b7d62b7b4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384552, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.837 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffac1703-5763-43af-839b-1858d2ce5b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.930 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec7b970-f9e2-46e2-9eb5-1d072b77152b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.933 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.934 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.935 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346ea52-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 NetworkManager[44960]: <info>  [1760174343.9377] manager: (tap6346ea52-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Oct 11 05:19:03 np0005481065 kernel: tap6346ea52-00: entered promiscuous mode
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.941 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6346ea52-00, col_values=(('external_ids', {'iface-id': 'f4a0a09a-d174-44d3-b63d-753fefd94646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:03Z|01137|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 05:19:03 np0005481065 nova_compute[260935]: 2025-10-11 09:19:03.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.962 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6346ea52-07fc-49ad-8f2d-fcfed9769241.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6346ea52-07fc-49ad-8f2d-fcfed9769241.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.966 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f49a1fc1-c85f-4beb-b7a5-9680613ef402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.967 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/6346ea52-07fc-49ad-8f2d-fcfed9769241.pid.haproxy
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:19:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.971 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'env', 'PROCESS_TAG=haproxy-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6346ea52-07fc-49ad-8f2d-fcfed9769241.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:19:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.154 2 DEBUG nova.compute.manager [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.154 2 DEBUG oslo_concurrency.lockutils [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.154 2 DEBUG oslo_concurrency.lockutils [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.155 2 DEBUG oslo_concurrency.lockutils [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.155 2 DEBUG nova.compute.manager [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Processing event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:19:04 np0005481065 podman[384621]: 2025-10-11 09:19:04.408468644 +0000 UTC m=+0.055237382 container create 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.442 2 DEBUG nova.network.neutron [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updated VIF entry in instance network info cache for port c8bc0542-b12a-4eb6-898d-5fd663184c82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.443 2 DEBUG nova.network.neutron [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:04 np0005481065 systemd[1]: Started libpod-conmon-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2.scope.
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.464 2 DEBUG oslo_concurrency.lockutils [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:04 np0005481065 podman[384621]: 2025-10-11 09:19:04.383512324 +0000 UTC m=+0.030281102 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:19:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG nova.compute.manager [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG oslo_concurrency.lockutils [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG oslo_concurrency.lockutils [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG oslo_concurrency.lockutils [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.497 2 DEBUG nova.compute.manager [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Processing event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:19:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfdfffff4362d598c2ef7afc8f136cd17a5b46b0bac4f37ab2cb909f4afc6f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:04 np0005481065 podman[384621]: 2025-10-11 09:19:04.515681025 +0000 UTC m=+0.162449793 container init 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:19:04 np0005481065 podman[384621]: 2025-10-11 09:19:04.52604901 +0000 UTC m=+0.172817758 container start 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 05:19:04 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : New worker (384649) forked
Oct 11 05:19:04 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : Loading success.
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.587 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c8bc0542-b12a-4eb6-898d-5fd663184c82 in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.594 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[07441b2f-f1d3-4e98-8ab9-d08d660aa803]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.611 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce353a4c-71 in ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.614 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce353a4c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.614 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b96e7a8e-80e7-4d26-a7a2-aa254734100f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.616 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da7e1521-e9ba-4432-9c62-2dca2e1ecaa0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.632 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0f0eed-b45f-43fa-b022-c198d8cdc305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.646 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96d71ede-c7d3-44f0-a898-0c2cb012b0ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 91 op/s
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.689 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[57e39484-fdc0-4bed-8fe9-5f031741ddc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 systemd-udevd[384540]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:19:04 np0005481065 NetworkManager[44960]: <info>  [1760174344.7013] manager: (tapce353a4c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/466)
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30ad8543-2469-4743-93a5-4dc98bb08243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a19fc12-4efd-45a3-8ad1-955fa7c8a8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[df77b8b5-e8a4-46f9-8cf9-03ba9e96e4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 NetworkManager[44960]: <info>  [1760174344.7883] device (tapce353a4c-70): carrier: link connected
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.797 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd39355-5795-420f-97d5-d8f07dcb3abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.821 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7ff0a2-97e2-432a-8470-b96783a44db6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384668, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.848 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e816665-39e8-4b9e-932f-9bb2ad68a8a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2362'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618949, 'tstamp': 618949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384669, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.877 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c60c5dde-479e-4920-ae91-1b9b760c3d80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384670, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.915 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a86ed9e1-5787-42cd-85f7-c2729622294a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.930 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.931 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174344.9302588, 564a3027-0f98-40fd-a495-1c13a103ea39 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.932 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Started (Lifecycle Event)#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.950 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.954 2 INFO nova.virt.libvirt.driver [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance spawned successfully.#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.954 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4597f233-c9ad-4c7b-a4df-daa16271ab6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.960 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.961 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce353a4c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.963 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:04 np0005481065 kernel: tapce353a4c-70: entered promiscuous mode
Oct 11 05:19:04 np0005481065 NetworkManager[44960]: <info>  [1760174344.9648] manager: (tapce353a4c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce353a4c-70, col_values=(('external_ids', {'iface-id': 'bf7f7fa8-f1d9-4202-bec4-06d2178d548d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.970 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:19:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:04Z|01138|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.974 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.977 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee29ddf3-8c19-4f82-a587-9db4e25ce3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.979 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.pid.haproxy
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.980 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.980 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.981 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.981 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.981 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.982 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:04 np0005481065 nova_compute[260935]: 2025-10-11 09:19:04.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:04 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.983 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'env', 'PROCESS_TAG=haproxy-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174344.931285, 564a3027-0f98-40fd-a495-1c13a103ea39 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.050 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.053 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174344.9355233, 564a3027-0f98-40fd-a495-1c13a103ea39 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.053 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.066 2 INFO nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 13.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.067 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.074 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.104 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.119 2 INFO nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 14.39 seconds to build instance.#033[00m
Oct 11 05:19:05 np0005481065 nova_compute[260935]: 2025-10-11 09:19:05.131 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033216606230057955 of space, bias 1.0, pg target 0.9964981869017386 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:19:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:19:05 np0005481065 podman[384701]: 2025-10-11 09:19:05.400885233 +0000 UTC m=+0.069475698 container create 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 05:19:05 np0005481065 systemd[1]: Started libpod-conmon-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4.scope.
Oct 11 05:19:05 np0005481065 podman[384701]: 2025-10-11 09:19:05.377080536 +0000 UTC m=+0.045671001 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:19:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c33ef9ddc9558d0573d134c73c574d03ca7402b4d4ad1531c574c341a884ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:05 np0005481065 podman[384701]: 2025-10-11 09:19:05.515236217 +0000 UTC m=+0.183826702 container init 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:19:05 np0005481065 podman[384701]: 2025-10-11 09:19:05.525646083 +0000 UTC m=+0.194236548 container start 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:19:05 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : New worker (384722) forked
Oct 11 05:19:05 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : Loading success.
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.250 2 DEBUG nova.compute.manager [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.251 2 DEBUG oslo_concurrency.lockutils [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.252 2 DEBUG oslo_concurrency.lockutils [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.252 2 DEBUG oslo_concurrency.lockutils [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.253 2 DEBUG nova.compute.manager [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.254 2 WARNING nova.compute.manager [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01139|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01140|binding|INFO|Releasing lport fb1da81b-c31d-4f26-a595-cb8eaad2e189 from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01141|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01142|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01143|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:06 np0005481065 NetworkManager[44960]: <info>  [1760174346.4264] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 11 05:19:06 np0005481065 NetworkManager[44960]: <info>  [1760174346.4278] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01144|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01145|binding|INFO|Releasing lport fb1da81b-c31d-4f26-a595-cb8eaad2e189 from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01146|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01147|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:06Z|01148|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 87 op/s
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.785 2 DEBUG nova.compute.manager [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.786 2 DEBUG oslo_concurrency.lockutils [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.787 2 DEBUG oslo_concurrency.lockutils [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.788 2 DEBUG oslo_concurrency.lockutils [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.788 2 DEBUG nova.compute.manager [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:19:06 np0005481065 nova_compute[260935]: 2025-10-11 09:19:06.789 2 WARNING nova.compute.manager [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:19:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 421 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 754 KiB/s wr, 160 op/s
Oct 11 05:19:08 np0005481065 nova_compute[260935]: 2025-10-11 09:19:08.905 2 DEBUG nova.compute.manager [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-changed-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:08 np0005481065 nova_compute[260935]: 2025-10-11 09:19:08.907 2 DEBUG nova.compute.manager [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing instance network info cache due to event network-changed-a1864eda-bf8d-42a1-b315-967521604391. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:08 np0005481065 nova_compute[260935]: 2025-10-11 09:19:08.907 2 DEBUG oslo_concurrency.lockutils [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:08 np0005481065 nova_compute[260935]: 2025-10-11 09:19:08.908 2 DEBUG oslo_concurrency.lockutils [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:08 np0005481065 nova_compute[260935]: 2025-10-11 09:19:08.909 2 DEBUG nova.network.neutron [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing network info cache for port a1864eda-bf8d-42a1-b315-967521604391 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:09 np0005481065 podman[384732]: 2025-10-11 09:19:09.80751065 +0000 UTC m=+0.101943282 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:19:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 421 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 11 05:19:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:10Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:a2:5e 10.100.0.3
Oct 11 05:19:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:10Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:a2:5e 10.100.0.3
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.012 2 DEBUG nova.network.neutron [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updated VIF entry in instance network info cache for port a1864eda-bf8d-42a1-b315-967521604391. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.014 2 DEBUG nova.network.neutron [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.044 2 DEBUG oslo_concurrency.lockutils [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.059 2 DEBUG nova.compute.manager [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.060 2 DEBUG nova.compute.manager [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-dea05614-b04a-4078-a98e-428065514f37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.061 2 DEBUG oslo_concurrency.lockutils [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.062 2 DEBUG oslo_concurrency.lockutils [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.063 2 DEBUG nova.network.neutron [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port dea05614-b04a-4078-a98e-428065514f37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:11 np0005481065 nova_compute[260935]: 2025-10-11 09:19:11.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 448 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.9 MiB/s wr, 203 op/s
Oct 11 05:19:13 np0005481065 nova_compute[260935]: 2025-10-11 09:19:13.186 2 DEBUG nova.network.neutron [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updated VIF entry in instance network info cache for port dea05614-b04a-4078-a98e-428065514f37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:13 np0005481065 nova_compute[260935]: 2025-10-11 09:19:13.187 2 DEBUG nova.network.neutron [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:13 np0005481065 nova_compute[260935]: 2025-10-11 09:19:13.206 2 DEBUG oslo_concurrency.lockutils [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Oct 11 05:19:15 np0005481065 podman[384754]: 2025-10-11 09:19:15.009504637 +0000 UTC m=+0.101992923 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:19:15 np0005481065 podman[384755]: 2025-10-11 09:19:15.063257747 +0000 UTC m=+0.142180697 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 05:19:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:15.217 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:15.218 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:15 np0005481065 nova_compute[260935]: 2025-10-11 09:19:15.860 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:16 np0005481065 nova_compute[260935]: 2025-10-11 09:19:16.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:16 np0005481065 nova_compute[260935]: 2025-10-11 09:19:16.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 05:19:17 np0005481065 nova_compute[260935]: 2025-10-11 09:19:17.830 2 INFO nova.compute.manager [None req-021c4a4b-0393-4dc8-b868-dc4f6767b3b9 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Get console output#033[00m
Oct 11 05:19:17 np0005481065 nova_compute[260935]: 2025-10-11 09:19:17.839 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:19:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:18Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:3b:17 10.100.0.3
Oct 11 05:19:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:18Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:3b:17 10.100.0.3
Oct 11 05:19:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 484 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Oct 11 05:19:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:19 np0005481065 nova_compute[260935]: 2025-10-11 09:19:19.617 2 DEBUG nova.compute.manager [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-changed-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:19 np0005481065 nova_compute[260935]: 2025-10-11 09:19:19.617 2 DEBUG nova.compute.manager [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing instance network info cache due to event network-changed-a1864eda-bf8d-42a1-b315-967521604391. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:19 np0005481065 nova_compute[260935]: 2025-10-11 09:19:19.618 2 DEBUG oslo_concurrency.lockutils [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:19 np0005481065 nova_compute[260935]: 2025-10-11 09:19:19.619 2 DEBUG oslo_concurrency.lockutils [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:19 np0005481065 nova_compute[260935]: 2025-10-11 09:19:19.619 2 DEBUG nova.network.neutron [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing network info cache for port a1864eda-bf8d-42a1-b315-967521604391 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 484 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 712 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.163 2 DEBUG nova.network.neutron [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updated VIF entry in instance network info cache for port a1864eda-bf8d-42a1-b315-967521604391. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.164 2 DEBUG nova.network.neutron [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.190 2 DEBUG oslo_concurrency.lockutils [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5029 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:21 np0005481065 nova_compute[260935]: 2025-10-11 09:19:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:19:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 720 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Oct 11 05:19:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:19:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.493 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.494 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.509 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.583 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.583 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.596 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.597 2 INFO nova.compute.claims [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:19:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:19:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4070051561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:19:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:19:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4070051561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:19:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 11 05:19:26 np0005481065 nova_compute[260935]: 2025-10-11 09:19:26.870 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:19:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950226892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.378 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.389 2 DEBUG nova.compute.provider_tree [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.483 2 DEBUG nova.scheduler.client.report [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.507 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.509 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.559 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.560 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.581 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.600 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.686 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.687 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.688 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Creating image(s)#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.713 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.743 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.780 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.785 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.856 2 DEBUG nova.policy [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.895 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.896 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.897 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.897 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.928 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:27 np0005481065 nova_compute[260935]: 2025-10-11 09:19:27.933 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.271 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.361 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.469 2 DEBUG nova.objects.instance [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c9132ae-fb4f-4a77-8e3f-14f516eed49c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.485 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.485 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Ensure instance console log exists: /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.485 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.486 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:28 np0005481065 nova_compute[260935]: 2025-10-11 09:19:28.486 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 05:19:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:29.425 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:19:29 np0005481065 nova_compute[260935]: 2025-10-11 09:19:29.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:29.427 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:19:29 np0005481065 nova_compute[260935]: 2025-10-11 09:19:29.533 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Successfully created port: 50e8a951-4439-4d05-b0da-9b126fae5c30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:19:29 np0005481065 nova_compute[260935]: 2025-10-11 09:19:29.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:29 np0005481065 nova_compute[260935]: 2025-10-11 09:19:29.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:29 np0005481065 nova_compute[260935]: 2025-10-11 09:19:29.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.148 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.148 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.169 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.271 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.272 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.280 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.281 2 INFO nova.compute.claims [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.292 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Successfully updated port: 50e8a951-4439-4d05-b0da-9b126fae5c30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.319 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.319 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.320 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.387 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.446 2 DEBUG nova.compute.manager [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.447 2 DEBUG nova.compute.manager [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing instance network info cache due to event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.447 2 DEBUG oslo_concurrency.lockutils [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.449 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.450 2 DEBUG nova.compute.provider_tree [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.477 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.497 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.546 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.665 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 24 KiB/s wr, 2 op/s
Oct 11 05:19:30 np0005481065 nova_compute[260935]: 2025-10-11 09:19:30.723 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:19:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1516431143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.166 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.171 2 DEBUG nova.compute.provider_tree [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.186 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.204 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.205 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.245 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.245 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.266 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.295 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.393 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.395 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.395 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Creating image(s)#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.424 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.451 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.480 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.484 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.582 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.586 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.586 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.586 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.612 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.616 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a798e2c3-8294-482d-a073-883121427765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:31 np0005481065 nova_compute[260935]: 2025-10-11 09:19:31.957 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a798e2c3-8294-482d-a073-883121427765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.040 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image a798e2c3-8294-482d-a073-883121427765_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.156 2 DEBUG nova.policy [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.167 2 DEBUG nova.objects.instance [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid a798e2c3-8294-482d-a073-883121427765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.204 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.205 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Ensure instance console log exists: /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.205 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.206 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.207 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:32 np0005481065 nova_compute[260935]: 2025-10-11 09:19:32.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 554 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 MiB/s wr, 42 op/s
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.127 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.165 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.166 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance network_info: |[{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.166 2 DEBUG oslo_concurrency.lockutils [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.167 2 DEBUG nova.network.neutron [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.170 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start _get_guest_xml network_info=[{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.176 2 WARNING nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.181 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.182 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.190 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.191 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.191 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.192 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.192 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.193 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.193 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.193 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.194 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.194 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.194 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.195 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.195 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.195 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.199 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:19:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:19:33 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876995508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.747 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.792 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.800 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:33 np0005481065 nova_compute[260935]: 2025-10-11 09:19:33.854 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully created port: d5347066-e322-4664-8798-146b9745fa17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:19:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:19:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216683930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.284 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.287 2 DEBUG nova.virt.libvirt.vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-487160952',display_name='tempest-TestNetworkBasicOps-server-487160952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-487160952',id=115,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIt4DfbM2d+jaXAhSihykdBZ+tOpog8UlIDD8J/El5+5N6K+dCWK3MxK+m6m5Y83GMP6LzeAgXIDOnVujmKdajdhJSoGBcewPota2xCPS2Aiqozz2Osh9vQuMUfwGcZ9Cw==',key_name='tempest-TestNetworkBasicOps-1573176618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-wkj2eggj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:27Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=6c9132ae-fb4f-4a77-8e3f-14f516eed49c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.287 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.289 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.291 2 DEBUG nova.objects.instance [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c9132ae-fb4f-4a77-8e3f-14f516eed49c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.316 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <uuid>6c9132ae-fb4f-4a77-8e3f-14f516eed49c</uuid>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <name>instance-00000073</name>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-487160952</nova:name>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:19:33</nova:creationTime>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <nova:port uuid="50e8a951-4439-4d05-b0da-9b126fae5c30">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <entry name="serial">6c9132ae-fb4f-4a77-8e3f-14f516eed49c</entry>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <entry name="uuid">6c9132ae-fb4f-4a77-8e3f-14f516eed49c</entry>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:23:8b:f3"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <target dev="tap50e8a951-44"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/console.log" append="off"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:19:34 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:19:34 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:19:34 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:19:34 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.317 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Preparing to wait for external event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.317 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.318 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.318 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.319 2 DEBUG nova.virt.libvirt.vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-487160952',display_name='tempest-TestNetworkBasicOps-server-487160952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-487160952',id=115,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIt4DfbM2d+jaXAhSihykdBZ+tOpog8UlIDD8J/El5+5N6K+dCWK3MxK+m6m5Y83GMP6LzeAgXIDOnVujmKdajdhJSoGBcewPota2xCPS2Aiqozz2Osh9vQuMUfwGcZ9Cw==',key_name='tempest-TestNetworkBasicOps-1573176618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-wkj2eggj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:27Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=6c9132ae-fb4f-4a77-8e3f-14f516eed49c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.320 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.321 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.322 2 DEBUG os_vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e8a951-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50e8a951-44, col_values=(('external_ids', {'iface-id': '50e8a951-4439-4d05-b0da-9b126fae5c30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:8b:f3', 'vm-uuid': '6c9132ae-fb4f-4a77-8e3f-14f516eed49c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:19:34 np0005481065 NetworkManager[44960]: <info>  [1760174374.3325] manager: (tap50e8a951-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.340 2 INFO os_vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44')
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.404 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.404 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.404 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:23:8b:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.405 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Using config drive
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.434 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:19:34 np0005481065 podman[385244]: 2025-10-11 09:19:34.484799798 +0000 UTC m=+0.088028615 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.554 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully created port: d8f510f6-a87e-4c76-be24-a143f69f564d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.698 2 DEBUG nova.network.neutron [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updated VIF entry in instance network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.698 2 DEBUG nova.network.neutron [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 05:19:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 579 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.714 2 DEBUG oslo_concurrency.lockutils [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.721 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.747 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Creating config drive at /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.754 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzwga0xxq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.918 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzwga0xxq" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.963 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:19:34 np0005481065 nova_compute[260935]: 2025-10-11 09:19:34.968 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.176 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.177 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deleting local config drive /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config because it was imported into RBD.#033[00m
Oct 11 05:19:35 np0005481065 kernel: tap50e8a951-44: entered promiscuous mode
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:35 np0005481065 NetworkManager[44960]: <info>  [1760174375.3002] manager: (tap50e8a951-44): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Oct 11 05:19:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:35Z|01149|binding|INFO|Claiming lport 50e8a951-4439-4d05-b0da-9b126fae5c30 for this chassis.
Oct 11 05:19:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:35Z|01150|binding|INFO|50e8a951-4439-4d05-b0da-9b126fae5c30: Claiming fa:16:3e:23:8b:f3 10.100.0.10
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.312 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:8b:f3 10.100.0.10'], port_security=['fa:16:3e:23:8b:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c9132ae-fb4f-4a77-8e3f-14f516eed49c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef8b4bf7-fe14-4629-a7f4-eb04c76ca0d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=50e8a951-4439-4d05-b0da-9b126fae5c30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.316 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 50e8a951-4439-4d05-b0da-9b126fae5c30 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 bound to our chassis#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.321 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448#033[00m
Oct 11 05:19:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:35Z|01151|binding|INFO|Setting lport 50e8a951-4439-4d05-b0da-9b126fae5c30 ovn-installed in OVS
Oct 11 05:19:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:35Z|01152|binding|INFO|Setting lport 50e8a951-4439-4d05-b0da-9b126fae5c30 up in Southbound
Oct 11 05:19:35 np0005481065 systemd-machined[215705]: New machine qemu-138-instance-00000073.
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:35 np0005481065 systemd[1]: Started Virtual Machine qemu-138-instance-00000073.
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.354 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5173e8c9-99c8-4f06-ac92-39197daf0f7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:35 np0005481065 systemd-udevd[385335]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:19:35 np0005481065 NetworkManager[44960]: <info>  [1760174375.3770] device (tap50e8a951-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:19:35 np0005481065 NetworkManager[44960]: <info>  [1760174375.3782] device (tap50e8a951-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.403 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf7fa7-10b3-4066-924d-f1dc12354677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.412 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bca6de-96fb-44e9-80f4-732abf734ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.460 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[536aace9-bb97-4ed4-bf11-8385cdfa9419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.480 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efe71f5d-6252-4187-8a44-63cb494955ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385347, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.509 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e94c45-b4de-4866-889f-baf3b9940503]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618299, 'tstamp': 618299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385349, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618303, 'tstamp': 618303}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385349, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.511 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.515 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3e0c3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.515 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf3e0c3-40, col_values=(('external_ids', {'iface-id': 'fb1da81b-c31d-4f26-a595-cb8eaad2e189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:19:35 np0005481065 nova_compute[260935]: 2025-10-11 09:19:35.738 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:19:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815423025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.213 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.341 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.341 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.342 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.343 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully updated port: d5347066-e322-4664-8798-146b9745fa17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.361 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.361 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.462 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174376.4618876, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.463 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.485 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.491 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174376.4620554, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.492 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.517 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.553 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.681 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.682 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2506MB free_disk=59.69802474975586GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 579 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 69860e17-caac-461a-a4a5-34ca72c0ee09 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 564a3027-0f98-40fd-a495-1c13a103ea39 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6c9132ae-fb4f-4a77-8e3f-14f516eed49c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a798e2c3-8294-482d-a073-883121427765 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.791 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.791 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:19:36 np0005481065 nova_compute[260935]: 2025-10-11 09:19:36.951 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.172 2 DEBUG nova.compute.manager [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.174 2 DEBUG oslo_concurrency.lockutils [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.174 2 DEBUG nova.compute.manager [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Processing event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.175 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.187 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174377.1866078, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.188 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.192 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.205 2 INFO nova.virt.libvirt.driver [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance spawned successfully.#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.205 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.210 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.214 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.229 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.229 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.230 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.230 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.231 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.232 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.238 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.282 2 INFO nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 9.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.282 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.375 2 INFO nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 10.83 seconds to build instance.#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.399 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468721934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:19:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:37.429 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.447 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.458 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.473 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:19:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 67ed6427-e597-4e0d-b8ee-803929919f33 does not exist
Oct 11 05:19:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a2dcf191-9512-4018-8c63-30df62794c17 does not exist
Oct 11 05:19:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2146a691-7c05-417b-86cc-1742e8991aa3 does not exist
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.477 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully updated port: d8f510f6-a87e-4c76-be24-a143f69f564d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.495 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.495 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.495 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.502 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.502 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:37 np0005481065 nova_compute[260935]: 2025-10-11 09:19:37.766 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:19:37 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.285514397 +0000 UTC m=+0.052487912 container create b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:19:38 np0005481065 systemd[1]: Started libpod-conmon-b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d.scope.
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.25941919 +0000 UTC m=+0.026392745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:19:38 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.405770661 +0000 UTC m=+0.172744176 container init b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.413989503 +0000 UTC m=+0.180963038 container start b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.417694267 +0000 UTC m=+0.184667802 container attach b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:19:38 np0005481065 eloquent_engelbart[385725]: 167 167
Oct 11 05:19:38 np0005481065 systemd[1]: libpod-b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d.scope: Deactivated successfully.
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.427417641 +0000 UTC m=+0.194391166 container died b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:19:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1b646267ace4f903fc6b558ed8ef584662156ac1b53d9838f230f291dff376f2-merged.mount: Deactivated successfully.
Oct 11 05:19:38 np0005481065 podman[385709]: 2025-10-11 09:19:38.472418201 +0000 UTC m=+0.239391716 container remove b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 05:19:38 np0005481065 systemd[1]: libpod-conmon-b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d.scope: Deactivated successfully.
Oct 11 05:19:38 np0005481065 podman[385748]: 2025-10-11 09:19:38.697223306 +0000 UTC m=+0.054737416 container create 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:19:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Oct 11 05:19:38 np0005481065 systemd[1]: Started libpod-conmon-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope.
Oct 11 05:19:38 np0005481065 podman[385748]: 2025-10-11 09:19:38.675165523 +0000 UTC m=+0.032679623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:19:38 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:38 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:38 np0005481065 podman[385748]: 2025-10-11 09:19:38.835187749 +0000 UTC m=+0.192701909 container init 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:19:38 np0005481065 podman[385748]: 2025-10-11 09:19:38.844255025 +0000 UTC m=+0.201769125 container start 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:19:38 np0005481065 podman[385748]: 2025-10-11 09:19:38.847798105 +0000 UTC m=+0.205312275 container attach 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:19:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:39 np0005481065 nova_compute[260935]: 2025-10-11 09:19:39.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:39 np0005481065 nova_compute[260935]: 2025-10-11 09:19:39.478 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:39 np0005481065 nova_compute[260935]: 2025-10-11 09:19:39.478 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d5347066-e322-4664-8798-146b9745fa17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:39 np0005481065 nova_compute[260935]: 2025-10-11 09:19:39.479 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:40 np0005481065 festive_mayer[385764]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:19:40 np0005481065 festive_mayer[385764]: --> relative data size: 1.0
Oct 11 05:19:40 np0005481065 festive_mayer[385764]: --> All data devices are unavailable
Oct 11 05:19:40 np0005481065 systemd[1]: libpod-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope: Deactivated successfully.
Oct 11 05:19:40 np0005481065 podman[385748]: 2025-10-11 09:19:40.137145811 +0000 UTC m=+1.494659931 container died 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:19:40 np0005481065 systemd[1]: libpod-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope: Consumed 1.214s CPU time.
Oct 11 05:19:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5-merged.mount: Deactivated successfully.
Oct 11 05:19:40 np0005481065 podman[385748]: 2025-10-11 09:19:40.218011773 +0000 UTC m=+1.575525883 container remove 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 11 05:19:40 np0005481065 systemd[1]: libpod-conmon-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope: Deactivated successfully.
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.305 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:40 np0005481065 podman[385794]: 2025-10-11 09:19:40.307869078 +0000 UTC m=+0.123808755 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.426 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.427 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Instance network_info: |[{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.428 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.428 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d5347066-e322-4664-8798-146b9745fa17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.433 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start _get_guest_xml network_info=[{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.437 2 WARNING nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.443 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.444 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.447 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.448 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.448 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.449 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.449 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.451 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.451 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.452 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.452 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.452 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.456 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 05:19:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:19:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290417126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.950 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:40 np0005481065 podman[385984]: 2025-10-11 09:19:40.979957935 +0000 UTC m=+0.054831158 container create 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:19:40 np0005481065 nova_compute[260935]: 2025-10-11 09:19:40.983 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.004 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:41 np0005481065 systemd[1]: Started libpod-conmon-6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a.scope.
Oct 11 05:19:41 np0005481065 podman[385984]: 2025-10-11 09:19:40.957189333 +0000 UTC m=+0.032062546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:19:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:41 np0005481065 podman[385984]: 2025-10-11 09:19:41.096109193 +0000 UTC m=+0.170982436 container init 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:19:41 np0005481065 podman[385984]: 2025-10-11 09:19:41.107862935 +0000 UTC m=+0.182736158 container start 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:19:41 np0005481065 podman[385984]: 2025-10-11 09:19:41.11192625 +0000 UTC m=+0.186799493 container attach 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:19:41 np0005481065 nervous_curran[386020]: 167 167
Oct 11 05:19:41 np0005481065 systemd[1]: libpod-6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a.scope: Deactivated successfully.
Oct 11 05:19:41 np0005481065 podman[385984]: 2025-10-11 09:19:41.122649302 +0000 UTC m=+0.197522535 container died 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:19:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ff11e8af25e835bb1c34924bd5eacdbc910198432b0d43406b6cb76116b6b401-merged.mount: Deactivated successfully.
Oct 11 05:19:41 np0005481065 podman[385984]: 2025-10-11 09:19:41.186371761 +0000 UTC m=+0.261244984 container remove 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 05:19:41 np0005481065 systemd[1]: libpod-conmon-6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a.scope: Deactivated successfully.
Oct 11 05:19:41 np0005481065 podman[386063]: 2025-10-11 09:19:41.433904066 +0000 UTC m=+0.064233333 container create ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:19:41 np0005481065 systemd[1]: Started libpod-conmon-ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e.scope.
Oct 11 05:19:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:19:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3460648708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.502 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:41 np0005481065 podman[386063]: 2025-10-11 09:19:41.410965959 +0000 UTC m=+0.041295226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.504 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.505 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.506 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.507 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.508 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.509 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.513 2 DEBUG nova.objects.instance [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid a798e2c3-8294-482d-a073-883121427765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:19:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.530 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <uuid>a798e2c3-8294-482d-a073-883121427765</uuid>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <name>instance-00000074</name>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1575481376</nova:name>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:19:40</nova:creationTime>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:port uuid="d5347066-e322-4664-8798-146b9745fa17">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <nova:port uuid="d8f510f6-a87e-4c76-be24-a143f69f564d">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef2:4959" ipVersion="6"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <entry name="serial">a798e2c3-8294-482d-a073-883121427765</entry>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <entry name="uuid">a798e2c3-8294-482d-a073-883121427765</entry>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a798e2c3-8294-482d-a073-883121427765_disk">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a798e2c3-8294-482d-a073-883121427765_disk.config">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:eb:9b:3f"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <target dev="tapd5347066-e3"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:f2:49:59"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <target dev="tapd8f510f6-a8"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/console.log" append="off"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:19:41 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:19:41 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:19:41 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:19:41 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.531 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Preparing to wait for external event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.531 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.531 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Preparing to wait for external event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.533 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.533 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.534 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.534 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:41 np0005481065 podman[386063]: 2025-10-11 09:19:41.535587846 +0000 UTC m=+0.165917173 container init ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.535 2 DEBUG os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG nova.compute.manager [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG nova.compute.manager [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing instance network info cache due to event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG oslo_concurrency.lockutils [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG oslo_concurrency.lockutils [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.544 2 DEBUG nova.network.neutron [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5347066-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5347066-e3, col_values=(('external_ids', {'iface-id': 'd5347066-e322-4664-8798-146b9745fa17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:9b:3f', 'vm-uuid': 'a798e2c3-8294-482d-a073-883121427765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
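The two-command transaction above (AddPortCommand then DbSetCommand) is how os-vif hands the port to OVN: the `external_ids` written on the Interface record carry the Neutron port UUID that ovn-controller matches against a Logical_Switch_Port to bind the port. A minimal sketch of that mapping, with the values taken verbatim from the DbSetCommand in the log:

```python
# external_ids written on the Interface record for tapd5347066-e3,
# as shown in the DbSetCommand transaction above.
external_ids = {
    "iface-id": "d5347066-e322-4664-8798-146b9745fa17",   # Neutron port UUID
    "iface-status": "active",
    "attached-mac": "fa:16:3e:eb:9b:3f",
    "vm-uuid": "a798e2c3-8294-482d-a073-883121427765",    # Nova instance UUID
}

# ovn-controller matches iface-id against the Logical_Switch_Port name;
# once bound, Neutron emits the network-vif-plugged event Nova is waiting on.
assert external_ids["iface-id"] == "d5347066-e322-4664-8798-146b9745fa17"
```

The same pattern repeats below for the second port (tapd8f510f6-a8) with its own iface-id and MAC.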
Oct 11 05:19:41 np0005481065 NetworkManager[44960]: <info>  [1760174381.5510] manager: (tapd5347066-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:41 np0005481065 podman[386063]: 2025-10-11 09:19:41.552613986 +0000 UTC m=+0.182943263 container start ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:19:41 np0005481065 podman[386063]: 2025-10-11 09:19:41.556431774 +0000 UTC m=+0.186761111 container attach ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.562 2 INFO os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3')#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.563 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.563 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.564 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.564 2 DEBUG os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8f510f6-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8f510f6-a8, col_values=(('external_ids', {'iface-id': 'd8f510f6-a87e-4c76-be24-a143f69f564d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:49:59', 'vm-uuid': 'a798e2c3-8294-482d-a073-883121427765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 NetworkManager[44960]: <info>  [1760174381.5726] manager: (tapd8f510f6-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.583 2 INFO os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8')#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.635 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.635 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.635 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:eb:9b:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.636 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:f2:49:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.636 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Using config drive#033[00m
Oct 11 05:19:41 np0005481065 nova_compute[260935]: 2025-10-11 09:19:41.662 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.095 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Creating config drive at /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.107 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdr8pqgv2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.155 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d5347066-e322-4664-8798-146b9745fa17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.156 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.185 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.186 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.186 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.187 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.187 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.187 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] No waiting events found dispatching network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.188 2 WARNING nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received unexpected event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.188 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.188 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d8f510f6-a87e-4c76-be24-a143f69f564d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.188 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.189 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.189 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d8f510f6-a87e-4c76-be24-a143f69f564d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.266 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdr8pqgv2" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.299 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.311 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config a798e2c3-8294-482d-a073-883121427765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]: {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:    "0": [
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:        {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "devices": [
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "/dev/loop3"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            ],
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_name": "ceph_lv0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_size": "21470642176",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "name": "ceph_lv0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "tags": {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cluster_name": "ceph",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.crush_device_class": "",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.encrypted": "0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osd_id": "0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.type": "block",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.vdo": "0"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            },
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "type": "block",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "vg_name": "ceph_vg0"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:        }
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:    ],
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:    "1": [
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:        {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "devices": [
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "/dev/loop4"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            ],
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_name": "ceph_lv1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_size": "21470642176",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "name": "ceph_lv1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "tags": {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cluster_name": "ceph",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.crush_device_class": "",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.encrypted": "0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osd_id": "1",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.type": "block",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.vdo": "0"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            },
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "type": "block",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "vg_name": "ceph_vg1"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:        }
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:    ],
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:    "2": [
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:        {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "devices": [
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "/dev/loop5"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            ],
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_name": "ceph_lv2",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_size": "21470642176",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "name": "ceph_lv2",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "tags": {
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.cluster_name": "ceph",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.crush_device_class": "",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.encrypted": "0",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osd_id": "2",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.type": "block",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:                "ceph.vdo": "0"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            },
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "type": "block",
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:            "vg_name": "ceph_vg2"
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:        }
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]:    ]
Oct 11 05:19:42 np0005481065 bold_vaughan[386080]: }
Oct 11 05:19:42 np0005481065 systemd[1]: libpod-ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e.scope: Deactivated successfully.
Oct 11 05:19:42 np0005481065 podman[386063]: 2025-10-11 09:19:42.382886117 +0000 UTC m=+1.013215424 container died ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:19:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42-merged.mount: Deactivated successfully.
Oct 11 05:19:42 np0005481065 podman[386063]: 2025-10-11 09:19:42.452676307 +0000 UTC m=+1.083005544 container remove ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:19:42 np0005481065 systemd[1]: libpod-conmon-ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e.scope: Deactivated successfully.
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.562 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config a798e2c3-8294-482d-a073-883121427765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.564 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Deleting local config drive /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config because it was imported into RBD.#033[00m
Oct 11 05:19:42 np0005481065 kernel: tapd5347066-e3: entered promiscuous mode
Oct 11 05:19:42 np0005481065 NetworkManager[44960]: <info>  [1760174382.6301] manager: (tapd5347066-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01153|binding|INFO|Claiming lport d5347066-e322-4664-8798-146b9745fa17 for this chassis.
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01154|binding|INFO|d5347066-e322-4664-8798-146b9745fa17: Claiming fa:16:3e:eb:9b:3f 10.100.0.10
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.646 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:9b:3f 10.100.0.10'], port_security=['fa:16:3e:eb:9b:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d5347066-e322-4664-8798-146b9745fa17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.647 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d5347066-e322-4664-8798-146b9745fa17 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 bound to our chassis#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.650 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6346ea52-07fc-49ad-8f2d-fcfed9769241#033[00m
Oct 11 05:19:42 np0005481065 NetworkManager[44960]: <info>  [1760174382.6549] manager: (tapd8f510f6-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Oct 11 05:19:42 np0005481065 kernel: tapd8f510f6-a8: entered promiscuous mode
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01155|binding|INFO|Setting lport d5347066-e322-4664-8798-146b9745fa17 ovn-installed in OVS
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01156|binding|INFO|Setting lport d5347066-e322-4664-8798-146b9745fa17 up in Southbound
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01157|if_status|INFO|Dropped 5 log messages in last 1015 seconds (most recently, 1015 seconds ago) due to excessive rate
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01158|if_status|INFO|Not updating pb chassis for d8f510f6-a87e-4c76-be24-a143f69f564d now as sb is readonly
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad9d5d2-a082-4e81-ac61-a98edf9b0f56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 systemd-udevd[386235]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:19:42 np0005481065 systemd-udevd[386237]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01159|binding|INFO|Claiming lport d8f510f6-a87e-4c76-be24-a143f69f564d for this chassis.
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01160|binding|INFO|d8f510f6-a87e-4c76-be24-a143f69f564d: Claiming fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01161|binding|INFO|Setting lport d8f510f6-a87e-4c76-be24-a143f69f564d ovn-installed in OVS
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:42Z|01162|binding|INFO|Setting lport d8f510f6-a87e-4c76-be24-a143f69f564d up in Southbound
Oct 11 05:19:42 np0005481065 NetworkManager[44960]: <info>  [1760174382.7042] device (tapd8f510f6-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:19:42 np0005481065 NetworkManager[44960]: <info>  [1760174382.7060] device (tapd8f510f6-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:19:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.708 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], port_security=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef2:4959/64', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d8f510f6-a87e-4c76-be24-a143f69f564d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:19:42 np0005481065 NetworkManager[44960]: <info>  [1760174382.7167] device (tapd5347066-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:19:42 np0005481065 NetworkManager[44960]: <info>  [1760174382.7223] device (tapd5347066-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.722 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0210ddd-c4f4-4022-abe7-f430a5bbf2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a91dd1e5-725a-4c36-b84b-9874326f0b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 systemd-machined[215705]: New machine qemu-139-instance-00000074.
Oct 11 05:19:42 np0005481065 systemd[1]: Started Virtual Machine qemu-139-instance-00000074.
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d5948a-93bb-4355-bacb-5a79c8a5ac87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.796 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df14af2d-0a02-4681-a164-4a92ecef8f4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386283, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[906efa65-8a56-4c3d-8d0b-f7f7a99268f4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618858, 'tstamp': 618858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386295, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618862, 'tstamp': 618862}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386295, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.817 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346ea52-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.828 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6346ea52-00, col_values=(('external_ids', {'iface-id': 'f4a0a09a-d174-44d3-b63d-753fefd94646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.828 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.830 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d8f510f6-a87e-4c76-be24-a143f69f564d in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.832 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.846 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[097188b7-28f7-4cb3-affa-d45a684981d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.883 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c3e25e-0425-412c-b8e0-6e86a703c72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.886 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[972370a3-3b06-455e-9807-f03caa4b5d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.924 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[26ca91ef-6914-4732-933a-9ae99c87d000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.943 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5198391d-a711-4397-84e8-9906db650d8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 4, 'rx_bytes': 1872, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 4, 'rx_bytes': 1872, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386305, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.962 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[041d9dc2-80db-4b1c-bd5f-048a6f789bbe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce353a4c-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618964, 'tstamp': 618964}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386307, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 nova_compute[260935]: 2025-10-11 09:19:42.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.971 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce353a4c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.971 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce353a4c-70, col_values=(('external_ids', {'iface-id': 'bf7f7fa8-f1d9-4202-bec4-06d2178d548d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:19:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.258297982 +0000 UTC m=+0.101587578 container create 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.186420814 +0000 UTC m=+0.029710440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:19:43 np0005481065 systemd[1]: Started libpod-conmon-08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5.scope.
Oct 11 05:19:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.369654574 +0000 UTC m=+0.212944240 container init 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.381950071 +0000 UTC m=+0.225239677 container start 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.385386388 +0000 UTC m=+0.228676014 container attach 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:19:43 np0005481065 charming_darwin[386361]: 167 167
Oct 11 05:19:43 np0005481065 systemd[1]: libpod-08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5.scope: Deactivated successfully.
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.393807936 +0000 UTC m=+0.237097572 container died 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:19:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3f75522e4ec8d369c42ba6beeb0461574a71fb3f7a9639e02cef1c05d0ba7353-merged.mount: Deactivated successfully.
Oct 11 05:19:43 np0005481065 podman[386344]: 2025-10-11 09:19:43.441239794 +0000 UTC m=+0.284529400 container remove 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:19:43 np0005481065 systemd[1]: libpod-conmon-08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5.scope: Deactivated successfully.
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.564 2 DEBUG nova.network.neutron [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updated VIF entry in instance network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.565 2 DEBUG nova.network.neutron [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.628 2 DEBUG oslo_concurrency.lockutils [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.645 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.646 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.647 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.647 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.648 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Processing event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.648 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.649 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.649 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.649 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.650 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No event matching network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d in dict_keys([('network-vif-plugged', 'd5347066-e322-4664-8798-146b9745fa17')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.650 2 WARNING nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.679 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d8f510f6-a87e-4c76-be24-a143f69f564d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.680 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:19:43 np0005481065 nova_compute[260935]: 2025-10-11 09:19:43.709 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:19:43 np0005481065 podman[386385]: 2025-10-11 09:19:43.714608749 +0000 UTC m=+0.062103834 container create 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:19:43 np0005481065 systemd[1]: Started libpod-conmon-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope.
Oct 11 05:19:43 np0005481065 podman[386385]: 2025-10-11 09:19:43.69162599 +0000 UTC m=+0.039121045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:19:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:19:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:19:43 np0005481065 podman[386385]: 2025-10-11 09:19:43.890649077 +0000 UTC m=+0.238144122 container init 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:19:43 np0005481065 podman[386385]: 2025-10-11 09:19:43.902616695 +0000 UTC m=+0.250111750 container start 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:19:43 np0005481065 podman[386385]: 2025-10-11 09:19:43.919927543 +0000 UTC m=+0.267422588 container attach 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:19:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.258 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174384.2572055, a798e2c3-8294-482d-a073-883121427765 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.259 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] VM Started (Lifecycle Event)#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.290 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.295 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174384.2574992, a798e2c3-8294-482d-a073-883121427765 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.295 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.318 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.320 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:19:44 np0005481065 nova_compute[260935]: 2025-10-11 09:19:44.352 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:19:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 834 KiB/s wr, 90 op/s
Oct 11 05:19:45 np0005481065 stoic_villani[386444]: {
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "osd_id": 2,
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "type": "bluestore"
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:    },
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "osd_id": 0,
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "type": "bluestore"
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:    },
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "osd_id": 1,
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:        "type": "bluestore"
Oct 11 05:19:45 np0005481065 stoic_villani[386444]:    }
Oct 11 05:19:45 np0005481065 stoic_villani[386444]: }
Oct 11 05:19:45 np0005481065 systemd[1]: libpod-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope: Deactivated successfully.
Oct 11 05:19:45 np0005481065 systemd[1]: libpod-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope: Consumed 1.162s CPU time.
Oct 11 05:19:45 np0005481065 conmon[386444]: conmon 4ea051f3c8eb99f1a828 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope/container/memory.events
Oct 11 05:19:45 np0005481065 podman[386385]: 2025-10-11 09:19:45.110243195 +0000 UTC m=+1.457738280 container died 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:19:45 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d-merged.mount: Deactivated successfully.
Oct 11 05:19:45 np0005481065 podman[386385]: 2025-10-11 09:19:45.267332388 +0000 UTC m=+1.614827473 container remove 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:19:45 np0005481065 podman[386478]: 2025-10-11 09:19:45.272193635 +0000 UTC m=+0.130312318 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:19:45 np0005481065 systemd[1]: libpod-conmon-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope: Deactivated successfully.
Oct 11 05:19:45 np0005481065 podman[386485]: 2025-10-11 09:19:45.28901152 +0000 UTC m=+0.147230586 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 11 05:19:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:19:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:19:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:19:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:19:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 781126cd-9523-477b-9839-3adf48c1ff7e does not exist
Oct 11 05:19:45 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dfa3fdba-ffe3-4f18-8607-3271ea0d8e1b does not exist
Oct 11 05:19:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:19:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:19:46 np0005481065 nova_compute[260935]: 2025-10-11 09:19:46.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:46 np0005481065 nova_compute[260935]: 2025-10-11 09:19:46.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Oct 11 05:19:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 84 op/s
Oct 11 05:19:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 05:19:51 np0005481065 nova_compute[260935]: 2025-10-11 09:19:51.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:51 np0005481065 nova_compute[260935]: 2025-10-11 09:19:51.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.308 2 DEBUG nova.compute.manager [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.309 2 DEBUG oslo_concurrency.lockutils [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.309 2 DEBUG oslo_concurrency.lockutils [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.309 2 DEBUG oslo_concurrency.lockutils [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.310 2 DEBUG nova.compute.manager [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Processing event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.311 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Instance event wait completed in 8 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.316 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174392.3160458, a798e2c3-8294-482d-a073-883121427765 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.317 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.319 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.323 2 INFO nova.virt.libvirt.driver [-] [instance: a798e2c3-8294-482d-a073-883121427765] Instance spawned successfully.#033[00m
Oct 11 05:19:52 np0005481065 nova_compute[260935]: 2025-10-11 09:19:52.324 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:19:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 585 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 85 op/s
Oct 11 05:19:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 588 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:19:54
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log']
Oct 11 05:19:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:19:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.550 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.564 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.574 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.576 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.577 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.578 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.579 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:55 np0005481065 nova_compute[260935]: 2025-10-11 09:19:55.580 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.296 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:19:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 588 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.789 2 INFO nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Took 25.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:19:56 np0005481065 nova_compute[260935]: 2025-10-11 09:19:56.789 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.156 2 INFO nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Took 26.91 seconds to build instance.#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.828 2 DEBUG nova.compute.manager [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.828 2 DEBUG oslo_concurrency.lockutils [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.829 2 DEBUG oslo_concurrency.lockutils [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.829 2 DEBUG oslo_concurrency.lockutils [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.830 2 DEBUG nova.compute.manager [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:19:57 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.830 2 WARNING nova.compute.manager [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:19:58 np0005481065 nova_compute[260935]: 2025-10-11 09:19:57.996 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:19:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 600 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 11 05:19:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:19:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:59Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:8b:f3 10.100.0.10
Oct 11 05:19:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:19:59Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:8b:f3 10.100.0.10
Oct 11 05:20:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 600 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 87 op/s
Oct 11 05:20:01 np0005481065 nova_compute[260935]: 2025-10-11 09:20:01.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:01 np0005481065 nova_compute[260935]: 2025-10-11 09:20:01.886 2 DEBUG nova.compute.manager [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:01 np0005481065 nova_compute[260935]: 2025-10-11 09:20:01.886 2 DEBUG nova.compute.manager [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d5347066-e322-4664-8798-146b9745fa17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:20:01 np0005481065 nova_compute[260935]: 2025-10-11 09:20:01.887 2 DEBUG oslo_concurrency.lockutils [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:20:01 np0005481065 nova_compute[260935]: 2025-10-11 09:20:01.887 2 DEBUG oslo_concurrency.lockutils [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:20:01 np0005481065 nova_compute[260935]: 2025-10-11 09:20:01.888 2 DEBUG nova.network.neutron [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d5347066-e322-4664-8798-146b9745fa17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:20:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 610 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 11 05:20:03 np0005481065 nova_compute[260935]: 2025-10-11 09:20:03.465 2 DEBUG nova.network.neutron [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d5347066-e322-4664-8798-146b9745fa17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:20:03 np0005481065 nova_compute[260935]: 2025-10-11 09:20:03.465 2 DEBUG nova.network.neutron [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:03 np0005481065 nova_compute[260935]: 2025-10-11 09:20:03.693 2 DEBUG oslo_concurrency.lockutils [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:20:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 612 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 114 op/s
Oct 11 05:20:04 np0005481065 podman[386586]: 2025-10-11 09:20:04.834692281 +0000 UTC m=+0.113986377 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005251956558898222 of space, bias 1.0, pg target 1.5755869676694667 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:20:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.649 2 INFO nova.compute.manager [None req-4a62a28c-7f0f-4593-9113-b4d04a8748ec dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Get console output#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.655 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:20:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 612 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 109 op/s
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.936 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.937 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.937 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.938 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.938 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.939 2 INFO nova.compute.manager [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Terminating instance#033[00m
Oct 11 05:20:06 np0005481065 nova_compute[260935]: 2025-10-11 09:20:06.940 2 DEBUG nova.compute.manager [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:20:06 np0005481065 kernel: tap50e8a951-44 (unregistering): left promiscuous mode
Oct 11 05:20:06 np0005481065 NetworkManager[44960]: <info>  [1760174406.9910] device (tap50e8a951-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:20:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:07Z|01163|binding|INFO|Releasing lport 50e8a951-4439-4d05-b0da-9b126fae5c30 from this chassis (sb_readonly=0)
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:07Z|01164|binding|INFO|Setting lport 50e8a951-4439-4d05-b0da-9b126fae5c30 down in Southbound
Oct 11 05:20:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:07Z|01165|binding|INFO|Removing iface tap50e8a951-44 ovn-installed in OVS
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.027 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:8b:f3 10.100.0.10'], port_security=['fa:16:3e:23:8b:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c9132ae-fb4f-4a77-8e3f-14f516eed49c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef8b4bf7-fe14-4629-a7f4-eb04c76ca0d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=50e8a951-4439-4d05-b0da-9b126fae5c30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.030 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 50e8a951-4439-4d05-b0da-9b126fae5c30 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 unbound from our chassis#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.039 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[030e52bc-3592-47ae-957a-adc8fdd0470b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:07 np0005481065 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 11 05:20:07 np0005481065 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000073.scope: Consumed 14.699s CPU time.
Oct 11 05:20:07 np0005481065 systemd-machined[215705]: Machine qemu-138-instance-00000073 terminated.
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.112 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7c6f47-2296-4348-9034-808834ec747f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.116 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bed0ad-95b7-4f91-b383-322041860f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.165 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9024f222-5b7b-422d-bc1b-8482962492bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.186 2 INFO nova.virt.libvirt.driver [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance destroyed successfully.#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.186 2 DEBUG nova.objects.instance [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 6c9132ae-fb4f-4a77-8e3f-14f516eed49c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.195 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e68ede-60ec-4f85-93b8-e6ef2af6b35f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386622, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.202 2 DEBUG nova.virt.libvirt.vif [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-487160952',display_name='tempest-TestNetworkBasicOps-server-487160952',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-487160952',id=115,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIt4DfbM2d+jaXAhSihykdBZ+tOpog8UlIDD8J/El5+5N6K+dCWK3MxK+m6m5Y83GMP6LzeAgXIDOnVujmKdajdhJSoGBcewPota2xCPS2Aiqozz2Osh9vQuMUfwGcZ9Cw==',key_name='tempest-TestNetworkBasicOps-1573176618',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-wkj2eggj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=6c9132ae-fb4f-4a77-8e3f-14f516eed49c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.202 2 DEBUG nova.network.os_vif_util [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.203 2 DEBUG nova.network.os_vif_util [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.204 2 DEBUG os_vif [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e8a951-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.212 2 INFO os_vif [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44')#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c98f707-c89c-42f8-9854-43a457998173]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618299, 'tstamp': 618299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386629, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618303, 'tstamp': 618303}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386629, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.228 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3e0c3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf3e0c3-40, col_values=(('external_ids', {'iface-id': 'fb1da81b-c31d-4f26-a595-cb8eaad2e189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:20:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1382 writes, 6243 keys, 1382 commit groups, 1.0 writes per commit group, ingest: 8.86 MB, 0.01 MB/s#012Interval WAL: 1382 writes, 1382 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.3      0.85              0.23        33    0.026       0      0       0.0       0.0#012  L6      1/0    7.87 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    166.5    139.1      1.83              1.02        32    0.057    184K    17K       0.0       0.0#012 Sum      1/0    7.87 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4    113.8    116.7      2.68              1.26        65    0.041    184K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   7.2    168.5    170.7      0.30              0.18        10    0.030     35K   2574       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    166.5    139.1      1.83              1.02        32    0.057    184K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     68.6      0.85              0.23        32    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.057, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.31 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 2.7 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 33.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000315 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2226,32.30 MB,10.6237%) FilterBlock(66,523.80 KB,0.168263%) IndexBlock(66,876.88 KB,0.281685%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.650 2 INFO nova.virt.libvirt.driver [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deleting instance files /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_del#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.651 2 INFO nova.virt.libvirt.driver [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deletion of /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_del complete#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.733 2 INFO nova.compute.manager [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.735 2 DEBUG oslo.service.loopingcall [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.735 2 DEBUG nova.compute.manager [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:20:07 np0005481065 nova_compute[260935]: 2025-10-11 09:20:07.736 2 DEBUG nova.network.neutron [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:20:08 np0005481065 nova_compute[260935]: 2025-10-11 09:20:08.546 2 DEBUG nova.network.neutron [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:08 np0005481065 nova_compute[260935]: 2025-10-11 09:20:08.585 2 INFO nova.compute.manager [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 0.85 seconds to deallocate network for instance.#033[00m
Oct 11 05:20:08 np0005481065 nova_compute[260935]: 2025-10-11 09:20:08.637 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:08 np0005481065 nova_compute[260935]: 2025-10-11 09:20:08.638 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:08 np0005481065 nova_compute[260935]: 2025-10-11 09:20:08.656 2 DEBUG nova.compute.manager [req-2384ee2c-e4ff-4b43-8567-4b11eab93be9 req-d6a0f728-10ad-4f65-9bfc-84e0751cf60b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-vif-deleted-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:08 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:08Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:9b:3f 10.100.0.10
Oct 11 05:20:08 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:08Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:9b:3f 10.100.0.10
Oct 11 05:20:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 587 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 171 op/s
Oct 11 05:20:08 np0005481065 nova_compute[260935]: 2025-10-11 09:20:08.852 2 DEBUG oslo_concurrency.processutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979511492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:09 np0005481065 nova_compute[260935]: 2025-10-11 09:20:09.370 2 DEBUG oslo_concurrency.processutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:09 np0005481065 nova_compute[260935]: 2025-10-11 09:20:09.375 2 DEBUG nova.compute.provider_tree [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:20:09 np0005481065 nova_compute[260935]: 2025-10-11 09:20:09.399 2 DEBUG nova.scheduler.client.report [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:20:09 np0005481065 nova_compute[260935]: 2025-10-11 09:20:09.421 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:09 np0005481065 nova_compute[260935]: 2025-10-11 09:20:09.443 2 INFO nova.scheduler.client.report [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 6c9132ae-fb4f-4a77-8e3f-14f516eed49c#033[00m
Oct 11 05:20:09 np0005481065 nova_compute[260935]: 2025-10-11 09:20:09.504 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 587 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Oct 11 05:20:10 np0005481065 podman[386671]: 2025-10-11 09:20:10.812446587 +0000 UTC m=+0.102561206 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid)
Oct 11 05:20:11 np0005481065 nova_compute[260935]: 2025-10-11 09:20:11.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:12 np0005481065 nova_compute[260935]: 2025-10-11 09:20:12.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 563 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 2.3 MiB/s wr, 127 op/s
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.233 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.234 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.234 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.234 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.235 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.235 2 INFO nova.compute.manager [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Terminating instance#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.236 2 DEBUG nova.compute.manager [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:20:13 np0005481065 kernel: tapa1864eda-bf (unregistering): left promiscuous mode
Oct 11 05:20:13 np0005481065 NetworkManager[44960]: <info>  [1760174413.2986] device (tapa1864eda-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:20:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:13Z|01166|binding|INFO|Releasing lport a1864eda-bf8d-42a1-b315-967521604391 from this chassis (sb_readonly=0)
Oct 11 05:20:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:13Z|01167|binding|INFO|Setting lport a1864eda-bf8d-42a1-b315-967521604391 down in Southbound
Oct 11 05:20:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:13Z|01168|binding|INFO|Removing iface tapa1864eda-bf ovn-installed in OVS
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.319 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a2:5e 10.100.0.3'], port_security=['fa:16:3e:8e:a2:5e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '69860e17-caac-461a-a4a5-34ca72c0ee09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04903712-3ca4-4ffc-b1ee-8e3bb7ff59e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a1864eda-bf8d-42a1-b315-967521604391) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.321 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a1864eda-bf8d-42a1-b315-967521604391 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 unbound from our chassis#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.325 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.326 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c702162-68c6-4576-9327-49875287a7ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.327 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 namespace which is not needed anymore#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:13 np0005481065 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 11 05:20:13 np0005481065 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d00000071.scope: Consumed 15.392s CPU time.
Oct 11 05:20:13 np0005481065 systemd-machined[215705]: Machine qemu-136-instance-00000071 terminated.
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.482 2 INFO nova.virt.libvirt.driver [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance destroyed successfully.#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.482 2 DEBUG nova.objects.instance [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 69860e17-caac-461a-a4a5-34ca72c0ee09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.496 2 DEBUG nova.virt.libvirt.vif [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-372874397',display_name='tempest-TestNetworkBasicOps-server-372874397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-372874397',id=113,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIx2RsPvJwPys7fB6mymA8gM4JxRMjGXoxlum0FnOgNOoalsj2xjCE+J+1HgjSPsznufYemTb9pcBC69nUdop6linXie2N/WBdlfI1xGC4f2xXUMk1ZGsW/ToHE3DY3muA==',key_name='tempest-TestNetworkBasicOps-55635867',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:18:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-vrz2alhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:18:59Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=69860e17-caac-461a-a4a5-34ca72c0ee09,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.497 2 DEBUG nova.network.os_vif_util [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.499 2 DEBUG nova.network.os_vif_util [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.499 2 DEBUG os_vif [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1864eda-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.514 2 INFO os_vif [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf')#033[00m
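Editor's note: every nova_compute and ovn_metadata_agent payload above follows the standard oslo.log line format: timestamp, pid, level, logger name, a bracketed request context (global request id, `req-...` id, user id, project id), then the message. A small parser sketch for that prefix (the regex and helper are my own, not part of oslo.log):

```python
import re

# Parse the oslo.log context prefix seen in the nova_compute lines above.
OSLO_RE = re.compile(
    r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) '   # timestamp
    r'(?P<pid>\d+) (?P<level>[A-Z]+) (?P<logger>\S+) '       # pid, level, logger
    r'\[(?P<ctx>[^\]]*)\] (?P<msg>.*)$'                      # request context, message
)

def parse_oslo(line):
    m = OSLO_RE.match(line)
    return m.groupdict() if m else None

# Sample copied from the "Terminating instance" line in this log.
sample = ('2025-10-11 09:20:13.235 2 INFO nova.compute.manager '
          '[None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 '
          'bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] '
          '[instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Terminating instance')

rec = parse_oslo(sample)
```

Filtering a capture like this one on the `req-13d36e4c-...` id in `ctx` isolates the whole terminate flow, since nova propagates the same request id from the lock acquisition through VIF unplug and resource-tracker cleanup.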
Oct 11 05:20:13 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : haproxy version is 2.8.14-c23fe91
Oct 11 05:20:13 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : path to executable is /usr/sbin/haproxy
Oct 11 05:20:13 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [WARNING]  (384346) : Exiting Master process...
Oct 11 05:20:13 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [WARNING]  (384346) : Exiting Master process...
Oct 11 05:20:13 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [ALERT]    (384346) : Current worker (384348) exited with code 143 (Terminated)
Oct 11 05:20:13 np0005481065 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [WARNING]  (384346) : All workers exited. Exiting... (0)
Oct 11 05:20:13 np0005481065 systemd[1]: libpod-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1.scope: Deactivated successfully.
Oct 11 05:20:13 np0005481065 podman[386715]: 2025-10-11 09:20:13.562041231 +0000 UTC m=+0.080319968 container died 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 05:20:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1-userdata-shm.mount: Deactivated successfully.
Oct 11 05:20:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c06fe5bbac5efd6711a8c9e5d7b4f200ffa3fc048d59e854178cb58c74576a86-merged.mount: Deactivated successfully.
Oct 11 05:20:13 np0005481065 podman[386715]: 2025-10-11 09:20:13.619470632 +0000 UTC m=+0.137749369 container cleanup 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:20:13 np0005481065 systemd[1]: libpod-conmon-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1.scope: Deactivated successfully.
Oct 11 05:20:13 np0005481065 podman[386774]: 2025-10-11 09:20:13.704268085 +0000 UTC m=+0.056399823 container remove 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.716 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[264e2437-4d58-401a-baac-120610069b96]: (4, ('Sat Oct 11 09:20:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 (01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1)\n01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1\nSat Oct 11 09:20:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 (01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1)\n01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.719 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[152ce6c1-9fcf-45bf-a07d-4fe21f509ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.720 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:13 np0005481065 kernel: tapfbf3e0c3-40: left promiscuous mode
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.754 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[539006d3-6abd-419c-badd-aaaf312e2868]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33cd58c5-f920-4242-88b7-76f8ad59114a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.787 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6456c87-ae79-4ea7-80db-216bd30304a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.811 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ab1b99-5e8a-47ff-a7b7-835536de7333]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618277, 'reachable_time': 20900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386792, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfbf3e0c3\x2d4aa9\x2d41d1\x2d8ed0\x2dcbadb331b448.mount: Deactivated successfully.
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.817 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:20:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.817 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c58c198d-59b5-47b4-a582-d7086de052d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.967 2 INFO nova.virt.libvirt.driver [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deleting instance files /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09_del#033[00m
Oct 11 05:20:13 np0005481065 nova_compute[260935]: 2025-10-11 09:20:13.968 2 INFO nova.virt.libvirt.driver [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deletion of /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09_del complete#033[00m
Oct 11 05:20:14 np0005481065 nova_compute[260935]: 2025-10-11 09:20:14.026 2 INFO nova.compute.manager [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:20:14 np0005481065 nova_compute[260935]: 2025-10-11 09:20:14.027 2 DEBUG oslo.service.loopingcall [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:20:14 np0005481065 nova_compute[260935]: 2025-10-11 09:20:14.028 2 DEBUG nova.compute.manager [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:20:14 np0005481065 nova_compute[260935]: 2025-10-11 09:20:14.028 2 DEBUG nova.network.neutron [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:20:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Oct 11 05:20:14 np0005481065 nova_compute[260935]: 2025-10-11 09:20:14.939 2 DEBUG nova.network.neutron [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:14 np0005481065 nova_compute[260935]: 2025-10-11 09:20:14.959 2 INFO nova.compute.manager [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 0.93 seconds to deallocate network for instance.#033[00m
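Editor's note: the "Took 0.93 seconds to deallocate network" figure can be cross-checked against the timestamps of the surrounding lines (`deallocate_for_instance()` at 09:20:14.028, the completion message at 09:20:14.959), a quick sanity check worth doing when a teardown step looks slow. Values below are copied from this log:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
start = datetime.strptime("2025-10-11 09:20:14.028", FMT)  # deallocate_for_instance() begins
end = datetime.strptime("2025-10-11 09:20:14.959", FMT)    # "Took 0.93 seconds..." logged
elapsed = (end - start).total_seconds()                    # 0.931s, matching the reported 0.93s
```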
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.004 2 DEBUG nova.compute.manager [req-eae06087-d8b0-44dd-b2d1-0b6efff2609e req-304fe773-1142-4989-a200-d593de8e65a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-vif-deleted-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.026 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.027 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.216 2 DEBUG oslo_concurrency.processutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:15.217 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:15.218 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108013294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.668 2 DEBUG oslo_concurrency.processutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.674 2 DEBUG nova.compute.provider_tree [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.699 2 DEBUG nova.scheduler.client.report [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.726 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.755 2 INFO nova.scheduler.client.report [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 69860e17-caac-461a-a4a5-34ca72c0ee09#033[00m
Oct 11 05:20:15 np0005481065 podman[386815]: 2025-10-11 09:20:15.770916127 +0000 UTC m=+0.067266460 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 05:20:15 np0005481065 podman[386816]: 2025-10-11 09:20:15.804790443 +0000 UTC m=+0.100021704 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:20:15 np0005481065 nova_compute[260935]: 2025-10-11 09:20:15.821 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:16 np0005481065 nova_compute[260935]: 2025-10-11 09:20:16.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 11 05:20:18 np0005481065 nova_compute[260935]: 2025-10-11 09:20:18.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 486 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 11 05:20:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:19 np0005481065 nova_compute[260935]: 2025-10-11 09:20:19.503 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 486 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 114 KiB/s wr, 61 op/s
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.793 2 DEBUG nova.compute.manager [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.794 2 DEBUG nova.compute.manager [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d5347066-e322-4664-8798-146b9745fa17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.794 2 DEBUG oslo_concurrency.lockutils [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.795 2 DEBUG oslo_concurrency.lockutils [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.795 2 DEBUG nova.network.neutron [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d5347066-e322-4664-8798-146b9745fa17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.843 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.843 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.845 2 INFO nova.compute.manager [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Terminating instance#033[00m
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.846 2 DEBUG nova.compute.manager [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:20:20 np0005481065 kernel: tapd5347066-e3 (unregistering): left promiscuous mode
Oct 11 05:20:20 np0005481065 NetworkManager[44960]: <info>  [1760174420.9329] device (tapd5347066-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01169|binding|INFO|Releasing lport d5347066-e322-4664-8798-146b9745fa17 from this chassis (sb_readonly=0)
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01170|binding|INFO|Setting lport d5347066-e322-4664-8798-146b9745fa17 down in Southbound
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01171|binding|INFO|Removing iface tapd5347066-e3 ovn-installed in OVS
Oct 11 05:20:20 np0005481065 nova_compute[260935]: 2025-10-11 09:20:20.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01172|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01173|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01174|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:20:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:20Z|01175|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:20:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:20.964 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:9b:3f 10.100.0.10'], port_security=['fa:16:3e:eb:9b:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d5347066-e322-4664-8798-146b9745fa17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:20.966 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d5347066-e322-4664-8798-146b9745fa17 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 unbound from our chassis#033[00m
Oct 11 05:20:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:20.969 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6346ea52-07fc-49ad-8f2d-fcfed9769241#033[00m
Oct 11 05:20:20 np0005481065 kernel: tapd8f510f6-a8 (unregistering): left promiscuous mode
Oct 11 05:20:20 np0005481065 NetworkManager[44960]: <info>  [1760174420.9849] device (tapd8f510f6-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.002 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b222e8ad-1c66-489d-8c13-46ece9bac66c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:21Z|01176|binding|INFO|Releasing lport d8f510f6-a87e-4c76-be24-a143f69f564d from this chassis (sb_readonly=0)
Oct 11 05:20:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:21Z|01177|binding|INFO|Setting lport d8f510f6-a87e-4c76-be24-a143f69f564d down in Southbound
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:21Z|01178|binding|INFO|Removing iface tapd8f510f6-a8 ovn-installed in OVS
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.061 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f845f9d8-f19e-4dd1-82e1-305af7c1678e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.067 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], port_security=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef2:4959/64', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d8f510f6-a87e-4c76-be24-a143f69f564d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.068 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86e10ae5-9102-4822-b890-c22c6be2ca7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 11 05:20:21 np0005481065 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000074.scope: Consumed 17.789s CPU time.
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 systemd-machined[215705]: Machine qemu-139-instance-00000074 terminated.
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.112 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[53dd05a6-b512-47f5-8839-ff29aadf57ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.143 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b268d4a0-c597-43a3-a87e-c932e7f816b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386874, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.175 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e51667a-4d4d-4ee5-bfb5-717765d07bbb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618858, 'tstamp': 618858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386875, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618862, 'tstamp': 618862}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386875, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.177 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.190 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346ea52-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.190 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6346ea52-00, col_values=(('external_ids', {'iface-id': 'f4a0a09a-d174-44d3-b63d-753fefd94646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.192 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d8f510f6-a87e-4c76-be24-a143f69f564d in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.194 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.210 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[427582f8-23e5-43a1-9d36-281049d1455e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.243 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3729a8a5-7136-45ed-98a7-36f53b608bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.247 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa647fb-2ead-4c0b-870d-3f39d71f8256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.277 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d59f5c-6d19-4bfc-b3c3-2ed3f031fb0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 NetworkManager[44960]: <info>  [1760174421.2844] manager: (tapd8f510f6-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.298 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ebd787-1c6f-46a2-9bc4-94e1ecec7713]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 5, 'rx_bytes': 2912, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 5, 'rx_bytes': 2912, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386889, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.314 2 INFO nova.virt.libvirt.driver [-] [instance: a798e2c3-8294-482d-a073-883121427765] Instance destroyed successfully.#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.315 2 DEBUG nova.objects.instance [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid a798e2c3-8294-482d-a073-883121427765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38d855b8-e72c-4504-b270-2f35c2885aac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce353a4c-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618964, 'tstamp': 618964}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386902, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.321 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.331 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce353a4c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.331 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.332 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce353a4c-70, col_values=(('external_ids', {'iface-id': 'bf7f7fa8-f1d9-4202-bec4-06d2178d548d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.332 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.340 2 DEBUG nova.virt.libvirt.vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:56Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.341 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.342 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.343 2 DEBUG os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5347066-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.355 2 INFO os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3')#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.356 2 DEBUG nova.virt.libvirt.vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:56Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.357 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.358 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.358 2 DEBUG os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8f510f6-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.366 2 INFO os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8')#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.599 2 DEBUG nova.compute.manager [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.600 2 DEBUG oslo_concurrency.lockutils [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.600 2 DEBUG oslo_concurrency.lockutils [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.601 2 DEBUG oslo_concurrency.lockutils [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.601 2 DEBUG nova.compute.manager [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-unplugged-d5347066-e322-4664-8798-146b9745fa17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.602 2 DEBUG nova.compute.manager [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d5347066-e322-4664-8798-146b9745fa17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.842 2 INFO nova.virt.libvirt.driver [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Deleting instance files /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765_del#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.843 2 INFO nova.virt.libvirt.driver [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Deletion of /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765_del complete#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.915 2 INFO nova.compute.manager [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.915 2 DEBUG oslo.service.loopingcall [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.916 2 DEBUG nova.compute.manager [-] [instance: a798e2c3-8294-482d-a073-883121427765] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:20:21 np0005481065 nova_compute[260935]: 2025-10-11 09:20:21.916 2 DEBUG nova.network.neutron [-] [instance: a798e2c3-8294-482d-a073-883121427765] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.178 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174407.1768434, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.179 2 INFO nova.compute.manager [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.272 2 DEBUG nova.compute.manager [None req-a655d275-fde7-45e2-a8fa-87f663bc64d6 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.461 2 DEBUG nova.network.neutron [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d5347066-e322-4664-8798-146b9745fa17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.461 2 DEBUG nova.network.neutron [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.566 2 DEBUG oslo_concurrency.lockutils [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:20:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 433 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 115 KiB/s wr, 89 op/s
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.887 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.888 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.888 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.889 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.889 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-unplugged-d8f510f6-a87e-4c76-be24-a143f69f564d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.890 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d8f510f6-a87e-4c76-be24-a143f69f564d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.890 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.891 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.891 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.892 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.892 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:20:22 np0005481065 nova_compute[260935]: 2025-10-11 09:20:22.893 2 WARNING nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.656 2 DEBUG nova.network.neutron [-] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.683 2 INFO nova.compute.manager [-] [instance: a798e2c3-8294-482d-a073-883121427765] Took 1.77 seconds to deallocate network for instance.#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.725 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.725 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.745 2 DEBUG nova.compute.manager [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.745 2 DEBUG oslo_concurrency.lockutils [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.746 2 DEBUG oslo_concurrency.lockutils [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.746 2 DEBUG oslo_concurrency.lockutils [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.747 2 DEBUG nova.compute.manager [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.747 2 WARNING nova.compute.manager [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:20:23 np0005481065 nova_compute[260935]: 2025-10-11 09:20:23.892 2 DEBUG oslo_concurrency.processutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744330981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.379 2 DEBUG oslo_concurrency.processutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.388 2 DEBUG nova.compute.provider_tree [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.415 2 DEBUG nova.scheduler.client.report [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.456 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.502 2 INFO nova.scheduler.client.report [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance a798e2c3-8294-482d-a073-883121427765#033[00m
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.589 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 92 KiB/s wr, 64 op/s
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:20:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.981 2 DEBUG nova.compute.manager [req-212cd942-967a-44d1-a302-b4cd770d7703 req-0f521bf4-5749-48da-9cbb-93690379eb0a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-deleted-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:24 np0005481065 nova_compute[260935]: 2025-10-11 09:20:24.981 2 DEBUG nova.compute.manager [req-212cd942-967a-44d1-a302-b4cd770d7703 req-0f521bf4-5749-48da-9cbb-93690379eb0a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-deleted-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.637 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.638 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.638 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.639 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.639 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.641 2 INFO nova.compute.manager [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Terminating instance#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.642 2 DEBUG nova.compute.manager [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:25 np0005481065 kernel: tapdea05614-b0 (unregistering): left promiscuous mode
Oct 11 05:20:25 np0005481065 NetworkManager[44960]: <info>  [1760174425.7430] device (tapdea05614-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:20:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:25Z|01179|binding|INFO|Releasing lport dea05614-b04a-4078-a98e-428065514f37 from this chassis (sb_readonly=0)
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:25Z|01180|binding|INFO|Setting lport dea05614-b04a-4078-a98e-428065514f37 down in Southbound
Oct 11 05:20:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:25Z|01181|binding|INFO|Removing iface tapdea05614-b0 ovn-installed in OVS
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.771 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:3b:17 10.100.0.3'], port_security=['fa:16:3e:00:3b:17 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=dea05614-b04a-4078-a98e-428065514f37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.774 162815 INFO neutron.agent.ovn.metadata.agent [-] Port dea05614-b04a-4078-a98e-428065514f37 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 unbound from our chassis#033[00m
Oct 11 05:20:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.777 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6346ea52-07fc-49ad-8f2d-fcfed9769241, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:20:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.780 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9d163b-d82b-45a5-aa2c-c91147fdfa1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.781 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 namespace which is not needed anymore#033[00m
Oct 11 05:20:25 np0005481065 kernel: tapc8bc0542-b1 (unregistering): left promiscuous mode
Oct 11 05:20:25 np0005481065 NetworkManager[44960]: <info>  [1760174425.7889] device (tapc8bc0542-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:25Z|01182|binding|INFO|Releasing lport c8bc0542-b12a-4eb6-898d-5fd663184c82 from this chassis (sb_readonly=0)
Oct 11 05:20:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:25Z|01183|binding|INFO|Setting lport c8bc0542-b12a-4eb6-898d-5fd663184c82 down in Southbound
Oct 11 05:20:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:25Z|01184|binding|INFO|Removing iface tapc8bc0542-b1 ovn-installed in OVS
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.818 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], port_security=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe78:b91a/64', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c8bc0542-b12a-4eb6-898d-5fd663184c82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:25 np0005481065 nova_compute[260935]: 2025-10-11 09:20:25.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:25 np0005481065 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 11 05:20:25 np0005481065 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d00000072.scope: Consumed 16.944s CPU time.
Oct 11 05:20:25 np0005481065 systemd-machined[215705]: Machine qemu-137-instance-00000072 terminated.
Oct 11 05:20:25 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : haproxy version is 2.8.14-c23fe91
Oct 11 05:20:25 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : path to executable is /usr/sbin/haproxy
Oct 11 05:20:25 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [WARNING]  (384647) : Exiting Master process...
Oct 11 05:20:25 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [ALERT]    (384647) : Current worker (384649) exited with code 143 (Terminated)
Oct 11 05:20:25 np0005481065 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [WARNING]  (384647) : All workers exited. Exiting... (0)
Oct 11 05:20:25 np0005481065 systemd[1]: libpod-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2.scope: Deactivated successfully.
Oct 11 05:20:25 np0005481065 podman[386978]: 2025-10-11 09:20:25.944718579 +0000 UTC m=+0.044272571 container died 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:20:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-adfdfffff4362d598c2ef7afc8f136cd17a5b46b0bac4f37ab2cb909f4afc6f2-merged.mount: Deactivated successfully.
Oct 11 05:20:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2-userdata-shm.mount: Deactivated successfully.
Oct 11 05:20:25 np0005481065 podman[386978]: 2025-10-11 09:20:25.983536854 +0000 UTC m=+0.083090846 container cleanup 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:20:25 np0005481065 systemd[1]: libpod-conmon-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2.scope: Deactivated successfully.
Oct 11 05:20:26 np0005481065 podman[387009]: 2025-10-11 09:20:26.037446666 +0000 UTC m=+0.036196993 container remove 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.043 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09178a5b-cae3-480f-be65-3564c85489d5]: (4, ('Sat Oct 11 09:20:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 (35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2)\n35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2\nSat Oct 11 09:20:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 (35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2)\n35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.044 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12f98348-0fa3-4b1e-98da-8a7c2a0a1407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.045 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 kernel: tap6346ea52-00: left promiscuous mode
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 NetworkManager[44960]: <info>  [1760174426.0714] manager: (tapdea05614-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.076 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[241f2d6f-9a14-41c7-be75-8c3f7c5a7ef0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.108 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21e1ade7-6ce0-4b8b-9cec-ce12e3fc04c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bda1745d-a4ad-422c-8f41-7b255b780d1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.114 2 INFO nova.virt.libvirt.driver [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance destroyed successfully.#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.115 2 DEBUG nova.objects.instance [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 564a3027-0f98-40fd-a495-1c13a103ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.124 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6579c86a-eb5b-42eb-b111-5d3050e47eb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618836, 'reachable_time': 33711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387044, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 systemd[1]: run-netns-ovnmeta\x2d6346ea52\x2d07fc\x2d49ad\x2d8f2d\x2dfcfed9769241.mount: Deactivated successfully.
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.128 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.128 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a58e8790-a299-428b-8438-4b79bcbf97d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.129 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c8bc0542-b12a-4eb6-898d-5fd663184c82 in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.130 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.131 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[863d3e94-1fb7-46db-9433-d9bbe9fce57b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.131 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb namespace which is not needed anymore#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.152 2 DEBUG nova.virt.libvirt.vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:05Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.153 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.154 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.154 2 DEBUG os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdea05614-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.163 2 INFO os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0')#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.164 2 DEBUG nova.virt.libvirt.vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:05Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.164 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.165 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.165 2 DEBUG os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8bc0542-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.177 2 INFO os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1')#033[00m
Oct 11 05:20:26 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : haproxy version is 2.8.14-c23fe91
Oct 11 05:20:26 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : path to executable is /usr/sbin/haproxy
Oct 11 05:20:26 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [WARNING]  (384720) : Exiting Master process...
Oct 11 05:20:26 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [ALERT]    (384720) : Current worker (384722) exited with code 143 (Terminated)
Oct 11 05:20:26 np0005481065 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [WARNING]  (384720) : All workers exited. Exiting... (0)
Oct 11 05:20:26 np0005481065 systemd[1]: libpod-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4.scope: Deactivated successfully.
Oct 11 05:20:26 np0005481065 podman[387081]: 2025-10-11 09:20:26.283103007 +0000 UTC m=+0.052799510 container died 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:20:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4-userdata-shm.mount: Deactivated successfully.
Oct 11 05:20:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e8c33ef9ddc9558d0573d134c73c574d03ca7402b4d4ad1531c574c341a884ee-merged.mount: Deactivated successfully.
Oct 11 05:20:26 np0005481065 podman[387081]: 2025-10-11 09:20:26.332540613 +0000 UTC m=+0.102237106 container cleanup 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.339 2 DEBUG nova.compute.manager [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.339 2 DEBUG nova.compute.manager [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-dea05614-b04a-4078-a98e-428065514f37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.340 2 DEBUG oslo_concurrency.lockutils [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.340 2 DEBUG oslo_concurrency.lockutils [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.340 2 DEBUG nova.network.neutron [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port dea05614-b04a-4078-a98e-428065514f37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:20:26 np0005481065 systemd[1]: libpod-conmon-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4.scope: Deactivated successfully.
Oct 11 05:20:26 np0005481065 podman[387111]: 2025-10-11 09:20:26.398242837 +0000 UTC m=+0.041872243 container remove 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.408 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7216be97-223e-4860-9a1a-ed4ac0fadc5f]: (4, ('Sat Oct 11 09:20:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb (4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4)\n4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4\nSat Oct 11 09:20:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb (4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4)\n4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.410 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d04a085-7210-43a9-9ae8-3b80401a3370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.410 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:26 np0005481065 kernel: tapce353a4c-70: left promiscuous mode
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.445 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bd3284-ef5d-4065-a638-b7ec4820d3e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.476 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0432fe86-c177-46bf-af90-242208c4a69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2a0a32-d6ae-4ae8-ab50-9f59bb743587]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04987f3d-918d-45ec-95fb-93fdd24ec345]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618938, 'reachable_time': 31550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387128, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.508 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:20:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.508 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ae5bb6-f5d6-4530-9af3-a6c16f5f1b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.575 2 INFO nova.virt.libvirt.driver [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deleting instance files /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39_del#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.576 2 INFO nova.virt.libvirt.driver [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deletion of /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39_del complete#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.663 2 INFO nova.compute.manager [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.664 2 DEBUG oslo.service.loopingcall [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.665 2 DEBUG nova.compute.manager [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.665 2 DEBUG nova.network.neutron [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:20:26 np0005481065 nova_compute[260935]: 2025-10-11 09:20:26.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:20:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825321102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:20:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:20:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825321102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:20:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 23 KiB/s wr, 59 op/s
Oct 11 05:20:26 np0005481065 systemd[1]: run-netns-ovnmeta\x2dce353a4c\x2d7280\x2d46bb\x2dac7d\x2d157bb5dc08cb.mount: Deactivated successfully.
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.390 2 DEBUG nova.compute.manager [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-deleted-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.391 2 INFO nova.compute.manager [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Neutron deleted interface c8bc0542-b12a-4eb6-898d-5fd663184c82; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.392 2 DEBUG nova.network.neutron [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.415 2 DEBUG nova.compute.manager [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Detach interface failed, port_id=c8bc0542-b12a-4eb6-898d-5fd663184c82, reason: Instance 564a3027-0f98-40fd-a495-1c13a103ea39 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.673 2 DEBUG nova.network.neutron [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updated VIF entry in instance network info cache for port dea05614-b04a-4078-a98e-428065514f37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.673 2 DEBUG nova.network.neutron [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.702 2 DEBUG oslo_concurrency.lockutils [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.888 2 DEBUG nova.network.neutron [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.915 2 INFO nova.compute.manager [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 1.25 seconds to deallocate network for instance.#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.984 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:27 np0005481065 nova_compute[260935]: 2025-10-11 09:20:27.985 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.086 2 DEBUG oslo_concurrency.processutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.449 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-unplugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.450 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.450 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.451 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.451 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-unplugged-dea05614-b04a-4078-a98e-428065514f37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.451 2 WARNING nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-unplugged-dea05614-b04a-4078-a98e-428065514f37 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.452 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.452 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.452 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.453 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.453 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.454 2 WARNING nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.479 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174413.4776118, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.479 2 INFO nova.compute.manager [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/431750741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.533 2 DEBUG nova.compute.manager [None req-20a5a97f-cf48-4e8b-b6e5-6097bb9d0561 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.535 2 DEBUG oslo_concurrency.processutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.547 2 DEBUG nova.compute.provider_tree [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:20:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 29 KiB/s wr, 87 op/s
Oct 11 05:20:28 np0005481065 nova_compute[260935]: 2025-10-11 09:20:28.774 2 DEBUG nova.scheduler.client.report [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.937758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174428937899, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1362, "num_deletes": 251, "total_data_size": 2055089, "memory_usage": 2088912, "flush_reason": "Manual Compaction"}
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174428951584, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2023529, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48332, "largest_seqno": 49693, "table_properties": {"data_size": 2017140, "index_size": 3592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13616, "raw_average_key_size": 19, "raw_value_size": 2004290, "raw_average_value_size": 2938, "num_data_blocks": 161, "num_entries": 682, "num_filter_entries": 682, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174294, "oldest_key_time": 1760174294, "file_creation_time": 1760174428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 13874 microseconds, and 8536 cpu microseconds.
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.951645) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2023529 bytes OK
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.951670) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.955039) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.955069) EVENT_LOG_v1 {"time_micros": 1760174428955060, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.955094) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2049000, prev total WAL file size 2049000, number of live WAL files 2.
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.956251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1976KB)], [113(8063KB)]
Oct 11 05:20:28 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174428956333, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10280056, "oldest_snapshot_seqno": -1}
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6875 keys, 8525301 bytes, temperature: kUnknown
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174429017044, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8525301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8480929, "index_size": 26084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 179458, "raw_average_key_size": 26, "raw_value_size": 8359291, "raw_average_value_size": 1215, "num_data_blocks": 1014, "num_entries": 6875, "num_filter_entries": 6875, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.020130) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8525301 bytes
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.022408) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.1 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.9 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(9.3) write-amplify(4.2) OK, records in: 7389, records dropped: 514 output_compression: NoCompression
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.022440) EVENT_LOG_v1 {"time_micros": 1760174429022425, "job": 68, "event": "compaction_finished", "compaction_time_micros": 60801, "compaction_time_cpu_micros": 39573, "output_level": 6, "num_output_files": 1, "total_output_size": 8525301, "num_input_records": 7389, "num_output_records": 6875, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174429023235, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174429026333, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.956091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:20:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:29 np0005481065 nova_compute[260935]: 2025-10-11 09:20:29.285 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:29 np0005481065 nova_compute[260935]: 2025-10-11 09:20:29.340 2 INFO nova.scheduler.client.report [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 564a3027-0f98-40fd-a495-1c13a103ea39#033[00m
Oct 11 05:20:29 np0005481065 nova_compute[260935]: 2025-10-11 09:20:29.450 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:29 np0005481065 nova_compute[260935]: 2025-10-11 09:20:29.500 2 DEBUG nova.compute.manager [req-ccafee76-3b93-4d3a-a9c3-9b05824256ad req-3636d77e-858a-4992-a6a7-c26a409da628 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-deleted-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:20:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:29.672 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:29 np0005481065 nova_compute[260935]: 2025-10-11 09:20:29.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:29.675 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:20:29 np0005481065 nova_compute[260935]: 2025-10-11 09:20:29.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:30.678 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:30 np0005481065 nova_compute[260935]: 2025-10-11 09:20:30.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:30 np0005481065 nova_compute[260935]: 2025-10-11 09:20:30.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Oct 11 05:20:31 np0005481065 nova_compute[260935]: 2025-10-11 09:20:31.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:31 np0005481065 nova_compute[260935]: 2025-10-11 09:20:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:32 np0005481065 nova_compute[260935]: 2025-10-11 09:20:32.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:32 np0005481065 nova_compute[260935]: 2025-10-11 09:20:32.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Oct 11 05:20:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:33Z|01185|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:20:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:33Z|01186|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:20:33 np0005481065 nova_compute[260935]: 2025-10-11 09:20:33.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:33Z|01187|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:20:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:33Z|01188|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:20:33 np0005481065 nova_compute[260935]: 2025-10-11 09:20:33.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 11 05:20:35 np0005481065 nova_compute[260935]: 2025-10-11 09:20:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:35 np0005481065 nova_compute[260935]: 2025-10-11 09:20:35.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:35 np0005481065 nova_compute[260935]: 2025-10-11 09:20:35.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:20:35 np0005481065 podman[387155]: 2025-10-11 09:20:35.785743818 +0000 UTC m=+0.081783909 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.306 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174421.304581, a798e2c3-8294-482d-a073-883121427765 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.306 2 INFO nova.compute.manager [-] [instance: a798e2c3-8294-482d-a073-883121427765] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.348 2 DEBUG nova.compute.manager [None req-80a8a35b-6f75-4be8-83b5-4f78a0e28fa0 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:20:36 np0005481065 nova_compute[260935]: 2025-10-11 09:20:36.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:20:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 11 05:20:37 np0005481065 nova_compute[260935]: 2025-10-11 09:20:37.077 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:20:37 np0005481065 nova_compute[260935]: 2025-10-11 09:20:37.078 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:20:37 np0005481065 nova_compute[260935]: 2025-10-11 09:20:37.078 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:20:37 np0005481065 nova_compute[260935]: 2025-10-11 09:20:37.079 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:20:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.788 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.812 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.813 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.814 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.840 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.840 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.841 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.841 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:20:38 np0005481065 nova_compute[260935]: 2025-10-11 09:20:38.841 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596815401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.284 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.409 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.410 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.411 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.416 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.417 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.423 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.423 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.515 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.516 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.533 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.604 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.604 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.612 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.612 2 INFO nova.compute.claims [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.700 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.701 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2951MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.701 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:20:39 np0005481065 nova_compute[260935]: 2025-10-11 09:20:39.901 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:20:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499093858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.388 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.398 2 DEBUG nova.compute.provider_tree [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.416 2 DEBUG nova.scheduler.client.report [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.445 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.446 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.451 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.519 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.519 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.544 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.551 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.553 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.564 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.656 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.658 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.658 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Creating image(s)
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.692 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.726 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:20:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.754 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.758 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.799 2 DEBUG nova.policy [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.828 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.864 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.865 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.866 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.866 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.890 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:20:40 np0005481065 nova_compute[260935]: 2025-10-11 09:20:40.893 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.113 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174426.111461, 564a3027-0f98-40fd-a495-1c13a103ea39 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.114 2 INFO nova.compute.manager [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Stopped (Lifecycle Event)
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.145 2 DEBUG nova.compute.manager [None req-d45f3af2-02cb-461d-a8c3-b95ef63b8bda - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.222 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:20:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:20:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269150857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.282 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.310 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.315 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.333 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.368 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.368 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.375 2 DEBUG nova.objects.instance [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.389 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.390 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Ensure instance console log exists: /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.390 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.391 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.391 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:20:41 np0005481065 nova_compute[260935]: 2025-10-11 09:20:41.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:20:41 np0005481065 podman[387407]: 2025-10-11 09:20:41.78507813 +0000 UTC m=+0.080585565 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 05:20:42 np0005481065 nova_compute[260935]: 2025-10-11 09:20:42.299 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully created port: 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:20:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:20:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.490 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully updated port: 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.506 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.507 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.507 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.699 2 DEBUG nova.compute.manager [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.700 2 DEBUG nova.compute.manager [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.700 2 DEBUG oslo_concurrency.lockutils [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:20:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:20:44 np0005481065 nova_compute[260935]: 2025-10-11 09:20:44.910 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:20:45 np0005481065 podman[387501]: 2025-10-11 09:20:45.927579564 +0000 UTC m=+0.070642374 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 11 05:20:45 np0005481065 podman[387502]: 2025-10-11 09:20:45.969874098 +0000 UTC m=+0.109756489 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:20:46 np0005481065 nova_compute[260935]: 2025-10-11 09:20:46.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:20:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2138afd5-e370-436a-a9d5-d0530a47daa6 does not exist
Oct 11 05:20:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ca56203b-81c3-4232-8b9b-30f399d35282 does not exist
Oct 11 05:20:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0a1b985c-494b-457d-9592-646f0045b60b does not exist
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:20:46 np0005481065 nova_compute[260935]: 2025-10-11 09:20:46.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:20:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.318848877 +0000 UTC m=+0.066603110 container create b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:20:47 np0005481065 systemd[1]: Started libpod-conmon-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope.
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.292300278 +0000 UTC m=+0.040054561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:20:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.41639257 +0000 UTC m=+0.164146793 container init b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.426898066 +0000 UTC m=+0.174652259 container start b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.431023133 +0000 UTC m=+0.178777406 container attach b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:20:47 np0005481065 fervent_vaughan[387765]: 167 167
Oct 11 05:20:47 np0005481065 systemd[1]: libpod-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope: Deactivated successfully.
Oct 11 05:20:47 np0005481065 conmon[387765]: conmon b25160fe3a32ae798afb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope/container/memory.events
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.44829272 +0000 UTC m=+0.196047003 container died b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:20:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-76c56d0ec68e3f5ecf415bf9a92c9198305d5baefd2cfc559431c8dc118f6dc6-merged.mount: Deactivated successfully.
Oct 11 05:20:47 np0005481065 podman[387748]: 2025-10-11 09:20:47.503007074 +0000 UTC m=+0.250761297 container remove b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:20:47 np0005481065 systemd[1]: libpod-conmon-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope: Deactivated successfully.
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.587 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.620 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.620 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance network_info: |[{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.621 2 DEBUG oslo_concurrency.lockutils [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.621 2 DEBUG nova.network.neutron [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.625 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start _get_guest_xml network_info=[{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.629 2 WARNING nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.635 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.636 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.640 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.641 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.641 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.642 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.642 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.643 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.643 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.644 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.644 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.645 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.645 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.645 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.646 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.646 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:20:47 np0005481065 nova_compute[260935]: 2025-10-11 09:20:47.650 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:47 np0005481065 podman[387788]: 2025-10-11 09:20:47.736578145 +0000 UTC m=+0.063159253 container create dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 05:20:47 np0005481065 systemd[1]: Started libpod-conmon-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope.
Oct 11 05:20:47 np0005481065 podman[387788]: 2025-10-11 09:20:47.71690051 +0000 UTC m=+0.043481668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:20:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:47 np0005481065 podman[387788]: 2025-10-11 09:20:47.844746477 +0000 UTC m=+0.171327665 container init dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:20:47 np0005481065 podman[387788]: 2025-10-11 09:20:47.858743232 +0000 UTC m=+0.185324380 container start dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:20:47 np0005481065 podman[387788]: 2025-10-11 09:20:47.862939081 +0000 UTC m=+0.189520229 container attach dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:20:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:20:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/443230958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.109 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.143 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.148 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:20:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789507855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.615 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.619 2 DEBUG nova.virt.libvirt.vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:20:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.620 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.622 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.624 2 DEBUG nova.objects.instance [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.706 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <name>instance-00000075</name>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:20:47</nova:creationTime>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <entry name="serial">7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <entry name="uuid">7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:38:a7:f5"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <target dev="tap6aa7ac72-3e"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log" append="off"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:20:48 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:20:48 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:20:48 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:20:48 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.711 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Preparing to wait for external event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.712 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.713 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.713 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.714 2 DEBUG nova.virt.libvirt.vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:20:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.715 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.716 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.717 2 DEBUG os_vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6aa7ac72-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.726 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6aa7ac72-3e, col_values=(('external_ids', {'iface-id': '6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:a7:f5', 'vm-uuid': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:48 np0005481065 NetworkManager[44960]: <info>  [1760174448.7304] manager: (tap6aa7ac72-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.743 2 INFO os_vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e')#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.876 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.878 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.879 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:38:a7:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.880 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Using config drive#033[00m
Oct 11 05:20:48 np0005481065 nova_compute[260935]: 2025-10-11 09:20:48.914 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:20:49 np0005481065 romantic_murdock[387816]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:20:49 np0005481065 romantic_murdock[387816]: --> relative data size: 1.0
Oct 11 05:20:49 np0005481065 romantic_murdock[387816]: --> All data devices are unavailable
Oct 11 05:20:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:49 np0005481065 systemd[1]: libpod-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope: Deactivated successfully.
Oct 11 05:20:49 np0005481065 systemd[1]: libpod-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope: Consumed 1.055s CPU time.
Oct 11 05:20:49 np0005481065 podman[387788]: 2025-10-11 09:20:49.057515903 +0000 UTC m=+1.384097041 container died dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:20:49 np0005481065 nova_compute[260935]: 2025-10-11 09:20:49.076 2 DEBUG nova.network.neutron [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:20:49 np0005481065 nova_compute[260935]: 2025-10-11 09:20:49.077 2 DEBUG nova.network.neutron [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:20:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681-merged.mount: Deactivated successfully.
Oct 11 05:20:49 np0005481065 podman[387788]: 2025-10-11 09:20:49.11729719 +0000 UTC m=+1.443878288 container remove dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:20:49 np0005481065 systemd[1]: libpod-conmon-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope: Deactivated successfully.
Oct 11 05:20:49 np0005481065 nova_compute[260935]: 2025-10-11 09:20:49.206 2 DEBUG oslo_concurrency.lockutils [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:20:49 np0005481065 podman[388069]: 2025-10-11 09:20:49.991624194 +0000 UTC m=+0.065211341 container create 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:20:50 np0005481065 systemd[1]: Started libpod-conmon-12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8.scope.
Oct 11 05:20:50 np0005481065 podman[388069]: 2025-10-11 09:20:49.967216745 +0000 UTC m=+0.040803902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:20:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:50 np0005481065 podman[388069]: 2025-10-11 09:20:50.10345413 +0000 UTC m=+0.177041327 container init 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:20:50 np0005481065 podman[388069]: 2025-10-11 09:20:50.111100086 +0000 UTC m=+0.184687243 container start 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:20:50 np0005481065 podman[388069]: 2025-10-11 09:20:50.115428948 +0000 UTC m=+0.189016125 container attach 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:20:50 np0005481065 epic_cerf[388086]: 167 167
Oct 11 05:20:50 np0005481065 podman[388069]: 2025-10-11 09:20:50.117497897 +0000 UTC m=+0.191085044 container died 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:20:50 np0005481065 systemd[1]: libpod-12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8.scope: Deactivated successfully.
Oct 11 05:20:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2c9702f3978339b81d868c9a4fc1e35376cb3d28c2da032e84aa047c78052a9d-merged.mount: Deactivated successfully.
Oct 11 05:20:50 np0005481065 podman[388069]: 2025-10-11 09:20:50.165378758 +0000 UTC m=+0.238965905 container remove 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:20:50 np0005481065 systemd[1]: libpod-conmon-12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8.scope: Deactivated successfully.
Oct 11 05:20:50 np0005481065 podman[388111]: 2025-10-11 09:20:50.421481035 +0000 UTC m=+0.064763118 container create 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:20:50 np0005481065 systemd[1]: Started libpod-conmon-6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921.scope.
Oct 11 05:20:50 np0005481065 podman[388111]: 2025-10-11 09:20:50.392129557 +0000 UTC m=+0.035411680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:20:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:50 np0005481065 podman[388111]: 2025-10-11 09:20:50.532032695 +0000 UTC m=+0.175314818 container init 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:20:50 np0005481065 podman[388111]: 2025-10-11 09:20:50.546483833 +0000 UTC m=+0.189765876 container start 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:20:50 np0005481065 podman[388111]: 2025-10-11 09:20:50.550597969 +0000 UTC m=+0.193880062 container attach 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:20:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.106 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Creating config drive at /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config#033[00m
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.112 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_eq0i0r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.262 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_eq0i0r" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.302 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.307 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]: {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:    "0": [
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:        {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "devices": [
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "/dev/loop3"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            ],
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_name": "ceph_lv0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_size": "21470642176",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "name": "ceph_lv0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "tags": {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cluster_name": "ceph",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.crush_device_class": "",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.encrypted": "0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osd_id": "0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.type": "block",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.vdo": "0"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            },
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "type": "block",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "vg_name": "ceph_vg0"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:        }
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:    ],
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:    "1": [
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:        {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "devices": [
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "/dev/loop4"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            ],
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_name": "ceph_lv1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_size": "21470642176",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "name": "ceph_lv1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "tags": {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cluster_name": "ceph",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.crush_device_class": "",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.encrypted": "0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osd_id": "1",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.type": "block",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.vdo": "0"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            },
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "type": "block",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "vg_name": "ceph_vg1"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:        }
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:    ],
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:    "2": [
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:        {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "devices": [
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "/dev/loop5"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            ],
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_name": "ceph_lv2",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_size": "21470642176",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "name": "ceph_lv2",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "tags": {
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.cluster_name": "ceph",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.crush_device_class": "",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.encrypted": "0",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osd_id": "2",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.type": "block",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:                "ceph.vdo": "0"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            },
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "type": "block",
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:            "vg_name": "ceph_vg2"
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:        }
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]:    ]
Oct 11 05:20:51 np0005481065 ecstatic_lewin[388128]: }
Oct 11 05:20:51 np0005481065 systemd[1]: libpod-6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921.scope: Deactivated successfully.
Oct 11 05:20:51 np0005481065 podman[388111]: 2025-10-11 09:20:51.374573141 +0000 UTC m=+1.017855204 container died 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:20:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3-merged.mount: Deactivated successfully.
Oct 11 05:20:51 np0005481065 podman[388111]: 2025-10-11 09:20:51.461540296 +0000 UTC m=+1.104822349 container remove 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:20:51 np0005481065 systemd[1]: libpod-conmon-6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921.scope: Deactivated successfully.
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.564 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.566 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deleting local config drive /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config because it was imported into RBD.#033[00m
Oct 11 05:20:51 np0005481065 kernel: tap6aa7ac72-3e: entered promiscuous mode
Oct 11 05:20:51 np0005481065 NetworkManager[44960]: <info>  [1760174451.6495] manager: (tap6aa7ac72-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct 11 05:20:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:51Z|01189|binding|INFO|Claiming lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for this chassis.
Oct 11 05:20:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:51Z|01190|binding|INFO|6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5: Claiming fa:16:3e:38:a7:f5 10.100.0.5
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:51 np0005481065 systemd-udevd[388239]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:20:51 np0005481065 systemd-machined[215705]: New machine qemu-140-instance-00000075.
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.719 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a7:f5 10.100.0.5'], port_security=['fa:16:3e:38:a7:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fb90d02-96cd-4920-92ac-462cc457cb11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dc25595-7367-4d0c-a935-a850b2363806', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf7e0c92-efbd-4ce9-a9f4-9c0c91150cdc, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.721 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 in datapath 6fb90d02-96cd-4920-92ac-462cc457cb11 bound to our chassis#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.723 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fb90d02-96cd-4920-92ac-462cc457cb11#033[00m
Oct 11 05:20:51 np0005481065 NetworkManager[44960]: <info>  [1760174451.7276] device (tap6aa7ac72-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:20:51 np0005481065 NetworkManager[44960]: <info>  [1760174451.7287] device (tap6aa7ac72-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:20:51 np0005481065 systemd[1]: Started Virtual Machine qemu-140-instance-00000075.
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.738 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0153c93f-9631-4a75-9452-a24ad0e1f6d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.739 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fb90d02-91 in ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.742 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fb90d02-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.742 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43be4a03-9131-4ce2-a736-8beb070635f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed222795-9fe2-4511-8ea4-66e008d6f340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.757 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a50f53-d910-4f6f-a767-2a47336cd893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:51Z|01191|binding|INFO|Setting lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 ovn-installed in OVS
Oct 11 05:20:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:51Z|01192|binding|INFO|Setting lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 up in Southbound
Oct 11 05:20:51 np0005481065 nova_compute[260935]: 2025-10-11 09:20:51.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.789 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a48d1a33-8624-452a-bb6a-cadebfb9d525]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.823 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d034e89b-244e-431e-a832-b7e8e1630351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.830 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffea0a4c-223c-44ed-9344-3569feeb6f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 NetworkManager[44960]: <info>  [1760174451.8320] manager: (tap6fb90d02-90): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.880 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb908a6-d4de-4b56-84c3-d0152515a833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.883 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[77878250-341c-4f42-a78f-6708cf5f8d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 NetworkManager[44960]: <info>  [1760174451.9095] device (tap6fb90d02-90): carrier: link connected
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.916 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6d2f5b-bb03-41bc-8edd-7d9453ddfd44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[173504d1-dcad-4883-bfad-a212a0d6346f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fb90d02-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:3c:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629661, 'reachable_time': 21889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388334, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f63eca1-afc2-4054-a8ef-da4cf0299c8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:3cda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 629661, 'tstamp': 629661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388336, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7554b0-9669-43db-9ab4-674dadcb2ce7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fb90d02-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:3c:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629661, 'reachable_time': 21889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388337, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.029 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0152dc10-860b-483a-a7ba-b6d0d95d66eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[847f15a3-c5ad-423f-81e6-82a78acfe16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb90d02-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.111 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fb90d02-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:52 np0005481065 kernel: tap6fb90d02-90: entered promiscuous mode
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:52 np0005481065 NetworkManager[44960]: <info>  [1760174452.1150] manager: (tap6fb90d02-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.118 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fb90d02-90, col_values=(('external_ids', {'iface-id': '89155f05-1b39-4918-893c-15a2cd2a9493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:20:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:20:52Z|01193|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.140 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fb90d02-96cd-4920-92ac-462cc457cb11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fb90d02-96cd-4920-92ac-462cc457cb11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.142 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc59cfea-bb9f-496c-bdbd-0350891d1d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.143 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-6fb90d02-96cd-4920-92ac-462cc457cb11
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/6fb90d02-96cd-4920-92ac-462cc457cb11.pid.haproxy
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 6fb90d02-96cd-4920-92ac-462cc457cb11
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:20:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.144 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'env', 'PROCESS_TAG=haproxy-6fb90d02-96cd-4920-92ac-462cc457cb11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fb90d02-96cd-4920-92ac-462cc457cb11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.384684008 +0000 UTC m=+0.061702333 container create 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:20:52 np0005481065 systemd[1]: Started libpod-conmon-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope.
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.350924345 +0000 UTC m=+0.027942710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:20:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.49992701 +0000 UTC m=+0.176945325 container init 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.508350748 +0000 UTC m=+0.185369073 container start 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.512949867 +0000 UTC m=+0.189968152 container attach 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:20:52 np0005481065 systemd[1]: libpod-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope: Deactivated successfully.
Oct 11 05:20:52 np0005481065 quizzical_davinci[388443]: 167 167
Oct 11 05:20:52 np0005481065 conmon[388443]: conmon 8eb7757e41f9fa1d2035 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope/container/memory.events
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.520435999 +0000 UTC m=+0.197454314 container died 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:20:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5999b051e5cd6b406f42e90788c0770e102dbd73564df67bf5e55d2fb68082dc-merged.mount: Deactivated successfully.
Oct 11 05:20:52 np0005481065 podman[388387]: 2025-10-11 09:20:52.585053342 +0000 UTC m=+0.262071637 container remove 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:20:52 np0005481065 systemd[1]: libpod-conmon-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope: Deactivated successfully.
Oct 11 05:20:52 np0005481065 podman[388477]: 2025-10-11 09:20:52.646505117 +0000 UTC m=+0.074167555 container create ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:20:52 np0005481065 systemd[1]: Started libpod-conmon-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96.scope.
Oct 11 05:20:52 np0005481065 podman[388477]: 2025-10-11 09:20:52.615608515 +0000 UTC m=+0.043270983 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:20:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:20:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2806711dd2ed8736dcb901ac2aef71c255be2695bfa4fe140c6bb3417babc2b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:52 np0005481065 podman[388477]: 2025-10-11 09:20:52.767052578 +0000 UTC m=+0.194715036 container init ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:20:52 np0005481065 podman[388477]: 2025-10-11 09:20:52.772906014 +0000 UTC m=+0.200568442 container start ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:20:52 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : New worker (388525) forked
Oct 11 05:20:52 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : Loading success.
Oct 11 05:20:52 np0005481065 podman[388509]: 2025-10-11 09:20:52.807527731 +0000 UTC m=+0.048219832 container create 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:20:52 np0005481065 systemd[1]: Started libpod-conmon-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope.
Oct 11 05:20:52 np0005481065 podman[388509]: 2025-10-11 09:20:52.783360579 +0000 UTC m=+0.024052710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:20:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:20:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:20:52 np0005481065 podman[388509]: 2025-10-11 09:20:52.909930451 +0000 UTC m=+0.150622572 container init 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:20:52 np0005481065 podman[388509]: 2025-10-11 09:20:52.924702728 +0000 UTC m=+0.165394829 container start 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:20:52 np0005481065 podman[388509]: 2025-10-11 09:20:52.927899988 +0000 UTC m=+0.168592089 container attach 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.939 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174452.9391422, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.940 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Started (Lifecycle Event)
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.968 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.972 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174452.939985, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.972 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Paused (Lifecycle Event)
Oct 11 05:20:52 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:20:53 np0005481065 nova_compute[260935]: 2025-10-11 09:20:52.999 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:20:53 np0005481065 nova_compute[260935]: 2025-10-11 09:20:53.031 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:20:53 np0005481065 nova_compute[260935]: 2025-10-11 09:20:53.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]: {
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "osd_id": 2,
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "type": "bluestore"
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:    },
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "osd_id": 0,
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "type": "bluestore"
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:    },
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "osd_id": 1,
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:        "type": "bluestore"
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]:    }
Oct 11 05:20:53 np0005481065 crazy_leakey[388536]: }
Oct 11 05:20:53 np0005481065 systemd[1]: libpod-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope: Deactivated successfully.
Oct 11 05:20:53 np0005481065 systemd[1]: libpod-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope: Consumed 1.022s CPU time.
Oct 11 05:20:53 np0005481065 podman[388509]: 2025-10-11 09:20:53.958322977 +0000 UTC m=+1.199015118 container died 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:20:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47-merged.mount: Deactivated successfully.
Oct 11 05:20:54 np0005481065 podman[388509]: 2025-10-11 09:20:54.024349031 +0000 UTC m=+1.265041132 container remove 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:20:54 np0005481065 systemd[1]: libpod-conmon-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope: Deactivated successfully.
Oct 11 05:20:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:20:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:20:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:20:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:20:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5d5cb1a2-18f1-4f6c-a01a-756e0b9fd824 does not exist
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cd0f531e-15e6-479a-a427-3bbf5143d316 does not exist
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:20:54
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Oct 11 05:20:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:20:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:20:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:20:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:20:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 05:20:56 np0005481065 nova_compute[260935]: 2025-10-11 09:20:56.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.067 2 DEBUG nova.compute.manager [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG oslo_concurrency.lockutils [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG oslo_concurrency.lockutils [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG oslo_concurrency.lockutils [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG nova.compute.manager [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Processing event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.070 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.075 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174458.0746095, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.075 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Resumed (Lifecycle Event)
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.079 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.084 2 INFO nova.virt.libvirt.driver [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance spawned successfully.
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.085 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.159 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.165 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.170 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.170 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.171 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.172 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.172 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.173 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.386 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.533 2 INFO nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 17.88 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.534 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:20:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.771 2 INFO nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 19.19 seconds to build instance.#033[00m
Oct 11 05:20:58 np0005481065 nova_compute[260935]: 2025-10-11 09:20:58.821 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:20:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:00 np0005481065 nova_compute[260935]: 2025-10-11 09:21:00.375 2 DEBUG nova.compute.manager [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:00 np0005481065 nova_compute[260935]: 2025-10-11 09:21:00.375 2 DEBUG oslo_concurrency.lockutils [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:00 np0005481065 nova_compute[260935]: 2025-10-11 09:21:00.375 2 DEBUG oslo_concurrency.lockutils [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:00 np0005481065 nova_compute[260935]: 2025-10-11 09:21:00.376 2 DEBUG oslo_concurrency.lockutils [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:00 np0005481065 nova_compute[260935]: 2025-10-11 09:21:00.376 2 DEBUG nova.compute.manager [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:21:00 np0005481065 nova_compute[260935]: 2025-10-11 09:21:00.376 2 WARNING nova.compute.manager [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:21:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.434 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:ac:3b 2001:db8:0:1:f816:3eff:feab:ac3b 2001:db8::f816:3eff:feab:ac3b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feab:ac3b/64 2001:db8::f816:3eff:feab:ac3b/64', 'neutron:device_id': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=af33797e-d57d-45c8-92d7-86ea03fdf1ef) old=Port_Binding(mac=['fa:16:3e:ab:ac:3b 2001:db8::f816:3eff:feab:ac3b'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:ac3b/64', 'neutron:device_id': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:21:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.436 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port af33797e-d57d-45c8-92d7-86ea03fdf1ef in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 updated#033[00m
Oct 11 05:21:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.438 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87b272f-66b8-494e-ab80-c2ee66df15a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:21:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.439 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e60a330-520a-4f93-b79f-a30b2543c777]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 05:21:01 np0005481065 nova_compute[260935]: 2025-10-11 09:21:01.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:02 np0005481065 nova_compute[260935]: 2025-10-11 09:21:02.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:02 np0005481065 NetworkManager[44960]: <info>  [1760174462.4998] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Oct 11 05:21:02 np0005481065 NetworkManager[44960]: <info>  [1760174462.5017] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct 11 05:21:02 np0005481065 nova_compute[260935]: 2025-10-11 09:21:02.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:02 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:02Z|01194|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:21:02 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:02Z|01195|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 05:21:02 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:02Z|01196|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:21:02 np0005481065 nova_compute[260935]: 2025-10-11 09:21:02.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:21:03 np0005481065 nova_compute[260935]: 2025-10-11 09:21:03.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:04 np0005481065 nova_compute[260935]: 2025-10-11 09:21:04.230 2 DEBUG nova.compute.manager [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:04 np0005481065 nova_compute[260935]: 2025-10-11 09:21:04.231 2 DEBUG nova.compute.manager [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:04 np0005481065 nova_compute[260935]: 2025-10-11 09:21:04.231 2 DEBUG oslo_concurrency.lockutils [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:04 np0005481065 nova_compute[260935]: 2025-10-11 09:21:04.232 2 DEBUG oslo_concurrency.lockutils [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:04 np0005481065 nova_compute[260935]: 2025-10-11 09:21:04.232 2 DEBUG nova.network.neutron [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:21:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:21:06 np0005481065 nova_compute[260935]: 2025-10-11 09:21:06.415 2 DEBUG nova.network.neutron [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:21:06 np0005481065 nova_compute[260935]: 2025-10-11 09:21:06.417 2 DEBUG nova.network.neutron [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:06 np0005481065 nova_compute[260935]: 2025-10-11 09:21:06.470 2 DEBUG oslo_concurrency.lockutils [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:21:06 np0005481065 podman[388632]: 2025-10-11 09:21:06.781491145 +0000 UTC m=+0.079766542 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:21:06 np0005481065 nova_compute[260935]: 2025-10-11 09:21:06.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.658 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.659 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.675 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.748 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.749 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.758 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.758 2 INFO nova.compute.claims [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:21:07 np0005481065 nova_compute[260935]: 2025-10-11 09:21:07.926 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:21:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115680410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.383 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.393 2 DEBUG nova.compute.provider_tree [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.423 2 DEBUG nova.scheduler.client.report [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.455 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.457 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.522 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.523 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.553 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.593 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.758 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.760 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.760 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Creating image(s)#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.786 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.817 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.850 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.857 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.898 2 DEBUG nova.policy [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.936 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.937 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.938 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.938 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.964 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:08 np0005481065 nova_compute[260935]: 2025-10-11 09:21:08.968 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fcb45648-eb7b-4975-9f50-08675a787d9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.243 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fcb45648-eb7b-4975-9f50-08675a787d9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.331 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.492 2 DEBUG nova.objects.instance [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid fcb45648-eb7b-4975-9f50-08675a787d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.513 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.514 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Ensure instance console log exists: /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.515 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.515 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:09 np0005481065 nova_compute[260935]: 2025-10-11 09:21:09.516 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:21:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:10Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:a7:f5 10.100.0.5
Oct 11 05:21:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:10Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:a7:f5 10.100.0.5
Oct 11 05:21:11 np0005481065 nova_compute[260935]: 2025-10-11 09:21:11.496 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully created port: 9669f110-042a-40c6-b7a4-8d78d421ed23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:21:11 np0005481065 nova_compute[260935]: 2025-10-11 09:21:11.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:12 np0005481065 nova_compute[260935]: 2025-10-11 09:21:12.015 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully created port: 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:21:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 11 05:21:12 np0005481065 podman[388845]: 2025-10-11 09:21:12.816374461 +0000 UTC m=+0.100765263 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.503 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully updated port: 9669f110-042a-40c6-b7a4-8d78d421ed23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.656 2 DEBUG nova.compute.manager [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.657 2 DEBUG nova.compute.manager [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.658 2 DEBUG oslo_concurrency.lockutils [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.659 2 DEBUG oslo_concurrency.lockutils [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.659 2 DEBUG nova.network.neutron [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:13 np0005481065 nova_compute[260935]: 2025-10-11 09:21:13.922 2 DEBUG nova.network.neutron [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:21:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.288 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully updated port: 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.319 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.379 2 DEBUG nova.network.neutron [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.410 2 DEBUG oslo_concurrency.lockutils [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.411 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.411 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:21:14 np0005481065 nova_compute[260935]: 2025-10-11 09:21:14.579 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:21:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 05:21:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:15 np0005481065 nova_compute[260935]: 2025-10-11 09:21:15.876 2 DEBUG nova.compute.manager [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:15 np0005481065 nova_compute[260935]: 2025-10-11 09:21:15.876 2 DEBUG nova.compute.manager [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:15 np0005481065 nova_compute[260935]: 2025-10-11 09:21:15.877 2 DEBUG oslo_concurrency.lockutils [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 05:21:16 np0005481065 nova_compute[260935]: 2025-10-11 09:21:16.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:16 np0005481065 podman[388866]: 2025-10-11 09:21:16.858534831 +0000 UTC m=+0.148979615 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:21:16 np0005481065 podman[388865]: 2025-10-11 09:21:16.858766717 +0000 UTC m=+0.152658099 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 11 05:21:16 np0005481065 nova_compute[260935]: 2025-10-11 09:21:16.993 2 INFO nova.compute.manager [None req-753a3a21-0a9e-4dbf-a11c-2497b19a58a0 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Get console output#033[00m
Oct 11 05:21:17 np0005481065 nova_compute[260935]: 2025-10-11 09:21:17.001 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.227 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.255 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.256 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance network_info: |[{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.256 2 DEBUG oslo_concurrency.lockutils [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.256 2 DEBUG nova.network.neutron [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.261 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start _get_guest_xml network_info=[{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.266 2 WARNING nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.273 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.273 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.277 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.278 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.278 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.278 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.279 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.279 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.279 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.280 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.280 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.280 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.285 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 05:21:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:21:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856541549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.804 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.830 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:18 np0005481065 nova_compute[260935]: 2025-10-11 09:21:18.834 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:21:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78862230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.347 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.349 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.350 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.351 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.352 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.352 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.353 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.354 2 DEBUG nova.objects.instance [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid fcb45648-eb7b-4975-9f50-08675a787d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.382 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <uuid>fcb45648-eb7b-4975-9f50-08675a787d9c</uuid>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <name>instance-00000076</name>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-407795424</nova:name>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:21:18</nova:creationTime>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:port uuid="9669f110-042a-40c6-b7a4-8d78d421ed23">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <nova:port uuid="5a14fd50-c9b4-4c8c-b576-9f2a05d734f9">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feed:83c1" ipVersion="6"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feed:83c1" ipVersion="6"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <entry name="serial">fcb45648-eb7b-4975-9f50-08675a787d9c</entry>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <entry name="uuid">fcb45648-eb7b-4975-9f50-08675a787d9c</entry>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/fcb45648-eb7b-4975-9f50-08675a787d9c_disk">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:eb:54:35"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <target dev="tap9669f110-04"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ed:83:c1"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <target dev="tap5a14fd50-c9"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/console.log" append="off"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:21:19 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:21:19 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:21:19 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:21:19 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.384 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Preparing to wait for external event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.384 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Preparing to wait for external event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.386 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.386 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.387 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.387 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.387 2 DEBUG os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9669f110-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9669f110-04, col_values=(('external_ids', {'iface-id': '9669f110-042a-40c6-b7a4-8d78d421ed23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:54:35', 'vm-uuid': 'fcb45648-eb7b-4975-9f50-08675a787d9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:19 np0005481065 NetworkManager[44960]: <info>  [1760174479.3953] manager: (tap9669f110-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.404 2 INFO os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04')#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.405 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.405 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.405 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.406 2 DEBUG os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a14fd50-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a14fd50-c9, col_values=(('external_ids', {'iface-id': '5a14fd50-c9b4-4c8c-b576-9f2a05d734f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:83:c1', 'vm-uuid': 'fcb45648-eb7b-4975-9f50-08675a787d9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:19 np0005481065 NetworkManager[44960]: <info>  [1760174479.4108] manager: (tap5a14fd50-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.418 2 INFO os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9')
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.490 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.490 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.490 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:eb:54:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.491 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:ed:83:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.491 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Using config drive
Oct 11 05:21:19 np0005481065 nova_compute[260935]: 2025-10-11 09:21:19.514 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.097 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Creating config drive at /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.106 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbox69brg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.276 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbox69brg" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.317 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.322 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.525 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.527 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deleting local config drive /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config because it was imported into RBD.
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.6006] manager: (tap9669f110-04): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Oct 11 05:21:20 np0005481065 kernel: tap9669f110-04: entered promiscuous mode
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01197|binding|INFO|Claiming lport 9669f110-042a-40c6-b7a4-8d78d421ed23 for this chassis.
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01198|binding|INFO|9669f110-042a-40c6-b7a4-8d78d421ed23: Claiming fa:16:3e:eb:54:35 10.100.0.11
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.6237] manager: (tap5a14fd50-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.630 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:54:35 10.100.0.11'], port_security=['fa:16:3e:eb:54:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9669f110-042a-40c6-b7a4-8d78d421ed23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.631 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9669f110-042a-40c6-b7a4-8d78d421ed23 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 bound to our chassis
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.633 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 05:21:20 np0005481065 kernel: tap5a14fd50-c9: entered promiscuous mode
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01199|binding|INFO|Setting lport 9669f110-042a-40c6-b7a4-8d78d421ed23 ovn-installed in OVS
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01200|binding|INFO|Setting lport 9669f110-042a-40c6-b7a4-8d78d421ed23 up in Southbound
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01201|if_status|INFO|Not updating pb chassis for 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 now as sb is readonly
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c9aad8-dea9-45b7-846a-d132072c2ec1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.654 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02690ac5-d1 in ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.656 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02690ac5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.657 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2460c76d-1d2f-447d-be02-36109a760b59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.657 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[66aae679-3a9d-41d1-9c80-6c19eb72b5a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:20 np0005481065 systemd-udevd[389056]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:21:20 np0005481065 systemd-udevd[389054]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.684 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[dc21e49c-c67c-49fe-ab1c-d8d72cca49b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 systemd-machined[215705]: New machine qemu-141-instance-00000076.
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.7032] device (tap5a14fd50-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.7050] device (tap5a14fd50-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.7059] device (tap9669f110-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:21:20 np0005481065 systemd[1]: Started Virtual Machine qemu-141-instance-00000076.
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.7072] device (tap9669f110-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa55fc1-ec4b-4d46-a00b-dda6539951f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01202|binding|INFO|Claiming lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for this chassis.
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01203|binding|INFO|5a14fd50-c9b4-4c8c-b576-9f2a05d734f9: Claiming fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01204|binding|INFO|Setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 ovn-installed in OVS
Oct 11 05:21:20 np0005481065 nova_compute[260935]: 2025-10-11 09:21:20.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecc05a5-12f6-4013-8cde-abc42784d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 systemd-udevd[389060]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.7660] manager: (tap02690ac5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/488)
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.764 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43bb2f55-f24e-498b-9ab7-a334ac044c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.809 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5b3a0a-b59f-4547-8a77-2dcc400463ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.813 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e8be985b-93c9-492c-af37-5623af03f8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 NetworkManager[44960]: <info>  [1760174480.8379] device (tap02690ac5-d0): carrier: link connected
Oct 11 05:21:20 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:20Z|01205|binding|INFO|Setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 up in Southbound
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.843 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[78ec4edd-241a-42a1-bfcb-2d7be9ad7027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.848 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], port_security=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feed:83c1/64 2001:db8::f816:3eff:feed:83c1/64', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.862 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d68751-33ab-4f07-a887-7689745acf1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389087, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.881 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5febfd7c-ff01-4161-88b5-0d29e75afdad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:970e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632554, 'tstamp': 632554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389088, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.902 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cface96f-5b7b-483e-aa5b-81bdec2e5e6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389089, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.941 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b052046-cfca-4ec8-937f-fb879c839034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d16885fd-db63-489a-9287-6a065de1565b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.023 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.023 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02690ac5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:21 np0005481065 NetworkManager[44960]: <info>  [1760174481.0271] manager: (tap02690ac5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Oct 11 05:21:21 np0005481065 kernel: tap02690ac5-d0: entered promiscuous mode
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.032 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02690ac5-d0, col_values=(('external_ids', {'iface-id': 'bc2597de-285d-49b2-9e10-c93c87e43828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:21Z|01206|binding|INFO|Releasing lport bc2597de-285d-49b2-9e10-c93c87e43828 from this chassis (sb_readonly=0)
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.071 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02690ac5-d004-4c7d-b780-e5fed29e0aa7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02690ac5-d004-4c7d-b780-e5fed29e0aa7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb15664-0802-48ce-8c92-1015abb1d5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.074 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/02690ac5-d004-4c7d-b780-e5fed29e0aa7.pid.haproxy
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.077 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'env', 'PROCESS_TAG=haproxy-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02690ac5-d004-4c7d-b780-e5fed29e0aa7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.109 2 DEBUG nova.network.neutron [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updated VIF entry in instance network info cache for port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.110 2 DEBUG nova.network.neutron [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.142 2 DEBUG oslo_concurrency.lockutils [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.309 2 DEBUG nova.compute.manager [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.310 2 DEBUG oslo_concurrency.lockutils [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.310 2 DEBUG oslo_concurrency.lockutils [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.311 2 DEBUG oslo_concurrency.lockutils [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.311 2 DEBUG nova.compute.manager [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Processing event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:21:21 np0005481065 podman[389121]: 2025-10-11 09:21:21.527726698 +0000 UTC m=+0.064185582 container create a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:21:21 np0005481065 systemd[1]: Started libpod-conmon-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope.
Oct 11 05:21:21 np0005481065 podman[389121]: 2025-10-11 09:21:21.500730777 +0000 UTC m=+0.037189711 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:21:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a43bab447974501414b98be50feaa29e43ab1d6b08cf88962b6450fef6fae6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:21 np0005481065 podman[389121]: 2025-10-11 09:21:21.633253737 +0000 UTC m=+0.169712631 container init a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 05:21:21 np0005481065 podman[389121]: 2025-10-11 09:21:21.643703661 +0000 UTC m=+0.180162575 container start a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:21:21 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : New worker (389186) forked
Oct 11 05:21:21 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : Loading success.
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.722 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.728 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87b272f-66b8-494e-ab80-c2ee66df15a2#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.746 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85e19c22-3308-48c1-9f05-8da6eded2ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.748 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape87b272f-61 in ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.751 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape87b272f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55132554-8176-4410-ab58-22e710b79eff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc30cb8-f19c-4db3-a866-4f7448a3983e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.779 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6b1955-b316-4969-8e1c-25dfe432815e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.814 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd27ba31-9bf4-45e4-bef6-366a87bf0873]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.866 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[99590b47-903f-4fa9-94bb-48f2013dd825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1391291-c50a-42e7-b6f2-488217d1364b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 systemd-udevd[389073]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:21:21 np0005481065 NetworkManager[44960]: <info>  [1760174481.8735] manager: (tape87b272f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.913 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6c10e11a-1059-4b4d-a13e-03ff861968c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.918 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.918 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.918 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1d730707-b634-40a0-ba73-a2657a09f85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 nova_compute[260935]: 2025-10-11 09:21:21.918 2 DEBUG nova.objects.instance [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:21:21 np0005481065 NetworkManager[44960]: <info>  [1760174481.9601] device (tape87b272f-60): carrier: link connected
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.966 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[022874f2-f9a3-4ee8-bf31-f09945cb9aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b4323-55fe-478d-aff3-957261c3677f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389205, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b7b3ae-92f9-4157-b89d-14ea6c9867a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:ac3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632666, 'tstamp': 632666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389206, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d55a6f-9ae5-429a-b39b-8aad393a53a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389207, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c694064-3308-45df-9fe4-d5773e6d78c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.108 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[78c107a3-7aa7-40ea-829a-727d2347f549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.109 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87b272f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:22 np0005481065 NetworkManager[44960]: <info>  [1760174482.1129] manager: (tape87b272f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct 11 05:21:22 np0005481065 kernel: tape87b272f-60: entered promiscuous mode
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87b272f-60, col_values=(('external_ids', {'iface-id': 'af33797e-d57d-45c8-92d7-86ea03fdf1ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:22Z|01207|binding|INFO|Releasing lport af33797e-d57d-45c8-92d7-86ea03fdf1ef from this chassis (sb_readonly=0)
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.135 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e87b272f-66b8-494e-ab80-c2ee66df15a2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e87b272f-66b8-494e-ab80-c2ee66df15a2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d3e2e-0693-421d-82b6-3712aa0a1da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.137 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/e87b272f-66b8-494e-ab80-c2ee66df15a2.pid.haproxy
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:21:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.138 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'env', 'PROCESS_TAG=haproxy-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e87b272f-66b8-494e-ab80-c2ee66df15a2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.257 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.272 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174482.2715697, fcb45648-eb7b-4975-9f50-08675a787d9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.272 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.305 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.310 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174482.271799, fcb45648-eb7b-4975-9f50-08675a787d9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.311 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.344 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.355 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.394 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.435 2 DEBUG nova.objects.instance [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:21:22 np0005481065 nova_compute[260935]: 2025-10-11 09:21:22.469 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:21:22 np0005481065 podman[389238]: 2025-10-11 09:21:22.601559693 +0000 UTC m=+0.084156836 container create b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 05:21:22 np0005481065 systemd[1]: Started libpod-conmon-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026.scope.
Oct 11 05:21:22 np0005481065 podman[389238]: 2025-10-11 09:21:22.559777454 +0000 UTC m=+0.042374677 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:21:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17045466eb5e9b60860258040d0d3e0755d8193eccf00a44cefc9a37c24b920/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:22 np0005481065 podman[389238]: 2025-10-11 09:21:22.724670337 +0000 UTC m=+0.207267530 container init b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:21:22 np0005481065 podman[389238]: 2025-10-11 09:21:22.736058179 +0000 UTC m=+0.218655322 container start b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:21:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 11 05:21:22 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : New worker (389259) forked
Oct 11 05:21:22 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : Loading success.
Oct 11 05:21:23 np0005481065 nova_compute[260935]: 2025-10-11 09:21:23.422 2 DEBUG nova.policy [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.027 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.028 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.029 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.030 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.030 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No event matching network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 in dict_keys([('network-vif-plugged', '5a14fd50-c9b4-4c8c-b576-9f2a05d734f9')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.031 2 WARNING nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.032 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.032 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.033 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.034 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.035 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Processing event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.035 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.036 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.036 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.037 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.037 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.038 2 WARNING nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.039 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.044 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174484.044082, fcb45648-eb7b-4975-9f50-08675a787d9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:21:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.044 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.048 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.053 2 INFO nova.virt.libvirt.driver [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance spawned successfully.#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.053 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.083 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.088 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.104 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.105 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.105 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.106 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.107 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.107 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.120 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.207 2 INFO nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 15.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.208 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.286 2 INFO nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 16.56 seconds to build instance.#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.315 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.362 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully created port: 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:21:24 np0005481065 nova_compute[260935]: 2025-10-11 09:21:24.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:21:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.534 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully updated port: 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.552 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.552 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.553 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.820 2 DEBUG nova.compute.manager [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.820 2 DEBUG nova.compute.manager [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.821 2 DEBUG oslo_concurrency.lockutils [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:25 np0005481065 nova_compute[260935]: 2025-10-11 09:21:25.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:21:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073524993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:21:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:21:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073524993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:21:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Oct 11 05:21:26 np0005481065 nova_compute[260935]: 2025-10-11 09:21:26.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.748 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.775 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.776 2 DEBUG oslo_concurrency.lockutils [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.777 2 DEBUG nova.network.neutron [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.780 2 DEBUG nova.virt.libvirt.vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.780 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.781 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.782 2 DEBUG os_vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap300c071c-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap300c071c-a3, col_values=(('external_ids', {'iface-id': '300c071c-a312-4a9b-bd7a-1b16b9a35ae6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:68:88', 'vm-uuid': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 NetworkManager[44960]: <info>  [1760174488.7893] manager: (tap300c071c-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.796 2 INFO os_vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3')#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.797 2 DEBUG nova.virt.libvirt.vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.797 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.798 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.801 2 DEBUG nova.virt.libvirt.guest [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] attach device xml: <interface type="ethernet">
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:f8:68:88"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <target dev="tap300c071c-a3"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:21:28 np0005481065 nova_compute[260935]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 05:21:28 np0005481065 kernel: tap300c071c-a3: entered promiscuous mode
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:28Z|01208|binding|INFO|Claiming lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for this chassis.
Oct 11 05:21:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:28Z|01209|binding|INFO|300c071c-a312-4a9b-bd7a-1b16b9a35ae6: Claiming fa:16:3e:f8:68:88 10.100.0.30
Oct 11 05:21:28 np0005481065 NetworkManager[44960]: <info>  [1760174488.8191] manager: (tap300c071c-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.853 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:68:88 10.100.0.30'], port_security=['fa:16:3e:f8:68:88 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=300c071c-a312-4a9b-bd7a-1b16b9a35ae6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.859 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 in datapath 428d818e-c08a-4eef-be62-24fe484fed05 bound to our chassis#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.864 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 428d818e-c08a-4eef-be62-24fe484fed05#033[00m
Oct 11 05:21:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:28Z|01210|binding|INFO|Setting lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 ovn-installed in OVS
Oct 11 05:21:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:28Z|01211|binding|INFO|Setting lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 up in Southbound
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a408a41b-cc5f-4244-b787-87a3d8d56421]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.878 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap428d818e-c1 in ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.880 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap428d818e-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28430f20-2ed3-4433-ab25-6f6fc5a8390b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83542451-dc4c-467c-b213-10986833ada5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 systemd-udevd[389275]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.892 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d044123a-5bfe-4758-9f5b-cf44fe456977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 NetworkManager[44960]: <info>  [1760174488.9029] device (tap300c071c-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:21:28 np0005481065 NetworkManager[44960]: <info>  [1760174488.9040] device (tap300c071c-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.910 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.911 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.911 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:38:a7:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.912 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:f8:68:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.914 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49b838-7830-4cf0-bb48-61fce52dc680]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.940 2 DEBUG nova.virt.libvirt.guest [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:21:28</nova:creationTime>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:21:28 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    <nova:port uuid="300c071c-a312-4a9b-bd7a-1b16b9a35ae6">
Oct 11 05:21:28 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:21:28 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:21:28 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:21:28 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.942 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf0685a-a485-4ae8-8bb1-ca82f77b2d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 NetworkManager[44960]: <info>  [1760174488.9475] manager: (tap428d818e-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/494)
Oct 11 05:21:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[627b4fc7-118c-43e0-8396-835950a0de3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.965 2 DEBUG nova.compute.manager [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.965 2 DEBUG nova.compute.manager [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.966 2 DEBUG oslo_concurrency.lockutils [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.966 2 DEBUG oslo_concurrency.lockutils [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.966 2 DEBUG nova.network.neutron [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:28 np0005481065 nova_compute[260935]: 2025-10-11 09:21:28.994 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.997 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[92f7d87d-bd00-4c8a-84c3-9595dea1ec5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.004 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e2f42a-40e5-414d-a66f-3eedd6ea9028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 NetworkManager[44960]: <info>  [1760174489.0338] device (tap428d818e-c0): carrier: link connected
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.042 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcb914e-6142-402c-9879-fb4122acbd13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8a957e-728c-491d-be3a-b8ed4ac05d18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389301, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[566ea4fa-aa03-42f1-b82a-b4b9b0806c1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:fce7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633373, 'tstamp': 633373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389302, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.114 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79cbf57c-6c0d-46a9-85c2-5dd29f4ea5a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389303, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.149 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a63d72-2cd4-49b1-9aff-e039b1fd9c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4e63f4-232e-4846-b937-b538e38738f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.216 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.216 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.217 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap428d818e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:29 np0005481065 NetworkManager[44960]: <info>  [1760174489.2203] manager: (tap428d818e-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Oct 11 05:21:29 np0005481065 kernel: tap428d818e-c0: entered promiscuous mode
Oct 11 05:21:29 np0005481065 nova_compute[260935]: 2025-10-11 09:21:29.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.224 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap428d818e-c0, col_values=(('external_ids', {'iface-id': 'f0c2b309-d39a-4d01-be06-bdc63deb27f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:29 np0005481065 nova_compute[260935]: 2025-10-11 09:21:29.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:29Z|01212|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 05:21:29 np0005481065 nova_compute[260935]: 2025-10-11 09:21:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:29 np0005481065 nova_compute[260935]: 2025-10-11 09:21:29.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.248 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/428d818e-c08a-4eef-be62-24fe484fed05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/428d818e-c08a-4eef-be62-24fe484fed05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c88bdbd-c838-4ee4-a732-4b2418b01af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.255 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/428d818e-c08a-4eef-be62-24fe484fed05.pid.haproxy
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:21:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.256 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'env', 'PROCESS_TAG=haproxy-428d818e-c08a-4eef-be62-24fe484fed05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/428d818e-c08a-4eef-be62-24fe484fed05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:21:29 np0005481065 podman[389335]: 2025-10-11 09:21:29.673354473 +0000 UTC m=+0.078108635 container create 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:21:29 np0005481065 podman[389335]: 2025-10-11 09:21:29.631260015 +0000 UTC m=+0.036014227 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:21:29 np0005481065 systemd[1]: Started libpod-conmon-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f.scope.
Oct 11 05:21:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:29 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d17837c87f5f50f3b76a7653ef33e05fc19c9a00cc47d414f88016a25e132/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:29 np0005481065 podman[389335]: 2025-10-11 09:21:29.807875069 +0000 UTC m=+0.212629291 container init 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:21:29 np0005481065 podman[389335]: 2025-10-11 09:21:29.816920035 +0000 UTC m=+0.221674207 container start 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:21:29 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : New worker (389356) forked
Oct 11 05:21:29 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : Loading success.
Oct 11 05:21:30 np0005481065 nova_compute[260935]: 2025-10-11 09:21:30.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 05:21:30 np0005481065 nova_compute[260935]: 2025-10-11 09:21:30.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.140 2 DEBUG nova.network.neutron [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updated VIF entry in instance network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.141 2 DEBUG nova.network.neutron [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.197 2 DEBUG nova.network.neutron [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.197 2 DEBUG nova.network.neutron [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.204 2 DEBUG oslo_concurrency.lockutils [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.209 2 DEBUG oslo_concurrency.lockutils [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:31.211 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:21:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:31.214 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:21:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:31Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:68:88 10.100.0.30
Oct 11 05:21:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:31Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:68:88 10.100.0.30
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:31 np0005481065 nova_compute[260935]: 2025-10-11 09:21:31.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 11 05:21:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:33.216 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:33 np0005481065 nova_compute[260935]: 2025-10-11 09:21:33.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:33 np0005481065 nova_compute[260935]: 2025-10-11 09:21:33.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:34 np0005481065 nova_compute[260935]: 2025-10-11 09:21:34.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:34 np0005481065 nova_compute[260935]: 2025-10-11 09:21:34.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Oct 11 05:21:35 np0005481065 nova_compute[260935]: 2025-10-11 09:21:35.354 2 DEBUG nova.compute.manager [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:35 np0005481065 nova_compute[260935]: 2025-10-11 09:21:35.354 2 DEBUG oslo_concurrency.lockutils [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:35 np0005481065 nova_compute[260935]: 2025-10-11 09:21:35.355 2 DEBUG oslo_concurrency.lockutils [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:35 np0005481065 nova_compute[260935]: 2025-10-11 09:21:35.355 2 DEBUG oslo_concurrency.lockutils [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:35 np0005481065 nova_compute[260935]: 2025-10-11 09:21:35.356 2 DEBUG nova.compute.manager [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:21:35 np0005481065 nova_compute[260935]: 2025-10-11 09:21:35.357 2 WARNING nova.compute.manager [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.272 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.273 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.293 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.401 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.402 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.412 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.412 2 INFO nova.compute.claims [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.658 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.708 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.709 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:21:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.958 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.959 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:36 np0005481065 nova_compute[260935]: 2025-10-11 09:21:36.959 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:21:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:37Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:54:35 10.100.0.11
Oct 11 05:21:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:37Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:54:35 10.100.0.11
Oct 11 05:21:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:21:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072223939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.211 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.219 2 DEBUG nova.compute.provider_tree [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.243 2 DEBUG nova.scheduler.client.report [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.285 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.286 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.352 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.352 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.368 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.386 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.494 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.496 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.496 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Creating image(s)#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.530 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.566 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.603 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.609 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.680 2 DEBUG nova.compute.manager [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.681 2 DEBUG oslo_concurrency.lockutils [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.682 2 DEBUG oslo_concurrency.lockutils [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.683 2 DEBUG oslo_concurrency.lockutils [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.684 2 DEBUG nova.compute.manager [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.685 2 WARNING nova.compute.manager [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.718 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.720 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.721 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.722 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.759 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.768 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c677661f-bc62-4954-9130-09b285e6abe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:37 np0005481065 podman[389443]: 2025-10-11 09:21:37.806685642 +0000 UTC m=+0.104156231 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 05:21:37 np0005481065 nova_compute[260935]: 2025-10-11 09:21:37.878 2 DEBUG nova.policy [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.155 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c677661f-bc62-4954-9130-09b285e6abe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.209 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.292 2 DEBUG nova.objects.instance [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid c677661f-bc62-4954-9130-09b285e6abe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.307 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.308 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Ensure instance console log exists: /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.308 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.309 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.309 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 05:21:38 np0005481065 nova_compute[260935]: 2025-10-11 09:21:38.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:40 np0005481065 nova_compute[260935]: 2025-10-11 09:21:40.709 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Successfully created port: f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:21:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.007 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.023 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.024 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.024 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.025 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.025 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.025 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.047 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.047 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.048 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.048 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.049 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:21:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929259347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.552 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.666 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.672 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.672 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.676 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.676 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.951 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.952 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2502MB free_disk=59.73967361450195GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.952 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:41 np0005481065 nova_compute[260935]: 2025-10-11 09:21:41.952 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance fcb45648-eb7b-4975-9f50-08675a787d9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c677661f-bc62-4954-9130-09b285e6abe4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.048 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.158 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:21:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016864633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.593 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.599 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.621 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.661 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:21:42 np0005481065 nova_compute[260935]: 2025-10-11 09:21:42.661 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.513 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Successfully updated port: f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.535 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.536 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.536 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.696 2 DEBUG nova.compute.manager [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-changed-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.696 2 DEBUG nova.compute.manager [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Refreshing instance network info cache due to event network-changed-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.696 2 DEBUG oslo_concurrency.lockutils [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:43 np0005481065 podman[389618]: 2025-10-11 09:21:43.774001653 +0000 UTC m=+0.072004093 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:43 np0005481065 nova_compute[260935]: 2025-10-11 09:21:43.818 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:21:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.636 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updating instance_info_cache with network_info: [{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.666 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.667 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance network_info: |[{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.668 2 DEBUG oslo_concurrency.lockutils [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.668 2 DEBUG nova.network.neutron [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Refreshing network info cache for port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.673 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start _get_guest_xml network_info=[{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.680 2 WARNING nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.686 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.687 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.696 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.697 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.698 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.698 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.699 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.700 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.700 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.701 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.701 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.702 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.702 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.703 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.703 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.704 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:21:44 np0005481065 nova_compute[260935]: 2025-10-11 09:21:44.709 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 05:21:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:21:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236163751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.254 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.283 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.288 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:21:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920895477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.703 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.707 2 DEBUG nova.virt.libvirt.vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1301201135',display_name='tempest-TestNetworkBasicOps-server-1301201135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1301201135',id=119,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAFMu8G7LIufdrCxIhBQQCul5ZF6SEzch6wK6wL4IRYDngjbMt9CFdOpjTGBGp3BV5OfwWRm0v8OrQ6qzwsg+DrsRLw4XZQAC1nE0Ke58JBZ4nmeeruu5Ghd9xdLrk6Odw==',key_name='tempest-TestNetworkBasicOps-1626339166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-1lilo2wr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=c677661f-bc62-4954-9130-09b285e6abe4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.707 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.709 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.711 2 DEBUG nova.objects.instance [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid c677661f-bc62-4954-9130-09b285e6abe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.731 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <uuid>c677661f-bc62-4954-9130-09b285e6abe4</uuid>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <name>instance-00000077</name>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1301201135</nova:name>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:21:44</nova:creationTime>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <nova:port uuid="f09cab39-5082-4d84-91fe-0c7b1cc2fb8a">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <entry name="serial">c677661f-bc62-4954-9130-09b285e6abe4</entry>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <entry name="uuid">c677661f-bc62-4954-9130-09b285e6abe4</entry>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/c677661f-bc62-4954-9130-09b285e6abe4_disk">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/c677661f-bc62-4954-9130-09b285e6abe4_disk.config">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:29:57:ba"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <target dev="tapf09cab39-50"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/console.log" append="off"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:21:45 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:21:45 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:21:45 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:21:45 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.733 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Preparing to wait for external event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.733 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.734 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.734 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.736 2 DEBUG nova.virt.libvirt.vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1301201135',display_name='tempest-TestNetworkBasicOps-server-1301201135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1301201135',id=119,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAFMu8G7LIufdrCxIhBQQCul5ZF6SEzch6wK6wL4IRYDngjbMt9CFdOpjTGBGp3BV5OfwWRm0v8OrQ6qzwsg+DrsRLw4XZQAC1nE0Ke58JBZ4nmeeruu5Ghd9xdLrk6Odw==',key_name='tempest-TestNetworkBasicOps-1626339166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-1lilo2wr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=c677661f-bc62-4954-9130-09b285e6abe4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.736 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.737 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.738 2 DEBUG os_vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.741 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09cab39-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf09cab39-50, col_values=(('external_ids', {'iface-id': 'f09cab39-5082-4d84-91fe-0c7b1cc2fb8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:57:ba', 'vm-uuid': 'c677661f-bc62-4954-9130-09b285e6abe4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:45 np0005481065 NetworkManager[44960]: <info>  [1760174505.7971] manager: (tapf09cab39-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.812 2 INFO os_vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50')#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.871 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.871 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.872 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:29:57:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.873 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Using config drive#033[00m
Oct 11 05:21:45 np0005481065 nova_compute[260935]: 2025-10-11 09:21:45.906 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 05:21:46 np0005481065 nova_compute[260935]: 2025-10-11 09:21:46.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.094 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Creating config drive at /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.100 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpby9k0os1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.264 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpby9k0os1" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.291 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.295 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config c677661f-bc62-4954-9130-09b285e6abe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01213|binding|INFO|Releasing lport bc2597de-285d-49b2-9e10-c93c87e43828 from this chassis (sb_readonly=0)
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01214|binding|INFO|Releasing lport af33797e-d57d-45c8-92d7-86ea03fdf1ef from this chassis (sb_readonly=0)
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01215|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01216|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01217|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01218|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.474 2 DEBUG nova.network.neutron [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updated VIF entry in instance network info cache for port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.477 2 DEBUG nova.network.neutron [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updating instance_info_cache with network_info: [{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.515 2 DEBUG oslo_concurrency.lockutils [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.517 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config c677661f-bc62-4954-9130-09b285e6abe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.519 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deleting local config drive /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config because it was imported into RBD.#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 kernel: tapf09cab39-50: entered promiscuous mode
Oct 11 05:21:47 np0005481065 NetworkManager[44960]: <info>  [1760174507.6063] manager: (tapf09cab39-50): new Tun device (/org/freedesktop/NetworkManager/Devices/497)
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01219|binding|INFO|Claiming lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for this chassis.
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01220|binding|INFO|f09cab39-5082-4d84-91fe-0c7b1cc2fb8a: Claiming fa:16:3e:29:57:ba 10.100.0.19
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.617 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:57:ba 10.100.0.19'], port_security=['fa:16:3e:29:57:ba 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'c677661f-bc62-4954-9130-09b285e6abe4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8bcfc965-4dfd-4911-b600-4d86bbeab7bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.620 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a in datapath 428d818e-c08a-4eef-be62-24fe484fed05 bound to our chassis#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.624 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 428d818e-c08a-4eef-be62-24fe484fed05#033[00m
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01221|binding|INFO|Setting lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a ovn-installed in OVS
Oct 11 05:21:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:47Z|01222|binding|INFO|Setting lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a up in Southbound
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.656 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a80eec20-b682-49b4-a0cf-283a9fafcb93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:47 np0005481065 systemd-machined[215705]: New machine qemu-142-instance-00000077.
Oct 11 05:21:47 np0005481065 systemd[1]: Started Virtual Machine qemu-142-instance-00000077.
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.700 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10efe28c-dcbb-4087-88af-0ccaaf1385b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.705 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb2b385-3d14-4939-acd8-bd8d6392cbda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:47 np0005481065 systemd-udevd[389808]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.734 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4941e860-e826-4bc8-aafb-ee26af51a05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:47 np0005481065 NetworkManager[44960]: <info>  [1760174507.7451] device (tapf09cab39-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:21:47 np0005481065 NetworkManager[44960]: <info>  [1760174507.7462] device (tapf09cab39-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.757 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d579d8fc-4fcf-4d31-ab4c-76402dd3d0a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389813, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:47 np0005481065 podman[389770]: 2025-10-11 09:21:47.763628961 +0000 UTC m=+0.109492181 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00dda8fb-d749-4009-a113-b0aed9aba25d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633388, 'tstamp': 633388}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389823, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633391, 'tstamp': 633391}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389823, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.782 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 podman[389772]: 2025-10-11 09:21:47.787965548 +0000 UTC m=+0.138939842 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap428d818e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap428d818e-c0, col_values=(('external_ids', {'iface-id': 'f0c2b309-d39a-4d01-be06-bdc63deb27f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:21:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.902 2 DEBUG nova.compute.manager [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG oslo_concurrency.lockutils [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG oslo_concurrency.lockutils [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG oslo_concurrency.lockutils [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:47 np0005481065 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG nova.compute.manager [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Processing event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:21:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.012 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174509.0115993, c677661f-bc62-4954-9130-09b285e6abe4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Started (Lifecycle Event)#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.020 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.027 2 INFO nova.virt.libvirt.driver [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance spawned successfully.#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.027 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.035 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.046 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:21:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.056 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.057 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.057 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.058 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.058 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.059 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.066 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.066 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174509.0163283, c677661f-bc62-4954-9130-09b285e6abe4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.066 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.102 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.105 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174509.0186293, c677661f-bc62-4954-9130-09b285e6abe4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.123 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.125 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.139 2 INFO nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 11.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.139 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.176 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.225 2 INFO nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 12.86 seconds to build instance.#033[00m
Oct 11 05:21:49 np0005481065 nova_compute[260935]: 2025-10-11 09:21:49.245 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.204 2 DEBUG nova.compute.manager [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.205 2 DEBUG oslo_concurrency.lockutils [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.205 2 DEBUG oslo_concurrency.lockutils [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.206 2 DEBUG oslo_concurrency.lockutils [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.206 2 DEBUG nova.compute.manager [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] No waiting events found dispatching network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.207 2 WARNING nova.compute.manager [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received unexpected event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for instance with vm_state active and task_state None.#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.616 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.617 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.633 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.736 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.736 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.743 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.744 2 INFO nova.compute.claims [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:21:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:50 np0005481065 nova_compute[260935]: 2025-10-11 09:21:50.971 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:21:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:21:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583833662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.464 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.471 2 DEBUG nova.compute.provider_tree [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.489 2 DEBUG nova.scheduler.client.report [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.509 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.509 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.553 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.553 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.573 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.591 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.745 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.746 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.747 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Creating image(s)
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.772 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.799 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.832 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.836 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.943 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.945 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.945 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.946 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.980 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:21:51 np0005481065 nova_compute[260935]: 2025-10-11 09:21:51.986 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d703a39a-f502-4bc4-895a-bb87752c83df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.146 2 DEBUG nova.policy [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.409 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d703a39a-f502-4bc4-895a-bb87752c83df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.488 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.590 2 DEBUG nova.objects.instance [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid d703a39a-f502-4bc4-895a-bb87752c83df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.611 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.611 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Ensure instance console log exists: /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.612 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.612 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:21:52 np0005481065 nova_compute[260935]: 2025-10-11 09:21:52.613 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:21:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Oct 11 05:21:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:53Z|01223|binding|INFO|Releasing lport bc2597de-285d-49b2-9e10-c93c87e43828 from this chassis (sb_readonly=0)
Oct 11 05:21:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:53Z|01224|binding|INFO|Releasing lport af33797e-d57d-45c8-92d7-86ea03fdf1ef from this chassis (sb_readonly=0)
Oct 11 05:21:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:53Z|01225|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:21:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:53Z|01226|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 05:21:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:53Z|01227|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 05:21:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:21:53Z|01228|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:21:53 np0005481065 nova_compute[260935]: 2025-10-11 09:21:53.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:53 np0005481065 nova_compute[260935]: 2025-10-11 09:21:53.870 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully created port: 30ec8ddb-e058-428a-a08a-817e1e452938 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:21:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:54 np0005481065 nova_compute[260935]: 2025-10-11 09:21:54.457 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully created port: a35d553f-941e-4ff2-bc8a-39419cc32ff2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:21:54
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'images']
Oct 11 05:21:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:21:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:21:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.76 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3998 writes, 16K keys, 3998 commit groups, 1.0 writes per commit group, ingest: 19.74 MB, 0.03 MB/s#012Interval WAL: 3998 writes, 1549 syncs, 2.58 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2e514127-806d-4884-bb6a-9887f6f7a5f4 does not exist
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f81826bb-97db-4417-a9d7-d5283f775d29 does not exist
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f349f8f0-b7a5-4e6b-a3e2-37b2a195b0a5 does not exist
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:21:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.440 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully updated port: 30ec8ddb-e058-428a-a08a-817e1e452938 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.624 2 DEBUG nova.compute.manager [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.625 2 DEBUG nova.compute.manager [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.625 2 DEBUG oslo_concurrency.lockutils [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.625 2 DEBUG oslo_concurrency.lockutils [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.626 2 DEBUG nova.network.neutron [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:21:55 np0005481065 nova_compute[260935]: 2025-10-11 09:21:55.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:21:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.048 2 DEBUG nova.network.neutron [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.357736011 +0000 UTC m=+0.082514679 container create 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.323060153 +0000 UTC m=+0.047838911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:21:56 np0005481065 systemd[1]: Started libpod-conmon-09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd.scope.
Oct 11 05:21:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.502667321 +0000 UTC m=+0.227446059 container init 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.514802074 +0000 UTC m=+0.239580762 container start 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.519233449 +0000 UTC m=+0.244012137 container attach 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:21:56 np0005481065 epic_dirac[390354]: 167 167
Oct 11 05:21:56 np0005481065 systemd[1]: libpod-09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd.scope: Deactivated successfully.
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.524789356 +0000 UTC m=+0.249568054 container died 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.533 2 DEBUG nova.network.neutron [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.556 2 DEBUG oslo_concurrency.lockutils [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:21:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-019f0df99434ba76c1207907b8c2bcc7b3cfcac6e5fc6e30e3b14236928ff4b2-merged.mount: Deactivated successfully.
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.569 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully updated port: a35d553f-941e-4ff2-bc8a-39419cc32ff2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:21:56 np0005481065 podman[390338]: 2025-10-11 09:21:56.587565517 +0000 UTC m=+0.312344175 container remove 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.587 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.588 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.588 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:21:56 np0005481065 systemd[1]: libpod-conmon-09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd.scope: Deactivated successfully.
Oct 11 05:21:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 11 05:21:56 np0005481065 podman[390377]: 2025-10-11 09:21:56.913903457 +0000 UTC m=+0.100234620 container create ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:21:56 np0005481065 podman[390377]: 2025-10-11 09:21:56.870117261 +0000 UTC m=+0.056448494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:21:56 np0005481065 nova_compute[260935]: 2025-10-11 09:21:56.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:21:57 np0005481065 systemd[1]: Started libpod-conmon-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope.
Oct 11 05:21:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:57 np0005481065 nova_compute[260935]: 2025-10-11 09:21:57.063 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:21:57 np0005481065 podman[390377]: 2025-10-11 09:21:57.086481097 +0000 UTC m=+0.272812260 container init ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:21:57 np0005481065 podman[390377]: 2025-10-11 09:21:57.094503674 +0000 UTC m=+0.280834837 container start ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:21:57 np0005481065 podman[390377]: 2025-10-11 09:21:57.097677923 +0000 UTC m=+0.284009086 container attach ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:21:57 np0005481065 nova_compute[260935]: 2025-10-11 09:21:57.773 2 DEBUG nova.compute.manager [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:21:57 np0005481065 nova_compute[260935]: 2025-10-11 09:21:57.774 2 DEBUG nova.compute.manager [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-a35d553f-941e-4ff2-bc8a-39419cc32ff2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:21:57 np0005481065 nova_compute[260935]: 2025-10-11 09:21:57.774 2 DEBUG oslo_concurrency.lockutils [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:21:58 np0005481065 naughty_joliot[390393]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:21:58 np0005481065 naughty_joliot[390393]: --> relative data size: 1.0
Oct 11 05:21:58 np0005481065 naughty_joliot[390393]: --> All data devices are unavailable
Oct 11 05:21:58 np0005481065 systemd[1]: libpod-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope: Deactivated successfully.
Oct 11 05:21:58 np0005481065 podman[390377]: 2025-10-11 09:21:58.202251125 +0000 UTC m=+1.388582278 container died ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:21:58 np0005481065 systemd[1]: libpod-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope: Consumed 1.017s CPU time.
Oct 11 05:21:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c-merged.mount: Deactivated successfully.
Oct 11 05:21:58 np0005481065 podman[390377]: 2025-10-11 09:21:58.264479501 +0000 UTC m=+1.450810674 container remove ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:21:58 np0005481065 systemd[1]: libpod-conmon-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope: Deactivated successfully.
Oct 11 05:21:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 11 05:21:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.146295617 +0000 UTC m=+0.088397746 container create 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.104352623 +0000 UTC m=+0.046454802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:21:59 np0005481065 systemd[1]: Started libpod-conmon-479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6.scope.
Oct 11 05:21:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.246411092 +0000 UTC m=+0.188513231 container init 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.261141678 +0000 UTC m=+0.203243807 container start 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.26973302 +0000 UTC m=+0.211835189 container attach 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:21:59 np0005481065 hardcore_bardeen[390593]: 167 167
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.274262598 +0000 UTC m=+0.216364697 container died 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:21:59 np0005481065 systemd[1]: libpod-479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6.scope: Deactivated successfully.
Oct 11 05:21:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1b02528375492a19819aa9e39342232b550acf7191a1b798ea6dd0f9fe4c2a39-merged.mount: Deactivated successfully.
Oct 11 05:21:59 np0005481065 podman[390576]: 2025-10-11 09:21:59.338602343 +0000 UTC m=+0.280704452 container remove 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:21:59 np0005481065 systemd[1]: libpod-conmon-479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6.scope: Deactivated successfully.
Oct 11 05:21:59 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.433 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.461 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.462 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance network_info: |[{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.463 2 DEBUG oslo_concurrency.lockutils [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.463 2 DEBUG nova.network.neutron [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port a35d553f-941e-4ff2-bc8a-39419cc32ff2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.470 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start _get_guest_xml network_info=[{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.477 2 WARNING nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.484 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.485 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.489 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.489 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.489 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:21:59 np0005481065 nova_compute[260935]: 2025-10-11 09:21:59.494 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:21:59 np0005481065 podman[390619]: 2025-10-11 09:21:59.604158637 +0000 UTC m=+0.044042084 container create 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:21:59 np0005481065 systemd[1]: Started libpod-conmon-9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734.scope.
Oct 11 05:21:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:21:59 np0005481065 podman[390619]: 2025-10-11 09:21:59.587025754 +0000 UTC m=+0.026909221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:21:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:21:59 np0005481065 podman[390619]: 2025-10-11 09:21:59.736092211 +0000 UTC m=+0.175975678 container init 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:21:59 np0005481065 podman[390619]: 2025-10-11 09:21:59.745799735 +0000 UTC m=+0.185683192 container start 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:21:59 np0005481065 podman[390619]: 2025-10-11 09:21:59.749279793 +0000 UTC m=+0.189163390 container attach 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:21:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:21:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908042383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.007 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.042 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.048 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:22:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:22:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.1 total, 600.0 interval
    Cumulative writes: 38K writes, 143K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
    Cumulative WAL: 38K writes, 14K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 4198 writes, 17K keys, 4198 commit groups, 1.0 writes per commit group, ingest: 20.50 MB, 0.03 MB/s
    Interval WAL: 4198 writes, 1588 syncs, 2.64 writes per sync, written: 0.02 GB, 0.03 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:22:00 np0005481065 determined_newton[390653]: {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:    "0": [
Oct 11 05:22:00 np0005481065 determined_newton[390653]:        {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "devices": [
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "/dev/loop3"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            ],
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_name": "ceph_lv0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_size": "21470642176",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "name": "ceph_lv0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "tags": {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cluster_name": "ceph",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.crush_device_class": "",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.encrypted": "0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osd_id": "0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.type": "block",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.vdo": "0"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            },
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "type": "block",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "vg_name": "ceph_vg0"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:        }
Oct 11 05:22:00 np0005481065 determined_newton[390653]:    ],
Oct 11 05:22:00 np0005481065 determined_newton[390653]:    "1": [
Oct 11 05:22:00 np0005481065 determined_newton[390653]:        {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "devices": [
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "/dev/loop4"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            ],
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_name": "ceph_lv1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_size": "21470642176",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "name": "ceph_lv1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "tags": {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cluster_name": "ceph",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.crush_device_class": "",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.encrypted": "0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osd_id": "1",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.type": "block",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.vdo": "0"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            },
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "type": "block",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "vg_name": "ceph_vg1"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:        }
Oct 11 05:22:00 np0005481065 determined_newton[390653]:    ],
Oct 11 05:22:00 np0005481065 determined_newton[390653]:    "2": [
Oct 11 05:22:00 np0005481065 determined_newton[390653]:        {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "devices": [
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "/dev/loop5"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            ],
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_name": "ceph_lv2",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_size": "21470642176",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "name": "ceph_lv2",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "tags": {
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.cluster_name": "ceph",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.crush_device_class": "",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.encrypted": "0",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osd_id": "2",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.type": "block",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:                "ceph.vdo": "0"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            },
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "type": "block",
Oct 11 05:22:00 np0005481065 determined_newton[390653]:            "vg_name": "ceph_vg2"
Oct 11 05:22:00 np0005481065 determined_newton[390653]:        }
Oct 11 05:22:00 np0005481065 determined_newton[390653]:    ]
Oct 11 05:22:00 np0005481065 determined_newton[390653]: }
Oct 11 05:22:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:22:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586867095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.517 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.519 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.520 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.520 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.521 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.521 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.522 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.523 2 DEBUG nova.objects.instance [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid d703a39a-f502-4bc4-895a-bb87752c83df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:00 np0005481065 systemd[1]: libpod-9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734.scope: Deactivated successfully.
Oct 11 05:22:00 np0005481065 podman[390619]: 2025-10-11 09:22:00.535440299 +0000 UTC m=+0.975323786 container died 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.545 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <uuid>d703a39a-f502-4bc4-895a-bb87752c83df</uuid>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <name>instance-00000078</name>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-125667472</nova:name>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:21:59</nova:creationTime>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:port uuid="30ec8ddb-e058-428a-a08a-817e1e452938">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <nova:port uuid="a35d553f-941e-4ff2-bc8a-39419cc32ff2">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe62:2cec" ipVersion="6"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe62:2cec" ipVersion="6"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <entry name="serial">d703a39a-f502-4bc4-895a-bb87752c83df</entry>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <entry name="uuid">d703a39a-f502-4bc4-895a-bb87752c83df</entry>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d703a39a-f502-4bc4-895a-bb87752c83df_disk">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d703a39a-f502-4bc4-895a-bb87752c83df_disk.config">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:97:eb:db"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <target dev="tap30ec8ddb-e0"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:62:2c:ec"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <target dev="tapa35d553f-94"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/console.log" append="off"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:22:00 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:22:00 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:22:00 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:22:00 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Preparing to wait for external event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Preparing to wait for external event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.548 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.548 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.549 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.549 2 DEBUG os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30ec8ddb-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30ec8ddb-e0, col_values=(('external_ids', {'iface-id': '30ec8ddb-e058-428a-a08a-817e1e452938', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:eb:db', 'vm-uuid': 'd703a39a-f502-4bc4-895a-bb87752c83df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 NetworkManager[44960]: <info>  [1760174520.5593] manager: (tap30ec8ddb-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.570 2 INFO os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0')#033[00m
Oct 11 05:22:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46-merged.mount: Deactivated successfully.
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.571 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.571 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.572 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.572 2 DEBUG os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa35d553f-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa35d553f-94, col_values=(('external_ids', {'iface-id': 'a35d553f-941e-4ff2-bc8a-39419cc32ff2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:2c:ec', 'vm-uuid': 'd703a39a-f502-4bc4-895a-bb87752c83df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 NetworkManager[44960]: <info>  [1760174520.5798] manager: (tapa35d553f-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.589 2 INFO os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94')#033[00m
Oct 11 05:22:00 np0005481065 podman[390619]: 2025-10-11 09:22:00.606617248 +0000 UTC m=+1.046500695 container remove 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:22:00 np0005481065 systemd[1]: libpod-conmon-9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734.scope: Deactivated successfully.
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:97:eb:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:62:2c:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.659 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Using config drive#033[00m
Oct 11 05:22:00 np0005481065 nova_compute[260935]: 2025-10-11 09:22:00.686 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:22:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:00Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:57:ba 10.100.0.19
Oct 11 05:22:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:00Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:57:ba 10.100.0.19
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.250 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Creating config drive at /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.259 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyw7s9mne execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.415 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyw7s9mne" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.437 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.440 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config d703a39a-f502-4bc4-895a-bb87752c83df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.525006755 +0000 UTC m=+0.049373444 container create cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.543 2 DEBUG nova.network.neutron [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updated VIF entry in instance network info cache for port a35d553f-941e-4ff2-bc8a-39419cc32ff2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.544 2 DEBUG nova.network.neutron [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.566 2 DEBUG oslo_concurrency.lockutils [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:01 np0005481065 systemd[1]: Started libpod-conmon-cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04.scope.
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.590 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config d703a39a-f502-4bc4-895a-bb87752c83df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.590 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deleting local config drive /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config because it was imported into RBD.#033[00m
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.503104477 +0000 UTC m=+0.027471186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:22:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.628321911 +0000 UTC m=+0.152688600 container init cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.64353605 +0000 UTC m=+0.167902779 container start cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.647903354 +0000 UTC m=+0.172270073 container attach cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:22:01 np0005481065 flamboyant_blackburn[390935]: 167 167
Oct 11 05:22:01 np0005481065 systemd[1]: libpod-cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04.scope: Deactivated successfully.
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.655897609 +0000 UTC m=+0.180264328 container died cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:22:01 np0005481065 NetworkManager[44960]: <info>  [1760174521.6818] manager: (tap30ec8ddb-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Oct 11 05:22:01 np0005481065 kernel: tap30ec8ddb-e0: entered promiscuous mode
Oct 11 05:22:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-de24776d3d9548be8cd72115752df7cda374cf25f6b95dafcd7d0846a13fa76d-merged.mount: Deactivated successfully.
Oct 11 05:22:01 np0005481065 NetworkManager[44960]: <info>  [1760174521.7034] manager: (tapa35d553f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01229|binding|INFO|Claiming lport 30ec8ddb-e058-428a-a08a-817e1e452938 for this chassis.
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01230|binding|INFO|30ec8ddb-e058-428a-a08a-817e1e452938: Claiming fa:16:3e:97:eb:db 10.100.0.6
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:01 np0005481065 podman[390901]: 2025-10-11 09:22:01.714453642 +0000 UTC m=+0.238820341 container remove cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.728 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:eb:db 10.100.0.6'], port_security=['fa:16:3e:97:eb:db 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30ec8ddb-e058-428a-a08a-817e1e452938) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:01 np0005481065 systemd[1]: libpod-conmon-cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04.scope: Deactivated successfully.
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.735 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30ec8ddb-e058-428a-a08a-817e1e452938 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 bound to our chassis#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.737 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7#033[00m
Oct 11 05:22:01 np0005481065 systemd-udevd[390966]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:22:01 np0005481065 systemd-udevd[390968]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:22:01 np0005481065 kernel: tapa35d553f-94: entered promiscuous mode
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01231|binding|INFO|Setting lport 30ec8ddb-e058-428a-a08a-817e1e452938 ovn-installed in OVS
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01232|binding|INFO|Setting lport 30ec8ddb-e058-428a-a08a-817e1e452938 up in Southbound
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01233|if_status|INFO|Dropped 3 log messages in last 41 seconds (most recently, 41 seconds ago) due to excessive rate
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01234|if_status|INFO|Not updating pb chassis for a35d553f-941e-4ff2-bc8a-39419cc32ff2 now as sb is readonly
Oct 11 05:22:01 np0005481065 systemd-machined[215705]: New machine qemu-143-instance-00000078.
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01235|binding|INFO|Claiming lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 for this chassis.
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01236|binding|INFO|a35d553f-941e-4ff2-bc8a-39419cc32ff2: Claiming fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec
Oct 11 05:22:01 np0005481065 NetworkManager[44960]: <info>  [1760174521.7618] device (tapa35d553f-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:22:01 np0005481065 NetworkManager[44960]: <info>  [1760174521.7634] device (tapa35d553f-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:22:01 np0005481065 NetworkManager[44960]: <info>  [1760174521.7646] device (tap30ec8ddb-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:22:01 np0005481065 NetworkManager[44960]: <info>  [1760174521.7665] device (tap30ec8ddb-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.766 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], port_security=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe62:2cec/64 2001:db8::f816:3eff:fe62:2cec/64', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a35d553f-941e-4ff2-bc8a-39419cc32ff2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:01 np0005481065 systemd[1]: Started Virtual Machine qemu-143-instance-00000078.
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.773 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[110c1904-4545-439f-92b9-fed8a8e56ac9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01237|binding|INFO|Setting lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 ovn-installed in OVS
Oct 11 05:22:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:01Z|01238|binding|INFO|Setting lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 up in Southbound
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.802 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ff24a203-60ae-44a4-9365-9a0d77f7372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.809 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[da42ca27-93d7-46d6-ae04-033fc71b7dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.849 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[26857fe2-1279-4723-9c3c-08cb438f6f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8478b1-0f0d-4d1c-bd52-592a8e04001d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390982, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.903 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[969cc1f3-1263-43ef-9195-62ab18ca5504]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632567, 'tstamp': 632567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390984, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632571, 'tstamp': 632571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390984, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.911 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02690ac5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02690ac5-d0, col_values=(('external_ids', {'iface-id': 'bc2597de-285d-49b2-9e10-c93c87e43828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.917 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a35d553f-941e-4ff2-bc8a-39419cc32ff2 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.918 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87b272f-66b8-494e-ab80-c2ee66df15a2#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.945 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9568afbe-39bd-47cb-8a2f-316625883721]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 nova_compute[260935]: 2025-10-11 09:22:01.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.989 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9529afb7-cb33-4863-8f97-003bbdabc3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.994 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[84929a72-8425-4bb0-b076-acb5fe7b9ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.033 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[50092168-032d-4070-9d33-9430cbe4ef64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:02 np0005481065 podman[390991]: 2025-10-11 09:22:02.052689457 +0000 UTC m=+0.084019042 container create 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9961a8bc-0b7f-442c-a86d-b89f944ad335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391008, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.097 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2845e5-8102-4bfb-9bf2-173c6309e326]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape87b272f-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632680, 'tstamp': 632680}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391011, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.099 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:02 np0005481065 podman[390991]: 2025-10-11 09:22:02.024486441 +0000 UTC m=+0.055816126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.137 2 DEBUG nova.compute.manager [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.138 2 DEBUG oslo_concurrency.lockutils [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.139 2 DEBUG oslo_concurrency.lockutils [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.140 2 DEBUG oslo_concurrency.lockutils [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.141 2 DEBUG nova.compute.manager [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Processing event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:22:02 np0005481065 systemd[1]: Started libpod-conmon-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope.
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87b272f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87b272f-60, col_values=(('external_ids', {'iface-id': 'af33797e-d57d-45c8-92d7-86ea03fdf1ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.149 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:22:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:22:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:22:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:22:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:22:02 np0005481065 podman[390991]: 2025-10-11 09:22:02.216468789 +0000 UTC m=+0.247798394 container init 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:22:02 np0005481065 podman[390991]: 2025-10-11 09:22:02.234711694 +0000 UTC m=+0.266041289 container start 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:22:02 np0005481065 podman[390991]: 2025-10-11 09:22:02.239482259 +0000 UTC m=+0.270811884 container attach 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:22:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.921 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174522.9214606, d703a39a-f502-4bc4-895a-bb87752c83df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.922 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Started (Lifecycle Event)#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.950 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.960 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174522.9253392, d703a39a-f502-4bc4-895a-bb87752c83df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.960 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.978 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:02 np0005481065 nova_compute[260935]: 2025-10-11 09:22:02.982 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:22:03 np0005481065 nova_compute[260935]: 2025-10-11 09:22:03.006 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]: {
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "osd_id": 2,
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "type": "bluestore"
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:    },
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "osd_id": 0,
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "type": "bluestore"
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:    },
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "osd_id": 1,
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:        "type": "bluestore"
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]:    }
Oct 11 05:22:03 np0005481065 gracious_kapitsa[391014]: }
Oct 11 05:22:03 np0005481065 systemd[1]: libpod-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope: Deactivated successfully.
Oct 11 05:22:03 np0005481065 podman[390991]: 2025-10-11 09:22:03.398106925 +0000 UTC m=+1.429436560 container died 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:22:03 np0005481065 systemd[1]: libpod-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope: Consumed 1.145s CPU time.
Oct 11 05:22:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0-merged.mount: Deactivated successfully.
Oct 11 05:22:03 np0005481065 podman[390991]: 2025-10-11 09:22:03.47056023 +0000 UTC m=+1.501889825 container remove 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 11 05:22:03 np0005481065 systemd[1]: libpod-conmon-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope: Deactivated successfully.
Oct 11 05:22:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:22:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:22:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:22:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:22:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cd3acdd0-36c7-4010-a426-4b22c794fcbb does not exist
Oct 11 05:22:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1df60877-1629-45e6-9b6d-3b626773b40a does not exist
Oct 11 05:22:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.227 2 DEBUG nova.compute.manager [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.229 2 DEBUG oslo_concurrency.lockutils [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.229 2 DEBUG oslo_concurrency.lockutils [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.230 2 DEBUG oslo_concurrency.lockutils [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.230 2 DEBUG nova.compute.manager [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No event matching network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 in dict_keys([('network-vif-plugged', '30ec8ddb-e058-428a-a08a-817e1e452938')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.232 2 WARNING nova.compute.manager [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:22:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.588 2 DEBUG nova.compute.manager [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.588 2 DEBUG oslo_concurrency.lockutils [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.588 2 DEBUG oslo_concurrency.lockutils [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.589 2 DEBUG oslo_concurrency.lockutils [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.589 2 DEBUG nova.compute.manager [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Processing event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.589 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.599 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174524.5987594, d703a39a-f502-4bc4-895a-bb87752c83df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.599 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.601 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.604 2 INFO nova.virt.libvirt.driver [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance spawned successfully.#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.605 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.640 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.641 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.641 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.642 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.642 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.643 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.647 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.700 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.732 2 INFO nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 12.99 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.733 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.824 2 INFO nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 14.13 seconds to build instance.#033[00m
Oct 11 05:22:04 np0005481065 nova_compute[260935]: 2025-10-11 09:22:04.847 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005253101191638992 of space, bias 1.0, pg target 1.5759303574916974 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:22:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:22:05 np0005481065 nova_compute[260935]: 2025-10-11 09:22:05.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:06 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 05:22:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:22:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.3 total, 600.0 interval#012Cumulative writes: 29K writes, 117K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.81 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2981 writes, 12K keys, 2981 commit groups, 1.0 writes per commit group, ingest: 16.28 MB, 0.03 MB/s#012Interval WAL: 2981 writes, 1191 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:22:06 np0005481065 nova_compute[260935]: 2025-10-11 09:22:06.723 2 DEBUG nova.compute.manager [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:06 np0005481065 nova_compute[260935]: 2025-10-11 09:22:06.724 2 DEBUG oslo_concurrency.lockutils [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:06 np0005481065 nova_compute[260935]: 2025-10-11 09:22:06.724 2 DEBUG oslo_concurrency.lockutils [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:06 np0005481065 nova_compute[260935]: 2025-10-11 09:22:06.724 2 DEBUG oslo_concurrency.lockutils [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:06 np0005481065 nova_compute[260935]: 2025-10-11 09:22:06.725 2 DEBUG nova.compute.manager [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:06 np0005481065 nova_compute[260935]: 2025-10-11 09:22:06.725 2 WARNING nova.compute.manager [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:22:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 11 05:22:07 np0005481065 nova_compute[260935]: 2025-10-11 09:22:07.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 05:22:08 np0005481065 nova_compute[260935]: 2025-10-11 09:22:08.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 150 op/s
Oct 11 05:22:08 np0005481065 podman[391154]: 2025-10-11 09:22:08.813713158 +0000 UTC m=+0.102553896 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:22:08 np0005481065 nova_compute[260935]: 2025-10-11 09:22:08.847 2 DEBUG nova.compute.manager [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:08 np0005481065 nova_compute[260935]: 2025-10-11 09:22:08.847 2 DEBUG nova.compute.manager [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:22:08 np0005481065 nova_compute[260935]: 2025-10-11 09:22:08.848 2 DEBUG oslo_concurrency.lockutils [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:08 np0005481065 nova_compute[260935]: 2025-10-11 09:22:08.851 2 DEBUG oslo_concurrency.lockutils [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:08 np0005481065 nova_compute[260935]: 2025-10-11 09:22:08.851 2 DEBUG nova.network.neutron [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:22:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:10 np0005481065 nova_compute[260935]: 2025-10-11 09:22:10.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 11 05:22:10 np0005481065 nova_compute[260935]: 2025-10-11 09:22:10.925 2 DEBUG nova.compute.manager [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:10 np0005481065 nova_compute[260935]: 2025-10-11 09:22:10.926 2 DEBUG nova.compute.manager [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:22:10 np0005481065 nova_compute[260935]: 2025-10-11 09:22:10.927 2 DEBUG oslo_concurrency.lockutils [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:10 np0005481065 nova_compute[260935]: 2025-10-11 09:22:10.927 2 DEBUG oslo_concurrency.lockutils [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:10 np0005481065 nova_compute[260935]: 2025-10-11 09:22:10.927 2 DEBUG nova.network.neutron [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:22:11 np0005481065 nova_compute[260935]: 2025-10-11 09:22:11.187 2 DEBUG nova.network.neutron [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:11 np0005481065 nova_compute[260935]: 2025-10-11 09:22:11.188 2 DEBUG nova.network.neutron [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:11 np0005481065 nova_compute[260935]: 2025-10-11 09:22:11.928 2 DEBUG oslo_concurrency.lockutils [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:12 np0005481065 nova_compute[260935]: 2025-10-11 09:22:12.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:12 np0005481065 nova_compute[260935]: 2025-10-11 09:22:12.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 11 05:22:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 66 op/s
Oct 11 05:22:14 np0005481065 podman[391173]: 2025-10-11 09:22:14.834352894 +0000 UTC m=+0.123743553 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 05:22:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:15 np0005481065 nova_compute[260935]: 2025-10-11 09:22:15.408 2 DEBUG nova.network.neutron [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updated VIF entry in instance network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:15 np0005481065 nova_compute[260935]: 2025-10-11 09:22:15.410 2 DEBUG nova.network.neutron [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:15 np0005481065 nova_compute[260935]: 2025-10-11 09:22:15.480 2 DEBUG oslo_concurrency.lockutils [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:15 np0005481065 nova_compute[260935]: 2025-10-11 09:22:15.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:16Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:eb:db 10.100.0.6
Oct 11 05:22:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:16Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:eb:db 10.100.0.6
Oct 11 05:22:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 66 op/s
Oct 11 05:22:17 np0005481065 nova_compute[260935]: 2025-10-11 09:22:17.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 11 05:22:18 np0005481065 podman[391194]: 2025-10-11 09:22:18.845744408 +0000 UTC m=+0.133339204 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:22:18 np0005481065 podman[391195]: 2025-10-11 09:22:18.867720658 +0000 UTC m=+0.154566703 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 11 05:22:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:20 np0005481065 nova_compute[260935]: 2025-10-11 09:22:20.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 05:22:21 np0005481065 nova_compute[260935]: 2025-10-11 09:22:21.340 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:22 np0005481065 nova_compute[260935]: 2025-10-11 09:22:22.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 05:22:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:22:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:22:25 np0005481065 nova_compute[260935]: 2025-10-11 09:22:25.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:22:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/300202442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:22:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:22:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/300202442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:22:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:22:27 np0005481065 nova_compute[260935]: 2025-10-11 09:22:27.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 05:22:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:30 np0005481065 nova_compute[260935]: 2025-10-11 09:22:30.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 18 KiB/s wr, 1 op/s
Oct 11 05:22:31 np0005481065 nova_compute[260935]: 2025-10-11 09:22:31.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:31 np0005481065 nova_compute[260935]: 2025-10-11 09:22:31.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:32 np0005481065 nova_compute[260935]: 2025-10-11 09:22:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:32 np0005481065 nova_compute[260935]: 2025-10-11 09:22:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 22 KiB/s wr, 2 op/s
Oct 11 05:22:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:34 np0005481065 nova_compute[260935]: 2025-10-11 09:22:34.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.279 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.282 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.711 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.712 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.713 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.713 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.714 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.715 2 INFO nova.compute.manager [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Terminating instance#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.717 2 DEBUG nova.compute.manager [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.747 2 DEBUG nova.compute.manager [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.748 2 DEBUG nova.compute.manager [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.748 2 DEBUG oslo_concurrency.lockutils [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.749 2 DEBUG oslo_concurrency.lockutils [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.750 2 DEBUG nova.network.neutron [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:22:35 np0005481065 kernel: tap30ec8ddb-e0 (unregistering): left promiscuous mode
Oct 11 05:22:35 np0005481065 NetworkManager[44960]: <info>  [1760174555.8131] device (tap30ec8ddb-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:22:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:35Z|01239|binding|INFO|Releasing lport 30ec8ddb-e058-428a-a08a-817e1e452938 from this chassis (sb_readonly=0)
Oct 11 05:22:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:35Z|01240|binding|INFO|Setting lport 30ec8ddb-e058-428a-a08a-817e1e452938 down in Southbound
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:35Z|01241|binding|INFO|Removing iface tap30ec8ddb-e0 ovn-installed in OVS
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 kernel: tapa35d553f-94 (unregistering): left promiscuous mode
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.863 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:eb:db 10.100.0.6'], port_security=['fa:16:3e:97:eb:db 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30ec8ddb-e058-428a-a08a-817e1e452938) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.865 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30ec8ddb-e058-428a-a08a-817e1e452938 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 unbound from our chassis#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.871 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7#033[00m
Oct 11 05:22:35 np0005481065 NetworkManager[44960]: <info>  [1760174555.8742] device (tapa35d553f-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:35Z|01242|binding|INFO|Releasing lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 from this chassis (sb_readonly=0)
Oct 11 05:22:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:35Z|01243|binding|INFO|Setting lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 down in Southbound
Oct 11 05:22:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:35Z|01244|binding|INFO|Removing iface tapa35d553f-94 ovn-installed in OVS
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.910 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], port_security=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe62:2cec/64 2001:db8::f816:3eff:fe62:2cec/64', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a35d553f-941e-4ff2-bc8a-39419cc32ff2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:35 np0005481065 nova_compute[260935]: 2025-10-11 09:22:35.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.910 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44565fb9-8eee-4705-90ad-2c1d325f49f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:35 np0005481065 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 11 05:22:35 np0005481065 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000078.scope: Consumed 14.261s CPU time.
Oct 11 05:22:35 np0005481065 systemd-machined[215705]: Machine qemu-143-instance-00000078 terminated.
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.954 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e3be260a-6224-4460-92d6-18bc0cc4914c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.957 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ab945d87-fde4-4e1e-a55a-95443f22cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.996 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7039f9-2f66-4fc1-828c-22cff8725e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.016 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[821fdc58-b91a-4ee5-949f-38506678b97d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391260, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed5ca9d-2487-4f7c-8ac3-b109d2feaeaf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632567, 'tstamp': 632567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391261, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632571, 'tstamp': 632571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391261, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.036 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02690ac5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02690ac5-d0, col_values=(('external_ids', {'iface-id': 'bc2597de-285d-49b2-9e10-c93c87e43828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.049 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.051 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a35d553f-941e-4ff2-bc8a-39419cc32ff2 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.055 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87b272f-66b8-494e-ab80-c2ee66df15a2#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.074 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfce559-dd54-4a34-9f41-acf3ba3e80f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.117 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d80629-00a9-4b8e-aa17-7b6f282b6473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.122 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8e31e20e-2cd4-433e-93ae-feb7258ec122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 NetworkManager[44960]: <info>  [1760174556.1648] manager: (tapa35d553f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/502)
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.185 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[602a087e-7bb8-40d1-b15e-bebb34e64cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.183 2 INFO nova.virt.libvirt.driver [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance destroyed successfully.#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.184 2 DEBUG nova.objects.instance [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid d703a39a-f502-4bc4-895a-bb87752c83df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.217 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[378fb50b-9e69-438e-ac62-ef5471243aad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391288, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.236 2 DEBUG nova.virt.libvirt.vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:22:04Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.237 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.239 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.240 2 DEBUG os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[388610dd-2b1f-4280-a0c8-2ab8baa30228]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape87b272f-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632680, 'tstamp': 632680}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391289, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.243 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ec8ddb-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.244 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.254 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87b272f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.254 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.255 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87b272f-60, col_values=(('external_ids', {'iface-id': 'af33797e-d57d-45c8-92d7-86ea03fdf1ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.255 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.256 2 INFO os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0')#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.256 2 DEBUG nova.virt.libvirt.vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:22:04Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.257 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.258 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.258 2 DEBUG os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa35d553f-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.265 2 INFO os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94')#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.296 2 DEBUG nova.compute.manager [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.297 2 DEBUG oslo_concurrency.lockutils [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.298 2 DEBUG oslo_concurrency.lockutils [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.298 2 DEBUG oslo_concurrency.lockutils [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.299 2 DEBUG nova.compute.manager [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-unplugged-30ec8ddb-e058-428a-a08a-817e1e452938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.299 2 DEBUG nova.compute.manager [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-30ec8ddb-e058-428a-a08a-817e1e452938 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.441 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.442 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.536 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.731 2 INFO nova.virt.libvirt.driver [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deleting instance files /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df_del#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.733 2 INFO nova.virt.libvirt.driver [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deletion of /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df_del complete#033[00m
Oct 11 05:22:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.794 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.795 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.807 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.807 2 INFO nova.compute.claims [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.996 2 INFO nova.compute.manager [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 1.28 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.997 2 DEBUG oslo.service.loopingcall [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.998 2 DEBUG nova.compute.manager [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:22:36 np0005481065 nova_compute[260935]: 2025-10-11 09:22:36.998 2 DEBUG nova.network.neutron [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.215 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.465 2 DEBUG nova.network.neutron [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updated VIF entry in instance network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.467 2 DEBUG nova.network.neutron [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.625 2 DEBUG oslo_concurrency.lockutils [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:22:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858889742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.722 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.729 2 DEBUG nova.compute.provider_tree [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.771 2 DEBUG nova.scheduler.client.report [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.866 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.867 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.893 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.893 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.894 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.894 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.895 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-unplugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.895 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.896 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.896 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.897 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.897 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.898 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.898 2 WARNING nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.989 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:22:37 np0005481065 nova_compute[260935]: 2025-10-11 09:22:37.990 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.044 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.139 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.165 2 DEBUG nova.policy [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '67e20c1f7ae24f2f8b9e25e0d8ce61ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a13210f275984f3eadf85eba0c749d99', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.447 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.449 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.450 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating image(s)#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.485 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.530 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.563 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.567 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.631 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.632 2 DEBUG oslo_concurrency.lockutils [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.632 2 DEBUG oslo_concurrency.lockutils [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.632 2 DEBUG oslo_concurrency.lockutils [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.633 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.633 2 WARNING nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.634 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-deleted-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.634 2 INFO nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Neutron deleted interface a35d553f-941e-4ff2-bc8a-39419cc32ff2; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.634 2 DEBUG nova.network.neutron [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.695 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.697 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.698 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.699 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.734 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.739 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ef21f945-0076-48fa-8d22-c5376e26d278_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.788 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 566 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 27 KiB/s wr, 32 op/s
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.789 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.791 2 DEBUG nova.network.neutron [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.795 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Detach interface failed, port_id=a35d553f-941e-4ff2-bc8a-39419cc32ff2, reason: Instance d703a39a-f502-4bc4-895a-bb87752c83df could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:22:38 np0005481065 nova_compute[260935]: 2025-10-11 09:22:38.913 2 INFO nova.compute.manager [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 1.91 seconds to deallocate network for instance.#033[00m
Oct 11 05:22:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.060 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.061 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.110 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.110 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.111 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.115 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ef21f945-0076-48fa-8d22-c5376e26d278_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.213 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] resizing rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.319 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Successfully created port: 0f516e4b-c284-4151-944c-8a7d98f695b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.327 2 DEBUG nova.objects.instance [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'migration_context' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.381 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.381 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Ensure instance console log exists: /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.381 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.382 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.382 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.395 2 DEBUG oslo_concurrency.processutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:39 np0005481065 podman[391519]: 2025-10-11 09:22:39.806552147 +0000 UTC m=+0.094859888 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 05:22:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:22:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187125492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.879 2 DEBUG oslo_concurrency.processutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.889 2 DEBUG nova.compute.provider_tree [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.911 2 DEBUG nova.scheduler.client.report [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.939 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:39 np0005481065 nova_compute[260935]: 2025-10-11 09:22:39.970 2 INFO nova.scheduler.client.report [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance d703a39a-f502-4bc4-895a-bb87752c83df#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.068 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.163 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Successfully updated port: 0f516e4b-c284-4151-944c-8a7d98f695b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.181 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.182 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.182 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.282 2 DEBUG nova.compute.manager [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.283 2 DEBUG nova.compute.manager [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.283 2 DEBUG oslo_concurrency.lockutils [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.375 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.583 2 DEBUG nova.compute.manager [req-5ccec58f-ae6a-4e04-b2db-ddc7d8a55cac req-1d109231-e42a-4520-ab03-a6a11dda4a73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-deleted-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.688 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.705 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:22:40 np0005481065 nova_compute[260935]: 2025-10-11 09:22:40.726 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 566 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 31 op/s
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.039 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.040 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.040 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.040 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.041 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.042 2 INFO nova.compute.manager [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Terminating instance#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.043 2 DEBUG nova.compute.manager [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:22:41 np0005481065 kernel: tap9669f110-04 (unregistering): left promiscuous mode
Oct 11 05:22:41 np0005481065 NetworkManager[44960]: <info>  [1760174561.1120] device (tap9669f110-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01245|binding|INFO|Releasing lport 9669f110-042a-40c6-b7a4-8d78d421ed23 from this chassis (sb_readonly=0)
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01246|binding|INFO|Setting lport 9669f110-042a-40c6-b7a4-8d78d421ed23 down in Southbound
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01247|binding|INFO|Removing iface tap9669f110-04 ovn-installed in OVS
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 kernel: tap5a14fd50-c9 (unregistering): left promiscuous mode
Oct 11 05:22:41 np0005481065 NetworkManager[44960]: <info>  [1760174561.2127] device (tap5a14fd50-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01248|binding|INFO|Releasing lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 from this chassis (sb_readonly=1)
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01249|binding|INFO|Removing iface tap5a14fd50-c9 ovn-installed in OVS
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01250|if_status|INFO|Dropped 1 log messages in last 901 seconds (most recently, 901 seconds ago) due to excessive rate
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01251|if_status|INFO|Not setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 down as sb is readonly
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:22:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850151861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:22:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:41Z|01252|binding|INFO|Setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 down in Southbound
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.233 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:54:35 10.100.0.11'], port_security=['fa:16:3e:eb:54:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9669f110-042a-40c6-b7a4-8d78d421ed23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.236 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9669f110-042a-40c6-b7a4-8d78d421ed23 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 unbound from our chassis#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.239 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.241 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], port_security=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feed:83c1/64 2001:db8::f816:3eff:feed:83c1/64', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8d25bc-1011-4302-b851-d26cf887c89f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 namespace which is not needed anymore#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.257 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 11 05:22:41 np0005481065 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000076.scope: Consumed 16.770s CPU time.
Oct 11 05:22:41 np0005481065 systemd-machined[215705]: Machine qemu-141-instance-00000076 terminated.
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : haproxy version is 2.8.14-c23fe91
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : path to executable is /usr/sbin/haproxy
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [WARNING]  (389183) : Exiting Master process...
Oct 11 05:22:41 np0005481065 systemd[1]: libpod-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope: Deactivated successfully.
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [ALERT]    (389183) : Current worker (389186) exited with code 143 (Terminated)
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [WARNING]  (389183) : All workers exited. Exiting... (0)
Oct 11 05:22:41 np0005481065 conmon[389155]: conmon a30fa67bcb3a4967e8ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope/container/memory.events
Oct 11 05:22:41 np0005481065 podman[391591]: 2025-10-11 09:22:41.409941966 +0000 UTC m=+0.058561183 container died a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:22:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1-userdata-shm.mount: Deactivated successfully.
Oct 11 05:22:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-45a43bab447974501414b98be50feaa29e43ab1d6b08cf88962b6450fef6fae6-merged.mount: Deactivated successfully.
Oct 11 05:22:41 np0005481065 podman[391591]: 2025-10-11 09:22:41.455599495 +0000 UTC m=+0.104218652 container cleanup a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 05:22:41 np0005481065 systemd[1]: libpod-conmon-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope: Deactivated successfully.
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.503 2 INFO nova.virt.libvirt.driver [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance destroyed successfully.#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.503 2 DEBUG nova.objects.instance [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid fcb45648-eb7b-4975-9f50-08675a787d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.506 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.522 2 DEBUG nova.virt.libvirt.vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:21:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:21:24Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.523 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.524 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.524 2 DEBUG os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9669f110-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.540 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.540 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance network_info: |[{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.540 2 DEBUG oslo_concurrency.lockutils [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.541 2 DEBUG nova.network.neutron [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.543 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start _get_guest_xml network_info=[{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.544 2 INFO os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04')#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.544 2 DEBUG nova.virt.libvirt.vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:21:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:21:24Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.545 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.545 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.546 2 DEBUG os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a14fd50-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 podman[391624]: 2025-10-11 09:22:41.552493289 +0000 UTC m=+0.064504951 container remove a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.559 2 INFO os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9')#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.563 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e36d7dad-7a0a-433d-8bf2-63d5e606eb8c]: (4, ('Sat Oct 11 09:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 (a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1)\na30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1\nSat Oct 11 09:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 (a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1)\na30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.566 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e2a0e6-3443-4112-bea1-46a6cf561970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.567 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:41 np0005481065 kernel: tap02690ac5-d0: left promiscuous mode
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.575 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a90b3171-e341-483a-b232-bece09fd1d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.584 2 WARNING nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.599 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.600 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.603 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.604 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.604 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.604 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.605 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.605 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.605 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.606 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.606 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.606 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.607 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.607 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.607 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.608 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.612 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.612 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeb8f00-5b19-45f4-afbe-43473ab24eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.614 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[81eada51-7a4b-43e2-98a2-de506b56f798]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.641 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5c4799-1a59-44db-87f6-24c0791b12a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632545, 'reachable_time': 18072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391670, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.647 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.647 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3a0dd5-6271-43ad-ad69-583864d49539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.649 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis#033[00m
Oct 11 05:22:41 np0005481065 systemd[1]: run-netns-ovnmeta\x2d02690ac5\x2dd004\x2d4c7d\x2db780\x2de5fed29e0aa7.mount: Deactivated successfully.
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.654 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87b272f-66b8-494e-ab80-c2ee66df15a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.655 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[960da418-9d7b-413b-b9f3-b605df035495]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.656 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 namespace which is not needed anymore#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.716 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.716 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.724 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.725 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.725 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.731 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.731 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.739 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.739 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.747 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.747 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.752 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 nova_compute[260935]: 2025-10-11 09:22:41.753 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : haproxy version is 2.8.14-c23fe91
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : path to executable is /usr/sbin/haproxy
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [WARNING]  (389257) : Exiting Master process...
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [WARNING]  (389257) : Exiting Master process...
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [ALERT]    (389257) : Current worker (389259) exited with code 143 (Terminated)
Oct 11 05:22:41 np0005481065 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [WARNING]  (389257) : All workers exited. Exiting... (0)
Oct 11 05:22:41 np0005481065 systemd[1]: libpod-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026.scope: Deactivated successfully.
Oct 11 05:22:41 np0005481065 podman[391691]: 2025-10-11 09:22:41.86009296 +0000 UTC m=+0.065899601 container died b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:22:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026-userdata-shm.mount: Deactivated successfully.
Oct 11 05:22:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c17045466eb5e9b60860258040d0d3e0755d8193eccf00a44cefc9a37c24b920-merged.mount: Deactivated successfully.
Oct 11 05:22:41 np0005481065 podman[391691]: 2025-10-11 09:22:41.909012901 +0000 UTC m=+0.114819552 container cleanup b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:22:41 np0005481065 systemd[1]: libpod-conmon-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026.scope: Deactivated successfully.
Oct 11 05:22:42 np0005481065 podman[391739]: 2025-10-11 09:22:42.015923448 +0000 UTC m=+0.065133459 container remove b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6366c50f-61c1-4185-bf01-3236959a8958]: (4, ('Sat Oct 11 09:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 (b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026)\nb8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026\nSat Oct 11 09:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 (b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026)\nb8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.027 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6eab01-7757-46b5-bc9f-068e0fb48898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.028 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 kernel: tape87b272f-60: left promiscuous mode
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[98767196-3202-4c97-b217-3810d30a4426]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ace3161f-e3f0-4357-860d-07b4f168a12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20f8af05-b1f6-4a6c-bc5a-8664ec47364c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.098 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd14abb5-9258-4a48-99d4-12710c1b0f5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632655, 'reachable_time': 31666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391754, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.100 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:22:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.100 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ae52e7ea-a361-4e70-9405-3db4d0b82c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.125 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2446MB free_disk=59.69379425048828GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.126 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.126 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.166 2 INFO nova.virt.libvirt.driver [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deleting instance files /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c_del#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.167 2 INFO nova.virt.libvirt.driver [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deletion of /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c_del complete#033[00m
Oct 11 05:22:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:22:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544785565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.210 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.239 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.245 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.294 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance fcb45648-eb7b-4975-9f50-08675a787d9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c677661f-bc62-4954-9130-09b285e6abe4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ef21f945-0076-48fa-8d22-c5376e26d278 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.304 2 INFO nova.compute.manager [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.305 2 DEBUG oslo.service.loopingcall [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.305 2 DEBUG nova.compute.manager [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.305 2 DEBUG nova.network.neutron [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:22:42 np0005481065 systemd[1]: run-netns-ovnmeta\x2de87b272f\x2d66b8\x2d494e\x2dab80\x2dc2ee66df15a2.mount: Deactivated successfully.
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.462 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:22:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661389353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.662 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.663 2 DEBUG nova.virt.libvirt.vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:22:38Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.664 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.665 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.666 2 DEBUG nova.objects.instance [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.688 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <uuid>ef21f945-0076-48fa-8d22-c5376e26d278</uuid>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <name>instance-00000079</name>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestShelveInstance-server-1752003988</nova:name>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:22:41</nova:creationTime>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:user uuid="67e20c1f7ae24f2f8b9e25e0d8ce61ca">tempest-TestShelveInstance-243029510-project-member</nova:user>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:project uuid="a13210f275984f3eadf85eba0c749d99">tempest-TestShelveInstance-243029510</nova:project>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <nova:port uuid="0f516e4b-c284-4151-944c-8a7d98f695b5">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <entry name="serial">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <entry name="uuid">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk.config">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:4b:08:14"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <target dev="tap0f516e4b-c2"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log" append="off"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:22:42 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:22:42 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:22:42 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:22:42 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.689 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Preparing to wait for external event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.689 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.690 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.690 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.691 2 DEBUG nova.virt.libvirt.vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:22:38Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.691 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.692 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.692 2 DEBUG os_vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f516e4b-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f516e4b-c2, col_values=(('external_ids', {'iface-id': '0f516e4b-c284-4151-944c-8a7d98f695b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:08:14', 'vm-uuid': 'ef21f945-0076-48fa-8d22-c5376e26d278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 NetworkManager[44960]: <info>  [1760174562.6995] manager: (tap0f516e4b-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.711 2 INFO os_vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.715 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.715 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.716 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.716 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.716 2 DEBUG nova.network.neutron [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.784 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.785 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.785 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No VIF found with MAC fa:16:3e:4b:08:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.786 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Using config drive#033[00m
Oct 11 05:22:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.820 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:22:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558495703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.896 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.904 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.926 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.957 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:22:42 np0005481065 nova_compute[260935]: 2025-10-11 09:22:42.957 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.404 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating config drive at /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.414 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwknq21d5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.575 2 DEBUG nova.network.neutron [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.576 2 DEBUG nova.network.neutron [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.594 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwknq21d5" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.644 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.650 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.700 2 DEBUG oslo_concurrency.lockutils [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.737 2 DEBUG nova.network.neutron [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.755 2 INFO nova.compute.manager [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 1.45 seconds to deallocate network for instance.#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.803 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.804 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.814 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.815 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting local config drive /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config because it was imported into RBD.#033[00m
Oct 11 05:22:43 np0005481065 kernel: tap0f516e4b-c2: entered promiscuous mode
Oct 11 05:22:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:43Z|01253|binding|INFO|Claiming lport 0f516e4b-c284-4151-944c-8a7d98f695b5 for this chassis.
Oct 11 05:22:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:43Z|01254|binding|INFO|0f516e4b-c284-4151-944c-8a7d98f695b5: Claiming fa:16:3e:4b:08:14 10.100.0.10
Oct 11 05:22:43 np0005481065 NetworkManager[44960]: <info>  [1760174563.8740] manager: (tap0f516e4b-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:43 np0005481065 systemd-udevd[391562]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.892 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:43 np0005481065 NetworkManager[44960]: <info>  [1760174563.8965] device (tap0f516e4b-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:22:43 np0005481065 NetworkManager[44960]: <info>  [1760174563.8971] device (tap0f516e4b-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.895 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 bound to our chassis#033[00m
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0007a0de-db42-4add-9b55-6d92ceffa860#033[00m
Oct 11 05:22:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:43Z|01255|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 ovn-installed in OVS
Oct 11 05:22:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:43Z|01256|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 up in Southbound
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.920 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82b887c8-3f0e-450b-923d-b0dc9735afa0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.922 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0007a0de-d1 in ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.924 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0007a0de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.925 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63c0b6e5-664a-436b-865d-bea59c46dd6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.928 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75515830-fd0a-4e05-b265-7c7e589e86bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:43 np0005481065 systemd-machined[215705]: New machine qemu-144-instance-00000079.
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.947 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1123e63f-e0bb-41af-ac74-3ee6651d3196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:43 np0005481065 systemd[1]: Started Virtual Machine qemu-144-instance-00000079.
Oct 11 05:22:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.982 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52e6c2a6-1e72-4704-b005-9810ef2a18d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:43 np0005481065 nova_compute[260935]: 2025-10-11 09:22:43.996 2 DEBUG oslo_concurrency.processutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.033 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10f7ace6-0b2e-44ed-92e8-429f5997668f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 NetworkManager[44960]: <info>  [1760174564.0473] manager: (tap0007a0de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/505)
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.046 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e604f0-6e62-403f-9ec6-771e3a63ccc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.085 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[503888d6-c257-4bd4-a143-4d91afb36f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.089 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ea09c9-f00f-44ba-b8a8-9a35641ad272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 NetworkManager[44960]: <info>  [1760174564.1214] device (tap0007a0de-d0): carrier: link connected
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.131 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6cbdc7-ba13-449d-9d2e-088905dd0e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.157 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56e76348-9088-49bb-b809-58c6f7baaec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640882, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391924, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.184 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f831581-c7d9-4d0e-9722-030b44e9bbc2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:ca09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640882, 'tstamp': 640882}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391935, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.206 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a146e5d8-9867-42ef-a605-8a797796669c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640882, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391945, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.260 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[66bc9abd-c121-4ca0-b5f2-fd7fa8b5bb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.365 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10b260a2-2a72-4e24-a32e-6e3bb6ef1870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.367 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.367 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.368 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0007a0de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:44 np0005481065 kernel: tap0007a0de-d0: entered promiscuous mode
Oct 11 05:22:44 np0005481065 NetworkManager[44960]: <info>  [1760174564.3719] manager: (tap0007a0de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.379 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0007a0de-d0, col_values=(('external_ids', {'iface-id': 'e89e8ea5-8038-433d-8c45-d2ec20f4f896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:44Z|01257|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.401 2 DEBUG nova.network.neutron [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updated VIF entry in instance network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.402 2 DEBUG nova.network.neutron [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.421 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.422 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[352331b4-2a7f-4926-8d1b-e7be1fde5442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.424 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:22:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.425 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'env', 'PROCESS_TAG=haproxy-0007a0de-db42-4add-9b55-6d92ceffa860', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0007a0de-db42-4add-9b55-6d92ceffa860.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.431 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.431 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-unplugged-9669f110-042a-40c6-b7a4-8d78d421ed23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-9669f110-042a-40c6-b7a4-8d78d421ed23 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.434 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.434 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.434 2 WARNING nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.434 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-unplugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.437 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.437 2 WARNING nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:22:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:22:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298502596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.525 2 DEBUG oslo_concurrency.processutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.534 2 DEBUG nova.compute.provider_tree [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.553 2 DEBUG nova.scheduler.client.report [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.583 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.620 2 INFO nova.scheduler.client.report [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance fcb45648-eb7b-4975-9f50-08675a787d9c#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.735 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.850 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-deleted-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.851 2 INFO nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Neutron deleted interface 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.852 2 DEBUG nova.network.neutron [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.855 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Detach interface failed, port_id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9, reason: Instance fcb45648-eb7b-4975-9f50-08675a787d9c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.856 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-deleted-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.856 2 INFO nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Neutron deleted interface 9669f110-042a-40c6-b7a4-8d78d421ed23; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.856 2 DEBUG nova.network.neutron [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.858 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Detach interface failed, port_id=9669f110-042a-40c6-b7a4-8d78d421ed23, reason: Instance fcb45648-eb7b-4975-9f50-08675a787d9c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.859 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.859 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.859 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Processing event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.861 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.861 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.861 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:44 np0005481065 nova_compute[260935]: 2025-10-11 09:22:44.861 2 WARNING nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:22:44 np0005481065 podman[392022]: 2025-10-11 09:22:44.942608701 +0000 UTC m=+0.058666937 container create 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 05:22:45 np0005481065 systemd[1]: Started libpod-conmon-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f.scope.
Oct 11 05:22:45 np0005481065 podman[392022]: 2025-10-11 09:22:44.91177324 +0000 UTC m=+0.027831456 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174565.012413, ef21f945-0076-48fa-8d22-c5376e26d278 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.014 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Started (Lifecycle Event)#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.017 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.022 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.028 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance spawned successfully.#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.029 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:22:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f47c3c136e5a0070dae877380d4b02cb73021141c299a51e87e245e6899f06b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.050 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.057 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.058 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.058 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.059 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.060 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.060 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:22:45 np0005481065 podman[392022]: 2025-10-11 09:22:45.064047588 +0000 UTC m=+0.180105834 container init 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.071 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:22:45 np0005481065 podman[392022]: 2025-10-11 09:22:45.072296871 +0000 UTC m=+0.188355087 container start 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.072 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174565.0125074, ef21f945-0076-48fa-8d22-c5376e26d278 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.072 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:22:45 np0005481065 podman[392036]: 2025-10-11 09:22:45.079793902 +0000 UTC m=+0.094293842 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.093 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.096 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174565.0221415, ef21f945-0076-48fa-8d22-c5376e26d278 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.096 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:22:45 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : New worker (392064) forked
Oct 11 05:22:45 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : Loading success.
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.130 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.134 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.151 2 INFO nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 6.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.151 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.185 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.225 2 INFO nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 8.56 seconds to build instance.#033[00m
Oct 11 05:22:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:45.284 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:45 np0005481065 nova_compute[260935]: 2025-10-11 09:22:45.292 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 05:22:47 np0005481065 nova_compute[260935]: 2025-10-11 09:22:47.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:47 np0005481065 nova_compute[260935]: 2025-10-11 09:22:47.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Oct 11 05:22:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:49 np0005481065 podman[392073]: 2025-10-11 09:22:49.802637223 +0000 UTC m=+0.103609195 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:22:49 np0005481065 podman[392074]: 2025-10-11 09:22:49.925466509 +0000 UTC m=+0.212292572 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:22:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.180 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174556.1787422, d703a39a-f502-4bc4-895a-bb87752c83df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.180 2 INFO nova.compute.manager [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.210 2 DEBUG nova.compute.manager [None req-2c589f31-d519-4157-9f3c-2ba626c48f24 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.285 2 DEBUG nova.compute.manager [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.286 2 DEBUG nova.compute.manager [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.286 2 DEBUG oslo_concurrency.lockutils [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.286 2 DEBUG oslo_concurrency.lockutils [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:51 np0005481065 nova_compute[260935]: 2025-10-11 09:22:51.287 2 DEBUG nova.network.neutron [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:22:52 np0005481065 nova_compute[260935]: 2025-10-11 09:22:52.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:52Z|01258|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 05:22:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:52Z|01259|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:22:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:52Z|01260|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 05:22:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:52Z|01261|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 05:22:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:52Z|01262|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:22:52 np0005481065 nova_compute[260935]: 2025-10-11 09:22:52.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:52 np0005481065 nova_compute[260935]: 2025-10-11 09:22:52.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:52 np0005481065 nova_compute[260935]: 2025-10-11 09:22:52.778 2 DEBUG nova.network.neutron [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:22:52 np0005481065 nova_compute[260935]: 2025-10-11 09:22:52.778 2 DEBUG nova.network.neutron [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 05:22:52 np0005481065 nova_compute[260935]: 2025-10-11 09:22:52.802 2 DEBUG oslo_concurrency.lockutils [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.860 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.861 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.861 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.862 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.862 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.864 2 INFO nova.compute.manager [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Terminating instance#033[00m
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.865 2 DEBUG nova.compute.manager [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:22:53 np0005481065 kernel: tapf09cab39-50 (unregistering): left promiscuous mode
Oct 11 05:22:53 np0005481065 NetworkManager[44960]: <info>  [1760174573.9415] device (tapf09cab39-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:22:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:53Z|01263|binding|INFO|Releasing lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a from this chassis (sb_readonly=0)
Oct 11 05:22:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:53Z|01264|binding|INFO|Setting lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a down in Southbound
Oct 11 05:22:53 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:53Z|01265|binding|INFO|Removing iface tapf09cab39-50 ovn-installed in OVS
Oct 11 05:22:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.976 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:57:ba 10.100.0.19'], port_security=['fa:16:3e:29:57:ba 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'c677661f-bc62-4954-9130-09b285e6abe4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8bcfc965-4dfd-4911-b600-4d86bbeab7bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:22:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.977 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a in datapath 428d818e-c08a-4eef-be62-24fe484fed05 unbound from our chassis#033[00m
Oct 11 05:22:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.978 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 428d818e-c08a-4eef-be62-24fe484fed05#033[00m
Oct 11 05:22:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.994 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8082b36c-3b0b-4d82-a88a-27378c196b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:53.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:54 np0005481065 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 11 05:22:54 np0005481065 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000077.scope: Consumed 15.303s CPU time.
Oct 11 05:22:54 np0005481065 systemd-machined[215705]: Machine qemu-142-instance-00000077 terminated.
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.027 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[841702fa-e64d-4594-a793-0a131ff2356b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.032 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8c08081a-9811-454a-bae4-98ddae44885e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.061 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a45cdab2-9bc2-43ef-87e5-1e4796456b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.085 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[633230b9-8838-4155-bc30-390419363578]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1042, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1042, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392131, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.107 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d4775a-f148-4134-9026-33f86d9d07fe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633388, 'tstamp': 633388}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392135, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633391, 'tstamp': 633391}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392135, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.109 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap428d818e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap428d818e-c0, col_values=(('external_ids', {'iface-id': 'f0c2b309-d39a-4d01-be06-bdc63deb27f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.116 2 INFO nova.virt.libvirt.driver [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance destroyed successfully.#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.116 2 DEBUG nova.objects.instance [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid c677661f-bc62-4954-9130-09b285e6abe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.131 2 DEBUG nova.virt.libvirt.vif [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1301201135',display_name='tempest-TestNetworkBasicOps-server-1301201135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1301201135',id=119,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAFMu8G7LIufdrCxIhBQQCul5ZF6SEzch6wK6wL4IRYDngjbMt9CFdOpjTGBGp3BV5OfwWRm0v8OrQ6qzwsg+DrsRLw4XZQAC1nE0Ke58JBZ4nmeeruu5Ghd9xdLrk6Odw==',key_name='tempest-TestNetworkBasicOps-1626339166',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-1lilo2wr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:21:49Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=c677661f-bc62-4954-9130-09b285e6abe4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.132 2 DEBUG nova.network.os_vif_util [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.133 2 DEBUG nova.network.os_vif_util [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.134 2 DEBUG os_vif [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09cab39-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.144 2 INFO os_vif [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50')#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.234 2 DEBUG nova.compute.manager [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-unplugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.234 2 DEBUG oslo_concurrency.lockutils [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.234 2 DEBUG oslo_concurrency.lockutils [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.235 2 DEBUG oslo_concurrency.lockutils [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.235 2 DEBUG nova.compute.manager [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] No waiting events found dispatching network-vif-unplugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.235 2 DEBUG nova.compute.manager [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-unplugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.599 2 INFO nova.virt.libvirt.driver [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deleting instance files /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4_del#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.600 2 INFO nova.virt.libvirt.driver [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deletion of /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4_del complete#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.689 2 INFO nova.compute.manager [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.690 2 DEBUG oslo.service.loopingcall [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.690 2 DEBUG nova.compute.manager [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:22:54 np0005481065 nova_compute[260935]: 2025-10-11 09:22:54.691 2 DEBUG nova.network.neutron [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:22:54
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.control']
Oct 11 05:22:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:22:55 np0005481065 nova_compute[260935]: 2025-10-11 09:22:55.339 2 DEBUG nova.network.neutron [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:55 np0005481065 nova_compute[260935]: 2025-10-11 09:22:55.356 2 INFO nova.compute.manager [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 0.66 seconds to deallocate network for instance.#033[00m
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:22:55 np0005481065 nova_compute[260935]: 2025-10-11 09:22:55.403 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:55 np0005481065 nova_compute[260935]: 2025-10-11 09:22:55.404 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:22:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:22:55 np0005481065 nova_compute[260935]: 2025-10-11 09:22:55.417 2 DEBUG nova.compute.manager [req-a508f92c-837e-43bd-b5db-9614c2938312 req-59825ce6-c679-4cd8-a85e-d0b6caea822f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-deleted-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:55 np0005481065 nova_compute[260935]: 2025-10-11 09:22:55.557 2 DEBUG oslo_concurrency.processutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:22:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:22:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362224432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.111 2 DEBUG oslo_concurrency.processutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.119 2 DEBUG nova.compute.provider_tree [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.138 2 DEBUG nova.scheduler.client.report [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.160 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.192 2 INFO nova.scheduler.client.report [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance c677661f-bc62-4954-9130-09b285e6abe4#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.259 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.348 2 DEBUG nova.compute.manager [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG oslo_concurrency.lockutils [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG oslo_concurrency.lockutils [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG oslo_concurrency.lockutils [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG nova.compute.manager [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] No waiting events found dispatching network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.350 2 WARNING nova.compute.manager [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received unexpected event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.499 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174561.4984128, fcb45648-eb7b-4975-9f50-08675a787d9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.500 2 INFO nova.compute.manager [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:22:56 np0005481065 nova_compute[260935]: 2025-10-11 09:22:56.521 2 DEBUG nova.compute.manager [None req-3ddd017b-90e9-4e7d-b87a-15c2a87c6696 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:22:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:57Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:08:14 10.100.0.10
Oct 11 05:22:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:57Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:08:14 10.100.0.10
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.591 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-300c071c-a312-4a9b-bd7a-1b16b9a35ae6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.591 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-300c071c-a312-4a9b-bd7a-1b16b9a35ae6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.609 2 DEBUG nova.objects.instance [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.627 2 DEBUG nova.virt.libvirt.vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.628 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.629 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.634 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.638 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.643 2 DEBUG nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Attempting to detach device tap300c071c-a3 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.643 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:f8:68:88"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <target dev="tap300c071c-a3"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.736 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.743 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <name>instance-00000075</name>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:21:28</nova:creationTime>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:port uuid="300c071c-a312-4a9b-bd7a-1b16b9a35ae6">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='tap6aa7ac72-3e'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:f8:68:88'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='tap300c071c-a3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='net1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.746 2 INFO nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap300c071c-a3 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the persistent domain config.
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.746 2 DEBUG nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] (1/8): Attempting to detach device tap300c071c-a3 with device alias net1 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.747 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:f8:68:88"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <target dev="tap300c071c-a3"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 05:22:57 np0005481065 kernel: tap300c071c-a3 (unregistering): left promiscuous mode
Oct 11 05:22:57 np0005481065 NetworkManager[44960]: <info>  [1760174577.8576] device (tap300c071c-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.870 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760174577.8703454, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 05:22:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:57Z|01266|binding|INFO|Releasing lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 from this chassis (sb_readonly=0)
Oct 11 05:22:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:57Z|01267|binding|INFO|Setting lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 down in Southbound
Oct 11 05:22:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:22:57Z|01268|binding|INFO|Removing iface tap300c071c-a3 ovn-installed in OVS
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.873 2 DEBUG nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Start waiting for the detach event from libvirt for device tap300c071c-a3 with device alias net1 for instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.874 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:22:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.881 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:68:88 10.100.0.30', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=300c071c-a312-4a9b-bd7a-1b16b9a35ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:22:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.883 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 in datapath 428d818e-c08a-4eef-be62-24fe484fed05 unbound from our chassis
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.886 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <name>instance-00000075</name>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:21:28</nova:creationTime>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:port uuid="300c071c-a312-4a9b-bd7a-1b16b9a35ae6">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target dev='tap6aa7ac72-3e'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.887 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 428d818e-c08a-4eef-be62-24fe484fed05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:22:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.888 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6617feeb-ad2d-48b3-8829-e7b831fe58f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 namespace which is not needed anymore#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.886 2 INFO nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap300c071c-a3 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the live domain config.#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.887 2 DEBUG nova.virt.libvirt.vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.888 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.889 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.889 2 DEBUG os_vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap300c071c-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.904 2 INFO os_vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3')#033[00m
Oct 11 05:22:57 np0005481065 nova_compute[260935]: 2025-10-11 09:22:57.905 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:22:57</nova:creationTime>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:22:57 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:57 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:22:57 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:22:58 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : haproxy version is 2.8.14-c23fe91
Oct 11 05:22:58 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : path to executable is /usr/sbin/haproxy
Oct 11 05:22:58 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [WARNING]  (389354) : Exiting Master process...
Oct 11 05:22:58 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [WARNING]  (389354) : Exiting Master process...
Oct 11 05:22:58 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [ALERT]    (389354) : Current worker (389356) exited with code 143 (Terminated)
Oct 11 05:22:58 np0005481065 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [WARNING]  (389354) : All workers exited. Exiting... (0)
Oct 11 05:22:58 np0005481065 systemd[1]: libpod-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f.scope: Deactivated successfully.
Oct 11 05:22:58 np0005481065 podman[392208]: 2025-10-11 09:22:58.041576801 +0000 UTC m=+0.053897982 container died 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:22:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4e5d17837c87f5f50f3b76a7653ef33e05fc19c9a00cc47d414f88016a25e132-merged.mount: Deactivated successfully.
Oct 11 05:22:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f-userdata-shm.mount: Deactivated successfully.
Oct 11 05:22:58 np0005481065 podman[392208]: 2025-10-11 09:22:58.112087881 +0000 UTC m=+0.124409082 container cleanup 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:22:58 np0005481065 systemd[1]: libpod-conmon-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f.scope: Deactivated successfully.
Oct 11 05:22:58 np0005481065 podman[392237]: 2025-10-11 09:22:58.240502065 +0000 UTC m=+0.066865470 container remove 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.248 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[002f844d-f101-45d7-8198-0b28142d38c1]: (4, ('Sat Oct 11 09:22:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 (3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f)\n3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f\nSat Oct 11 09:22:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 (3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f)\n3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.250 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3d00b9-b425-412d-9fa6-e927ccd78cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.251 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:58 np0005481065 kernel: tap428d818e-c0: left promiscuous mode
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42430a25-27fe-4853-afb8-b0d471e62259]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4968a1f4-e46a-4c38-b96f-00d8329b89b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18e519f7-ce44-4d7e-adb0-4fdd6dcd922f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c9110da9-b25f-4b9c-bb44-7fb7bc0d313a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633363, 'reachable_time': 29039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392252, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.344 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:22:58 np0005481065 systemd[1]: run-netns-ovnmeta\x2d428d818e\x2dc08a\x2d4eef\x2dbe62\x2d24fe484fed05.mount: Deactivated successfully.
Oct 11 05:22:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.344 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[892f9552-e3a3-4e71-83d4-a9025c916b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.403 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.403 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.404 2 DEBUG nova.network.neutron [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.464 2 DEBUG nova.compute.manager [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-deleted-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.464 2 INFO nova.compute.manager [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Neutron deleted interface 300c071c-a312-4a9b-bd7a-1b16b9a35ae6; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.465 2 DEBUG nova.network.neutron [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.469 2 DEBUG nova.compute.manager [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-unplugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.470 2 DEBUG oslo_concurrency.lockutils [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.470 2 DEBUG oslo_concurrency.lockutils [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.470 2 DEBUG oslo_concurrency.lockutils [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.471 2 DEBUG nova.compute.manager [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-unplugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.471 2 WARNING nova.compute.manager [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-unplugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.629 2 DEBUG nova.objects.instance [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.663 2 DEBUG nova.objects.instance [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.696 2 DEBUG nova.virt.libvirt.vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.696 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.697 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.701 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.704 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <name>instance-00000075</name>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:22:57</nova:creationTime>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target dev='tap6aa7ac72-3e'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.705 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.712 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> not found in domain: <domain type='kvm' id='140'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <name>instance-00000075</name>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:22:57</nova:creationTime>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <memory unit='KiB'>131072</memory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <vcpu placement='static'>1</vcpu>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <resource>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <partition>/machine</partition>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </resource>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <sysinfo type='smbios'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='manufacturer'>RDO</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='product'>OpenStack Compute</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <entry name='family'>Virtual Machine</entry>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <boot dev='hd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <smbios mode='sysinfo'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <vmcoreinfo state='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <cpu mode='custom' match='exact' check='full'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <vendor>AMD</vendor>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='x2apic'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc-deadline'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='hypervisor'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='tsc_adjust'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='spec-ctrl'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='stibp'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='arch-capabilities'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ssbd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='cmp_legacy'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='overflow-recov'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='succor'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='ibrs'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='amd-ssbd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='virt-ssbd'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='lbrv'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='tsc-scale'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='vmcb-clean'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='flushbyasid'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pause-filter'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='pfthreshold'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rdctl-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='mds-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='pschange-mc-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='gds-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='rfds-no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='xsaves'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='svm'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='require' name='topoext'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='npt'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <feature policy='disable' name='nrip-save'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <clock offset='utc'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <timer name='pit' tickpolicy='delay'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <timer name='hpet' present='no'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <on_poweroff>destroy</on_poweroff>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <on_reboot>restart</on_reboot>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <on_crash>destroy</on_crash>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <disk type='network' device='disk'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target dev='vda' bus='virtio'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='virtio-disk0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <disk type='network' device='cdrom'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <driver name='qemu' type='raw' cache='none'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <auth username='openstack'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <host name='192.168.122.100' port='6789'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target dev='sda' bus='sata'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <readonly/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='sata0-0-0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='0' model='pcie-root'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pcie.0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='1' port='0x10'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='2' port='0x11'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='3' port='0x12'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='4' port='0x13'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='5' port='0x14'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='6' port='0x15'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='7' port='0x16'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='8' port='0x17'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.8'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='9' port='0x18'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.9'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='10' port='0x19'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.10'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='11' port='0x1a'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.11'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='12' port='0x1b'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.12'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='13' port='0x1c'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.13'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='14' port='0x1d'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.14'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='15' port='0x1e'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.15'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='16' port='0x1f'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.16'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='17' port='0x20'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.17'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='18' port='0x21'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.18'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='19' port='0x22'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.19'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='20' port='0x23'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.20'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='21' port='0x24'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.21'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='22' port='0x25'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.22'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='23' port='0x26'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.23'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='24' port='0x27'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.24'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-root-port'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target chassis='25' port='0x28'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.25'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model name='pcie-pci-bridge'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='pci.26'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='usb'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <controller type='sata' index='0'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='ide'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </controller>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <interface type='ethernet'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target dev='tap6aa7ac72-3e'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model type='virtio'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <driver name='vhost' rx_queue_size='512'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <mtu size='1442'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='net0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <serial type='pty'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target type='isa-serial' port='0'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:        <model name='isa-serial'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      </target>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <console type='pty' tty='/dev/pts/2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <source path='/dev/pts/2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <target type='serial' port='0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='serial0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </console>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <input type='tablet' bus='usb'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='input0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='usb' bus='0' port='1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <input type='mouse' bus='ps2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='input1'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <input type='keyboard' bus='ps2'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='input2'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </input>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <listen type='address' address='::0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </graphics>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <audio id='1' type='none'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <model type='virtio' heads='1' primary='yes'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='video0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <watchdog model='itco' action='reset'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='watchdog0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </watchdog>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <memballoon model='virtio'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <stats period='10'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='balloon0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <rng model='virtio'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <backend model='random'>/dev/urandom</backend>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <alias name='rng0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <label>+107:+107</label>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <imagelabel>+107:+107</imagelabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </seclabel>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.712 2 WARNING nova.virt.libvirt.driver [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Detaching interface fa:16:3e:f8:68:88 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap300c071c-a3' not found.#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.712 2 DEBUG nova.virt.libvirt.vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.713 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.713 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.713 2 DEBUG os_vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap300c071c-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.718 2 INFO os_vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3')#033[00m
Oct 11 05:22:58 np0005481065 nova_compute[260935]: 2025-10-11 09:22:58.718 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:22:58</nova:creationTime>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  <nova:ports>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 05:22:58 np0005481065 nova_compute[260935]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:    </nova:port>
Oct 11 05:22:58 np0005481065 nova_compute[260935]:  </nova:ports>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:22:58 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:22:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 11 05:22:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:00Z|01269|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 05:23:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:00Z|01270|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:23:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:00Z|01271|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 05:23:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:00Z|01272|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.602 2 DEBUG nova.compute.manager [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.603 2 DEBUG oslo_concurrency.lockutils [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.604 2 DEBUG oslo_concurrency.lockutils [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.605 2 DEBUG oslo_concurrency.lockutils [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.606 2 DEBUG nova.compute.manager [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:00 np0005481065 nova_compute[260935]: 2025-10-11 09:23:00.606 2 WARNING nova.compute.manager [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:23:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.0 MiB/s wr, 78 op/s
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.593 2 INFO nova.network.neutron [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.594 2 DEBUG nova.network.neutron [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.692 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.746 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-300c071c-a312-4a9b-bd7a-1b16b9a35ae6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.813 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.814 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.815 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.815 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.816 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.817 2 INFO nova.compute.manager [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Terminating instance#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.819 2 DEBUG nova.compute.manager [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:23:01 np0005481065 kernel: tap6aa7ac72-3e (unregistering): left promiscuous mode
Oct 11 05:23:01 np0005481065 NetworkManager[44960]: <info>  [1760174581.9206] device (tap6aa7ac72-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:23:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:01Z|01273|binding|INFO|Releasing lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 from this chassis (sb_readonly=0)
Oct 11 05:23:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:01Z|01274|binding|INFO|Setting lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 down in Southbound
Oct 11 05:23:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:01Z|01275|binding|INFO|Removing iface tap6aa7ac72-3e ovn-installed in OVS
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.956 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a7:f5 10.100.0.5'], port_security=['fa:16:3e:38:a7:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fb90d02-96cd-4920-92ac-462cc457cb11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dc25595-7367-4d0c-a935-a850b2363806', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf7e0c92-efbd-4ce9-a9f4-9c0c91150cdc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.958 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 in datapath 6fb90d02-96cd-4920-92ac-462cc457cb11 unbound from our chassis#033[00m
Oct 11 05:23:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.961 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fb90d02-96cd-4920-92ac-462cc457cb11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:23:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.963 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[927cfde9-cfa0-4bc5-bee3-8dc92f603b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.964 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 namespace which is not needed anymore#033[00m
Oct 11 05:23:01 np0005481065 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 11 05:23:01 np0005481065 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000075.scope: Consumed 19.513s CPU time.
Oct 11 05:23:01 np0005481065 nova_compute[260935]: 2025-10-11 09:23:01.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:01 np0005481065 systemd-machined[215705]: Machine qemu-140-instance-00000075 terminated.
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.056 2 INFO nova.virt.libvirt.driver [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance destroyed successfully.#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.057 2 DEBUG nova.objects.instance [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.075 2 DEBUG nova.virt.libvirt.vif [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.075 2 DEBUG nova.network.os_vif_util [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.076 2 DEBUG nova.network.os_vif_util [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.077 2 DEBUG os_vif [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.079 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6aa7ac72-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.086 2 INFO os_vif [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e')#033[00m
Oct 11 05:23:02 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : haproxy version is 2.8.14-c23fe91
Oct 11 05:23:02 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : path to executable is /usr/sbin/haproxy
Oct 11 05:23:02 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [WARNING]  (388521) : Exiting Master process...
Oct 11 05:23:02 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [WARNING]  (388521) : Exiting Master process...
Oct 11 05:23:02 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [ALERT]    (388521) : Current worker (388525) exited with code 143 (Terminated)
Oct 11 05:23:02 np0005481065 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [WARNING]  (388521) : All workers exited. Exiting... (0)
Oct 11 05:23:02 np0005481065 systemd[1]: libpod-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96.scope: Deactivated successfully.
Oct 11 05:23:02 np0005481065 podman[392300]: 2025-10-11 09:23:02.201055344 +0000 UTC m=+0.058734089 container died ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.222 2 DEBUG nova.compute.manager [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-unplugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.222 2 DEBUG oslo_concurrency.lockutils [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.223 2 DEBUG oslo_concurrency.lockutils [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.224 2 DEBUG oslo_concurrency.lockutils [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.224 2 DEBUG nova.compute.manager [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-unplugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.224 2 DEBUG nova.compute.manager [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-unplugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:23:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96-userdata-shm.mount: Deactivated successfully.
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f2806711dd2ed8736dcb901ac2aef71c255be2695bfa4fe140c6bb3417babc2b-merged.mount: Deactivated successfully.
Oct 11 05:23:02 np0005481065 podman[392300]: 2025-10-11 09:23:02.348776173 +0000 UTC m=+0.206454898 container cleanup ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:23:02 np0005481065 systemd[1]: libpod-conmon-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96.scope: Deactivated successfully.
Oct 11 05:23:02 np0005481065 podman[392334]: 2025-10-11 09:23:02.481653982 +0000 UTC m=+0.095473425 container remove ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e350c72e-11b8-4eff-a6c9-071a33b1d376]: (4, ('Sat Oct 11 09:23:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 (ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96)\nba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96\nSat Oct 11 09:23:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 (ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96)\nba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9de6d340-d1f0-408d-9c79-4e1c99d4c8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb90d02-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:02 np0005481065 kernel: tap6fb90d02-90: left promiscuous mode
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[844500d5-00c1-445e-9ede-9a138344b3e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e90cbbb8-09cb-42f8-9482-2c04c1676201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb46361-f0d0-477e-acd9-c9dda64c92b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.584 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e760ce03-86c8-4106-9c8e-5d29fd62ae55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629651, 'reachable_time': 25072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392349, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.586 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:23:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.586 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[54c79994-9b63-4bea-aec0-c790374633b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:02 np0005481065 systemd[1]: run-netns-ovnmeta\x2d6fb90d02\x2d96cd\x2d4920\x2d92ac\x2d462cc457cb11.mount: Deactivated successfully.
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.799 2 DEBUG nova.compute.manager [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.799 2 DEBUG nova.compute.manager [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.800 2 DEBUG oslo_concurrency.lockutils [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.801 2 DEBUG oslo_concurrency.lockutils [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:02 np0005481065 nova_compute[260935]: 2025-10-11 09:23:02.801 2 DEBUG nova.network.neutron [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:23:03 np0005481065 nova_compute[260935]: 2025-10-11 09:23:03.698 2 INFO nova.virt.libvirt.driver [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deleting instance files /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_del#033[00m
Oct 11 05:23:03 np0005481065 nova_compute[260935]: 2025-10-11 09:23:03.700 2 INFO nova.virt.libvirt.driver [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deletion of /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_del complete#033[00m
Oct 11 05:23:03 np0005481065 nova_compute[260935]: 2025-10-11 09:23:03.768 2 INFO nova.compute.manager [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 1.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:23:03 np0005481065 nova_compute[260935]: 2025-10-11 09:23:03.769 2 DEBUG oslo.service.loopingcall [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:23:03 np0005481065 nova_compute[260935]: 2025-10-11 09:23:03.769 2 DEBUG nova.compute.manager [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:23:03 np0005481065 nova_compute[260935]: 2025-10-11 09:23:03.770 2 DEBUG nova.network.neutron [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.336 2 DEBUG nova.compute.manager [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.339 2 DEBUG oslo_concurrency.lockutils [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.339 2 DEBUG oslo_concurrency.lockutils [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.340 2 DEBUG oslo_concurrency.lockutils [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.340 2 DEBUG nova.compute.manager [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.341 2 WARNING nova.compute.manager [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.558 2 DEBUG nova.network.neutron [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.559 2 DEBUG nova.network.neutron [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.732 2 DEBUG oslo_concurrency.lockutils [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:23:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.819 2 DEBUG nova.network.neutron [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:23:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8094a4bb-e8c4-4343-bc23-168d38f91675 does not exist
Oct 11 05:23:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 32417e03-2432-4fe7-bb9a-c0e00b98d124 does not exist
Oct 11 05:23:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 70c01afb-a987-45a4-8e14-25795668cd20 does not exist
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:23:04 np0005481065 nova_compute[260935]: 2025-10-11 09:23:04.919 2 INFO nova.compute.manager [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:23:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:23:05 np0005481065 nova_compute[260935]: 2025-10-11 09:23:05.083 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:05 np0005481065 nova_compute[260935]: 2025-10-11 09:23:05.083 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0036581190580842614 of space, bias 1.0, pg target 1.0974357174252785 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:23:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.59242548 +0000 UTC m=+0.044878258 container create b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 05:23:05 np0005481065 systemd[1]: Started libpod-conmon-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope.
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.57469147 +0000 UTC m=+0.027144278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:23:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.700295164 +0000 UTC m=+0.152748002 container init b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.71253811 +0000 UTC m=+0.164990898 container start b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.71643237 +0000 UTC m=+0.168885208 container attach b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:23:05 np0005481065 wizardly_hertz[392636]: 167 167
Oct 11 05:23:05 np0005481065 systemd[1]: libpod-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope: Deactivated successfully.
Oct 11 05:23:05 np0005481065 conmon[392636]: conmon b1b08078307f02b063f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope/container/memory.events
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.721628716 +0000 UTC m=+0.174081504 container died b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:23:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-25ef04cb751192ca782db2fca021509eac38d668511699e7c913c96d00dc992f-merged.mount: Deactivated successfully.
Oct 11 05:23:05 np0005481065 podman[392620]: 2025-10-11 09:23:05.764501846 +0000 UTC m=+0.216954634 container remove b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:23:05 np0005481065 systemd[1]: libpod-conmon-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope: Deactivated successfully.
Oct 11 05:23:05 np0005481065 podman[392662]: 2025-10-11 09:23:05.992896902 +0000 UTC m=+0.043846259 container create 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:23:06 np0005481065 systemd[1]: Started libpod-conmon-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope.
Oct 11 05:23:06 np0005481065 podman[392662]: 2025-10-11 09:23:05.975495801 +0000 UTC m=+0.026445178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:23:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:06 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:06 np0005481065 podman[392662]: 2025-10-11 09:23:06.106672262 +0000 UTC m=+0.157621669 container init 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:23:06 np0005481065 podman[392662]: 2025-10-11 09:23:06.122461348 +0000 UTC m=+0.173410705 container start 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:23:06 np0005481065 podman[392662]: 2025-10-11 09:23:06.12572413 +0000 UTC m=+0.176673497 container attach 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.131 2 DEBUG oslo_concurrency.processutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.481 2 DEBUG nova.compute.manager [req-34c10473-3da8-4f74-a90a-b713284e2442 req-d140928c-3cbf-455c-9701-2bdd6c79e625 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-deleted-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133143860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.651 2 DEBUG oslo_concurrency.processutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.660 2 DEBUG nova.compute.provider_tree [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.693 2 DEBUG nova.scheduler.client.report [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.732 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.782 2 INFO nova.scheduler.client.report [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5#033[00m
Oct 11 05:23:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 05:23:06 np0005481065 nova_compute[260935]: 2025-10-11 09:23:06.932 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:07 np0005481065 nova_compute[260935]: 2025-10-11 09:23:07.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:07 np0005481065 jovial_hodgkin[392678]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:23:07 np0005481065 jovial_hodgkin[392678]: --> relative data size: 1.0
Oct 11 05:23:07 np0005481065 jovial_hodgkin[392678]: --> All data devices are unavailable
Oct 11 05:23:07 np0005481065 nova_compute[260935]: 2025-10-11 09:23:07.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:07 np0005481065 systemd[1]: libpod-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope: Deactivated successfully.
Oct 11 05:23:07 np0005481065 systemd[1]: libpod-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope: Consumed 1.123s CPU time.
Oct 11 05:23:07 np0005481065 podman[392662]: 2025-10-11 09:23:07.322459002 +0000 UTC m=+1.373408379 container died 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:23:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343-merged.mount: Deactivated successfully.
Oct 11 05:23:07 np0005481065 podman[392662]: 2025-10-11 09:23:07.712350215 +0000 UTC m=+1.763299602 container remove 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:23:07 np0005481065 systemd[1]: libpod-conmon-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope: Deactivated successfully.
Oct 11 05:23:07 np0005481065 nova_compute[260935]: 2025-10-11 09:23:07.948 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:07 np0005481065 nova_compute[260935]: 2025-10-11 09:23:07.948 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:07 np0005481065 nova_compute[260935]: 2025-10-11 09:23:07.948 2 INFO nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Shelving#033[00m
Oct 11 05:23:07 np0005481065 nova_compute[260935]: 2025-10-11 09:23:07.983 2 DEBUG nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 11 05:23:08 np0005481065 podman[392884]: 2025-10-11 09:23:08.625001301 +0000 UTC m=+0.048458099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:23:08 np0005481065 podman[392884]: 2025-10-11 09:23:08.731290251 +0000 UTC m=+0.154746999 container create ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:23:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 11 05:23:08 np0005481065 systemd[1]: Started libpod-conmon-ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e.scope.
Oct 11 05:23:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:08 np0005481065 podman[392884]: 2025-10-11 09:23:08.962142045 +0000 UTC m=+0.385598853 container init ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:23:08 np0005481065 podman[392884]: 2025-10-11 09:23:08.97435612 +0000 UTC m=+0.397812868 container start ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:08 np0005481065 optimistic_goldberg[392900]: 167 167
Oct 11 05:23:08 np0005481065 systemd[1]: libpod-ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e.scope: Deactivated successfully.
Oct 11 05:23:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:09 np0005481065 nova_compute[260935]: 2025-10-11 09:23:09.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174574.1085095, c677661f-bc62-4954-9130-09b285e6abe4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:09 np0005481065 nova_compute[260935]: 2025-10-11 09:23:09.111 2 INFO nova.compute.manager [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:23:09 np0005481065 podman[392884]: 2025-10-11 09:23:09.129364364 +0000 UTC m=+0.552821112 container attach ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:23:09 np0005481065 podman[392884]: 2025-10-11 09:23:09.130227659 +0000 UTC m=+0.553684407 container died ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:09 np0005481065 nova_compute[260935]: 2025-10-11 09:23:09.161 2 DEBUG nova.compute.manager [None req-f2693f11-7788-4e2b-a260-c8ed5e837a54 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f8c06a9ec4d65ac1ce622ffacb6483c974e9f78fd3eb9dd62df0dbf0263d5607-merged.mount: Deactivated successfully.
Oct 11 05:23:09 np0005481065 podman[392884]: 2025-10-11 09:23:09.467421155 +0000 UTC m=+0.890877913 container remove ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:09 np0005481065 nova_compute[260935]: 2025-10-11 09:23:09.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:09 np0005481065 systemd[1]: libpod-conmon-ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e.scope: Deactivated successfully.
Oct 11 05:23:09 np0005481065 podman[392924]: 2025-10-11 09:23:09.74861163 +0000 UTC m=+0.071210291 container create e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:23:09 np0005481065 podman[392924]: 2025-10-11 09:23:09.703036254 +0000 UTC m=+0.025634995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:23:09 np0005481065 systemd[1]: Started libpod-conmon-e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04.scope.
Oct 11 05:23:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:10 np0005481065 podman[392924]: 2025-10-11 09:23:10.025291228 +0000 UTC m=+0.347889919 container init e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:23:10 np0005481065 podman[392924]: 2025-10-11 09:23:10.040849427 +0000 UTC m=+0.363448108 container start e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:23:10 np0005481065 podman[392924]: 2025-10-11 09:23:10.093175534 +0000 UTC m=+0.415774225 container attach e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:23:10 np0005481065 podman[392942]: 2025-10-11 09:23:10.148456734 +0000 UTC m=+0.191145386 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 05:23:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 125 KiB/s wr, 46 op/s
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]: {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:    "0": [
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:        {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "devices": [
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "/dev/loop3"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            ],
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_name": "ceph_lv0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_size": "21470642176",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "name": "ceph_lv0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "tags": {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cluster_name": "ceph",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.crush_device_class": "",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.encrypted": "0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osd_id": "0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.type": "block",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.vdo": "0"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            },
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "type": "block",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "vg_name": "ceph_vg0"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:        }
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:    ],
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:    "1": [
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:        {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "devices": [
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "/dev/loop4"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            ],
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_name": "ceph_lv1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_size": "21470642176",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "name": "ceph_lv1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "tags": {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cluster_name": "ceph",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.crush_device_class": "",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.encrypted": "0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osd_id": "1",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.type": "block",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.vdo": "0"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            },
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "type": "block",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "vg_name": "ceph_vg1"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:        }
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:    ],
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:    "2": [
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:        {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "devices": [
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "/dev/loop5"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            ],
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_name": "ceph_lv2",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_size": "21470642176",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "name": "ceph_lv2",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "tags": {
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.cluster_name": "ceph",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.crush_device_class": "",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.encrypted": "0",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osd_id": "2",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.type": "block",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:                "ceph.vdo": "0"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            },
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "type": "block",
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:            "vg_name": "ceph_vg2"
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:        }
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]:    ]
Oct 11 05:23:10 np0005481065 trusting_tesla[392941]: }
Oct 11 05:23:10 np0005481065 systemd[1]: libpod-e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04.scope: Deactivated successfully.
Oct 11 05:23:10 np0005481065 podman[392924]: 2025-10-11 09:23:10.846976567 +0000 UTC m=+1.169575258 container died e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:23:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2-merged.mount: Deactivated successfully.
Oct 11 05:23:11 np0005481065 kernel: tap0f516e4b-c2 (unregistering): left promiscuous mode
Oct 11 05:23:11 np0005481065 NetworkManager[44960]: <info>  [1760174591.4311] device (tap0f516e4b-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:23:11 np0005481065 podman[392924]: 2025-10-11 09:23:11.434151346 +0000 UTC m=+1.756750017 container remove e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:23:11 np0005481065 systemd[1]: libpod-conmon-e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04.scope: Deactivated successfully.
Oct 11 05:23:11 np0005481065 nova_compute[260935]: 2025-10-11 09:23:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:11Z|01276|binding|INFO|Releasing lport 0f516e4b-c284-4151-944c-8a7d98f695b5 from this chassis (sb_readonly=0)
Oct 11 05:23:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:11Z|01277|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 down in Southbound
Oct 11 05:23:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:11Z|01278|binding|INFO|Removing iface tap0f516e4b-c2 ovn-installed in OVS
Oct 11 05:23:11 np0005481065 nova_compute[260935]: 2025-10-11 09:23:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:11 np0005481065 nova_compute[260935]: 2025-10-11 09:23:11.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.491 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.493 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 unbound from our chassis#033[00m
Oct 11 05:23:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.497 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0007a0de-db42-4add-9b55-6d92ceffa860, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:23:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.499 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1644e13-6339-48e3-82df-9f9884596996]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.500 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace which is not needed anymore#033[00m
Oct 11 05:23:11 np0005481065 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 11 05:23:11 np0005481065 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000079.scope: Consumed 13.635s CPU time.
Oct 11 05:23:11 np0005481065 systemd-machined[215705]: Machine qemu-144-instance-00000079 terminated.
Oct 11 05:23:11 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : haproxy version is 2.8.14-c23fe91
Oct 11 05:23:11 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : path to executable is /usr/sbin/haproxy
Oct 11 05:23:11 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [WARNING]  (392060) : Exiting Master process...
Oct 11 05:23:11 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [WARNING]  (392060) : Exiting Master process...
Oct 11 05:23:11 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [ALERT]    (392060) : Current worker (392064) exited with code 143 (Terminated)
Oct 11 05:23:11 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [WARNING]  (392060) : All workers exited. Exiting... (0)
Oct 11 05:23:11 np0005481065 systemd[1]: libpod-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f.scope: Deactivated successfully.
Oct 11 05:23:11 np0005481065 podman[393037]: 2025-10-11 09:23:11.774758319 +0000 UTC m=+0.140020053 container died 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 05:23:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f-userdata-shm.mount: Deactivated successfully.
Oct 11 05:23:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8f47c3c136e5a0070dae877380d4b02cb73021141c299a51e87e245e6899f06b-merged.mount: Deactivated successfully.
Oct 11 05:23:11 np0005481065 podman[393037]: 2025-10-11 09:23:11.876782148 +0000 UTC m=+0.242043842 container cleanup 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:23:11 np0005481065 systemd[1]: libpod-conmon-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f.scope: Deactivated successfully.
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.015 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance shutdown successfully after 4 seconds.#033[00m
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.026 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.#033[00m
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.027 2 DEBUG nova.objects.instance [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'numa_topology' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:12 np0005481065 podman[393147]: 2025-10-11 09:23:12.155441402 +0000 UTC m=+0.242458184 container remove 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.164 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c3b07c-b514-4c66-b58f-ec5d6c8d9651]: (4, ('Sat Oct 11 09:23:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f)\n08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f\nSat Oct 11 09:23:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f)\n08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.166 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6b72ecfd-38c0-4f85-b5b1-a0bd2e16006b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.167 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:12 np0005481065 kernel: tap0007a0de-d0: left promiscuous mode
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.204 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2751334-c7c8-420d-865a-c0c4d3c808d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b2f6ad-1540-4f8e-a569-5b907cc19457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30fa5a89-1639-4c9b-8ef1-742352cda691]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4680da4-f00c-4135-b7a6-8bee5de5e433]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640873, 'reachable_time': 28634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393192, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 systemd[1]: run-netns-ovnmeta\x2d0007a0de\x2ddb42\x2d4add\x2d9b55\x2d6d92ceffa860.mount: Deactivated successfully.
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.268 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:23:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.268 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[56cf7863-6362-4c02-a4da-2372abc8b69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.420 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Beginning cold snapshot process#033[00m
Oct 11 05:23:12 np0005481065 podman[393208]: 2025-10-11 09:23:12.425049161 +0000 UTC m=+0.046755111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:23:12 np0005481065 podman[393208]: 2025-10-11 09:23:12.52495533 +0000 UTC m=+0.146661230 container create d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:23:12 np0005481065 systemd[1]: Started libpod-conmon-d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565.scope.
Oct 11 05:23:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:12 np0005481065 podman[393208]: 2025-10-11 09:23:12.706114702 +0000 UTC m=+0.327820592 container init d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:23:12 np0005481065 podman[393208]: 2025-10-11 09:23:12.719394867 +0000 UTC m=+0.341100737 container start d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.722 2 DEBUG nova.virt.libvirt.imagebackend [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 05:23:12 np0005481065 cool_liskov[393225]: 167 167
Oct 11 05:23:12 np0005481065 systemd[1]: libpod-d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565.scope: Deactivated successfully.
Oct 11 05:23:12 np0005481065 podman[393208]: 2025-10-11 09:23:12.737739225 +0000 UTC m=+0.359445085 container attach d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:23:12 np0005481065 podman[393208]: 2025-10-11 09:23:12.738192467 +0000 UTC m=+0.359898337 container died d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 134 KiB/s wr, 47 op/s
Oct 11 05:23:12 np0005481065 nova_compute[260935]: 2025-10-11 09:23:12.910 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] creating snapshot(72e2a98ef8e74bc4b5a1fdfe221cc8fb) on rbd image(ef21f945-0076-48fa-8d22-c5376e26d278_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:23:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-539b1bce0e1bd782e6a327b9702766efd164028e619bb38fda8ef4cc04283645-merged.mount: Deactivated successfully.
Oct 11 05:23:13 np0005481065 podman[393208]: 2025-10-11 09:23:13.092119075 +0000 UTC m=+0.713824986 container remove d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:23:13 np0005481065 systemd[1]: libpod-conmon-d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565.scope: Deactivated successfully.
Oct 11 05:23:13 np0005481065 podman[393302]: 2025-10-11 09:23:13.331601014 +0000 UTC m=+0.032144748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:23:13 np0005481065 podman[393302]: 2025-10-11 09:23:13.432690907 +0000 UTC m=+0.133234621 container create 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:23:13 np0005481065 systemd[1]: Started libpod-conmon-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope.
Oct 11 05:23:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:13 np0005481065 podman[393302]: 2025-10-11 09:23:13.644451823 +0000 UTC m=+0.344995547 container init 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:23:13 np0005481065 podman[393302]: 2025-10-11 09:23:13.660403073 +0000 UTC m=+0.360946767 container start 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:23:13 np0005481065 podman[393302]: 2025-10-11 09:23:13.668665106 +0000 UTC m=+0.369208790 container attach 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 05:23:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 11 05:23:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 11 05:23:14 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]: {
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "osd_id": 2,
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "type": "bluestore"
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:    },
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "osd_id": 0,
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "type": "bluestore"
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:    },
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "osd_id": 1,
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:        "type": "bluestore"
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]:    }
Oct 11 05:23:14 np0005481065 zen_ritchie[393319]: }
Oct 11 05:23:14 np0005481065 systemd[1]: libpod-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope: Deactivated successfully.
Oct 11 05:23:14 np0005481065 systemd[1]: libpod-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope: Consumed 1.123s CPU time.
Oct 11 05:23:14 np0005481065 podman[393302]: 2025-10-11 09:23:14.781298585 +0000 UTC m=+1.481842279 container died 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:23:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 47 KiB/s wr, 20 op/s
Oct 11 05:23:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895-merged.mount: Deactivated successfully.
Oct 11 05:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:15 np0005481065 nova_compute[260935]: 2025-10-11 09:23:15.385 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] cloning vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk@72e2a98ef8e74bc4b5a1fdfe221cc8fb to images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:23:15 np0005481065 podman[393302]: 2025-10-11 09:23:15.536846747 +0000 UTC m=+2.237390441 container remove 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:15 np0005481065 systemd[1]: libpod-conmon-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope: Deactivated successfully.
Oct 11 05:23:15 np0005481065 podman[393364]: 2025-10-11 09:23:15.60145121 +0000 UTC m=+0.422387481 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid)
Oct 11 05:23:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:23:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:23:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:23:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:23:15 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 43b08548-e02c-4a3c-b85c-af18422ddb35 does not exist
Oct 11 05:23:15 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 012a3b34-c7d9-4f1c-9935-3324c3f0d104 does not exist
Oct 11 05:23:15 np0005481065 nova_compute[260935]: 2025-10-11 09:23:15.727 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] flattening images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 05:23:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:15Z|01279|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:23:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:15Z|01280|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:23:15 np0005481065 nova_compute[260935]: 2025-10-11 09:23:15.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:16Z|01281|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:23:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:16Z|01282|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:23:16 np0005481065 nova_compute[260935]: 2025-10-11 09:23:16.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:16 np0005481065 nova_compute[260935]: 2025-10-11 09:23:16.288 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] removing snapshot(72e2a98ef8e74bc4b5a1fdfe221cc8fb) on rbd image(ef21f945-0076-48fa-8d22-c5376e26d278_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 05:23:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:23:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:23:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 11 05:23:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 11 05:23:16 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 11 05:23:16 np0005481065 nova_compute[260935]: 2025-10-11 09:23:16.703 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] creating snapshot(snap) on rbd image(a24df9e3-57ef-4be8-af9c-14e8b8d2e436) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:23:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 11 05:23:17 np0005481065 nova_compute[260935]: 2025-10-11 09:23:17.055 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174582.0539756, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:17 np0005481065 nova_compute[260935]: 2025-10-11 09:23:17.055 2 INFO nova.compute.manager [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:23:17 np0005481065 nova_compute[260935]: 2025-10-11 09:23:17.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:17 np0005481065 nova_compute[260935]: 2025-10-11 09:23:17.214 2 DEBUG nova.compute.manager [None req-33bc5a34-5573-4828-a3ce-3e6852adde9d - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:17 np0005481065 nova_compute[260935]: 2025-10-11 09:23:17.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 11 05:23:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 11 05:23:17 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 11 05:23:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 145 op/s
Oct 11 05:23:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.286 2 DEBUG nova.compute.manager [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.286 2 DEBUG oslo_concurrency.lockutils [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.287 2 DEBUG oslo_concurrency.lockutils [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.287 2 DEBUG oslo_concurrency.lockutils [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.288 2 DEBUG nova.compute.manager [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.288 2 WARNING nova.compute.manager [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.621 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Snapshot image upload complete#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.621 2 DEBUG nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.838 2 INFO nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Shelve offloading#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.850 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.850 2 DEBUG nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.853 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.854 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:19 np0005481065 nova_compute[260935]: 2025-10-11 09:23:19.854 2 DEBUG nova.network.neutron [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:23:20 np0005481065 podman[393525]: 2025-10-11 09:23:20.785439355 +0000 UTC m=+0.078788984 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:23:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 134 op/s
Oct 11 05:23:20 np0005481065 podman[393526]: 2025-10-11 09:23:20.836355052 +0000 UTC m=+0.130227806 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:23:21 np0005481065 nova_compute[260935]: 2025-10-11 09:23:21.545 2 DEBUG nova.compute.manager [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:21 np0005481065 nova_compute[260935]: 2025-10-11 09:23:21.546 2 DEBUG oslo_concurrency.lockutils [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:21 np0005481065 nova_compute[260935]: 2025-10-11 09:23:21.546 2 DEBUG oslo_concurrency.lockutils [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:21 np0005481065 nova_compute[260935]: 2025-10-11 09:23:21.547 2 DEBUG oslo_concurrency.lockutils [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:21 np0005481065 nova_compute[260935]: 2025-10-11 09:23:21.547 2 DEBUG nova.compute.manager [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:21 np0005481065 nova_compute[260935]: 2025-10-11 09:23:21.548 2 WARNING nova.compute.manager [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state shelved and task_state shelving_offloading.#033[00m
Oct 11 05:23:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.551 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:23:00 2001:db8:0:1:f816:3eff:febe:2300 2001:db8::f816:3eff:febe:2300'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:febe:2300/64 2001:db8::f816:3eff:febe:2300/64', 'neutron:device_id': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c1fabca0-ae77-4e48-b93b-3023955db235) old=Port_Binding(mac=['fa:16:3e:be:23:00 2001:db8::f816:3eff:febe:2300'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febe:2300/64', 'neutron:device_id': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.553 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c1fabca0-ae77-4e48-b93b-3023955db235 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 updated#033[00m
Oct 11 05:23:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.556 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0bc2c62-89ab-4ce1-9157-2273788b9018, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:23:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.557 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d47ac421-e2c9-46b0-94ae-7a7cc09e003f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:22 np0005481065 nova_compute[260935]: 2025-10-11 09:23:22.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:22 np0005481065 nova_compute[260935]: 2025-10-11 09:23:22.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Oct 11 05:23:23 np0005481065 nova_compute[260935]: 2025-10-11 09:23:23.234 2 DEBUG nova.network.neutron [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:23 np0005481065 nova_compute[260935]: 2025-10-11 09:23:23.549 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:23 np0005481065 nova_compute[260935]: 2025-10-11 09:23:23.956 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 11 05:23:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 11 05:23:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:23:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.250 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.251 2 DEBUG nova.objects.instance [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'resources' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.367 2 DEBUG nova.virt.libvirt.vif [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member',shelved_at='2025-10-11T09:23:19.621852',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a24df9e3-57ef-4be8-af9c-14e8b8d2e436'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:12Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.368 2 DEBUG nova.network.os_vif_util [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.369 2 DEBUG nova.network.os_vif_util [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.370 2 DEBUG os_vif [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f516e4b-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.380 2 DEBUG nova.compute.manager [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.380 2 DEBUG nova.compute.manager [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.381 2 DEBUG oslo_concurrency.lockutils [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.381 2 DEBUG oslo_concurrency.lockutils [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.382 2 DEBUG nova.network.neutron [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.389 2 INFO os_vif [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.693 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174591.69287, ef21f945-0076-48fa-8d22-c5376e26d278 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.694 2 INFO nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:23:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:23:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3638457583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:23:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:23:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3638457583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:23:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.906 2 DEBUG nova.compute.manager [None req-29dc8321-0927-4e5f-96aa-47cdcece444b - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.914 2 DEBUG nova.compute.manager [None req-29dc8321-0927-4e5f-96aa-47cdcece444b - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.982 2 INFO nova.compute.manager [None req-29dc8321-0927-4e5f-96aa-47cdcece444b - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.990 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting instance files /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del#033[00m
Oct 11 05:23:26 np0005481065 nova_compute[260935]: 2025-10-11 09:23:26.991 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deletion of /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del complete#033[00m
Oct 11 05:23:27 np0005481065 nova_compute[260935]: 2025-10-11 09:23:27.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:27 np0005481065 nova_compute[260935]: 2025-10-11 09:23:27.319 2 INFO nova.scheduler.client.report [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Deleted allocations for instance ef21f945-0076-48fa-8d22-c5376e26d278#033[00m
Oct 11 05:23:27 np0005481065 nova_compute[260935]: 2025-10-11 09:23:27.531 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:27 np0005481065 nova_compute[260935]: 2025-10-11 09:23:27.532 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:27 np0005481065 nova_compute[260935]: 2025-10-11 09:23:27.648 2 DEBUG oslo_concurrency.processutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241683377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.130 2 DEBUG oslo_concurrency.processutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.139 2 DEBUG nova.compute.provider_tree [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.204 2 DEBUG nova.scheduler.client.report [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.339 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.480 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.830 2 DEBUG nova.network.neutron [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.832 2 DEBUG nova.network.neutron [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": null, "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:28 np0005481065 nova_compute[260935]: 2025-10-11 09:23:28.880 2 DEBUG oslo_concurrency.lockutils [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Oct 11 05:23:31 np0005481065 nova_compute[260935]: 2025-10-11 09:23:31.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:32 np0005481065 nova_compute[260935]: 2025-10-11 09:23:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:32 np0005481065 nova_compute[260935]: 2025-10-11 09:23:32.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.4 KiB/s wr, 104 op/s
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.346 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.347 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.347 2 INFO nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Unshelving#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.879 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.880 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.885 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'pci_requests' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.947 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:33 np0005481065 nova_compute[260935]: 2025-10-11 09:23:33.947 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.017 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'numa_topology' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.200 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.216 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.216 2 INFO nova.compute.claims [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.649 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:23:34 np0005481065 nova_compute[260935]: 2025-10-11 09:23:34.792 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.3 KiB/s wr, 97 op/s
Oct 11 05:23:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031109721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.294 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.302 2 DEBUG nova.compute.provider_tree [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.326 2 DEBUG nova.scheduler.client.report [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.389 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.398 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.410 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.411 2 INFO nova.compute.claims [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.620 2 INFO nova.network.neutron [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating port 0f516e4b-c284-4151-944c-8a7d98f695b5 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct 11 05:23:35 np0005481065 nova_compute[260935]: 2025-10-11 09:23:35.725 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403048579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.186 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.198 2 DEBUG nova.compute.provider_tree [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.218 2 DEBUG nova.scheduler.client.report [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.253 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.255 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.265 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.267 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.267 2 DEBUG nova.network.neutron [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.313 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.314 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.336 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.343 2 DEBUG nova.compute.manager [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.344 2 DEBUG nova.compute.manager [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.344 2 DEBUG oslo_concurrency.lockutils [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.359 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.469 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.471 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.472 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Creating image(s)#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.505 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.534 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.560 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.564 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.608 2 DEBUG nova.policy [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.639 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.640 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.641 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.642 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.673 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.677 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 85cf93a0-2068-4567-a399-b8d52e672913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.2 KiB/s wr, 86 op/s
Oct 11 05:23:36 np0005481065 nova_compute[260935]: 2025-10-11 09:23:36.844 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.088 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 85cf93a0-2068-4567-a399-b8d52e672913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.158 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.242 2 DEBUG nova.objects.instance [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 85cf93a0-2068-4567-a399-b8d52e672913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.348 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.348 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Ensure instance console log exists: /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.348 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.349 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.349 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:37 np0005481065 nova_compute[260935]: 2025-10-11 09:23:37.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.550 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully created port: 85875c6f-3380-42bc-9e4b-0b66df391e0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.801 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.827 2 DEBUG nova.network.neutron [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.877 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.877 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.878 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.878 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.974 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.977 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:23:38 np0005481065 nova_compute[260935]: 2025-10-11 09:23:38.978 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating image(s)#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.011 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.017 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.020 2 DEBUG oslo_concurrency.lockutils [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.021 2 DEBUG nova.network.neutron [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.094 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.120 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.124 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "d5adfb1452ade04ee58ac0834f4c40e7493c553e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.126 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "d5adfb1452ade04ee58ac0834f4c40e7493c553e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.307 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully created port: f45a21da-34ce-448b-92cc-2639a191c755 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:23:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342507186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.357 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.417 2 DEBUG nova.virt.libvirt.imagebackend [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.495 2 DEBUG nova.virt.libvirt.imagebackend [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.496 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] cloning images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436@snap to None/ef21f945-0076-48fa-8d22-c5376e26d278_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.651 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "d5adfb1452ade04ee58ac0834f4c40e7493c553e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.840 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'migration_context' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.846 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.847 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.847 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.852 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.852 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.856 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.856 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.946 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.946 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:39 np0005481065 nova_compute[260935]: 2025-10-11 09:23:39.954 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] flattening vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.078 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.249 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.249 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.255 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.257 2 INFO nova.compute.claims [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.311 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.312 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2928MB free_disk=59.80991744995117GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.312 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.366 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Image rbd:vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.367 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.367 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Ensure instance console log exists: /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.368 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.368 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.368 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.371 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start _get_guest_xml network_info=[{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:23:07Z,direct_url=<?>,disk_format='raw',id=a24df9e3-57ef-4be8-af9c-14e8b8d2e436,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1752003988-shelved',owner='a13210f275984f3eadf85eba0c749d99',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:23:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.374 2 WARNING nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.379 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.379 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.384 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.384 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.385 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.385 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:23:07Z,direct_url=<?>,disk_format='raw',id=a24df9e3-57ef-4be8-af9c-14e8b8d2e436,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1752003988-shelved',owner='a13210f275984f3eadf85eba0c749d99',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:23:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.385 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.388 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.406 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.522 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:40 np0005481065 podman[394080]: 2025-10-11 09:23:40.763933367 +0000 UTC m=+0.073301180 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:23:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 11 05:23:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:23:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589241222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.881 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.912 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:40 np0005481065 nova_compute[260935]: 2025-10-11 09:23:40.916 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958725479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.061 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.073 2 DEBUG nova.compute.provider_tree [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.107 2 DEBUG nova.scheduler.client.report [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.222 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.223 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.228 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.376 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.377 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:23:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:23:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481547396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.403 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.405 2 DEBUG nova.virt.libvirt.vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='a24df9e3-57ef-4be8-af9c-14e8b8d2e436',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_h
w_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member',shelved_at='2025-10-11T09:23:19.621852',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a24df9e3-57ef-4be8-af9c-14e8b8d2e436'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:33Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.405 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.407 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.409 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.436 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <uuid>ef21f945-0076-48fa-8d22-c5376e26d278</uuid>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <name>instance-00000079</name>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestShelveInstance-server-1752003988</nova:name>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:23:40</nova:creationTime>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:user uuid="67e20c1f7ae24f2f8b9e25e0d8ce61ca">tempest-TestShelveInstance-243029510-project-member</nova:user>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:project uuid="a13210f275984f3eadf85eba0c749d99">tempest-TestShelveInstance-243029510</nova:project>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="a24df9e3-57ef-4be8-af9c-14e8b8d2e436"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <nova:port uuid="0f516e4b-c284-4151-944c-8a7d98f695b5">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <entry name="serial">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <entry name="uuid">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk.config">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:4b:08:14"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <target dev="tap0f516e4b-c2"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log" append="off"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <input type="keyboard" bus="usb"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:23:41 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:23:41 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:23:41 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:23:41 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.436 2 DEBUG nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Preparing to wait for external event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.437 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.437 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.437 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.439 2 DEBUG nova.virt.libvirt.vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='a24df9e3-57ef-4be8-af9c-14e8b8d2e436',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member',shelved_at='2025-10-11T09:23:19.621852',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a24df9e3-57ef-4be8-af9c-14e8b8d2e436'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:33Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.439 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.440 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.440 2 DEBUG os_vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.444 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f516e4b-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f516e4b-c2, col_values=(('external_ids', {'iface-id': '0f516e4b-c284-4151-944c-8a7d98f695b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:08:14', 'vm-uuid': 'ef21f945-0076-48fa-8d22-c5376e26d278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:41 np0005481065 NetworkManager[44960]: <info>  [1760174621.4564] manager: (tap0f516e4b-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.466 2 INFO os_vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.468 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 85cf93a0-2068-4567-a399-b8d52e672913 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ef21f945-0076-48fa-8d22-c5376e26d278 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 527bff5a-2d35-406a-8702-f80298a22342 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.470 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.470 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.472 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.563 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.563 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.563 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No VIF found with MAC fa:16:3e:4b:08:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.564 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Using config drive#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.588 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.599 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.600 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.600 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Creating image(s)#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.625 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.653 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.685 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.689 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.735 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.775 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'keypairs' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.782 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.783 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.784 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.785 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.815 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.819 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 527bff5a-2d35-406a-8702-f80298a22342_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:41 np0005481065 nova_compute[260935]: 2025-10-11 09:23:41.905 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.160 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 527bff5a-2d35-406a-8702-f80298a22342_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.240 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.356 2 DEBUG nova.objects.instance [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 527bff5a-2d35-406a-8702-f80298a22342 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.371 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.372 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Ensure instance console log exists: /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.372 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.373 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.373 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.399 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating config drive at /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.405 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntmi9zyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:23:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943104708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.464 2 DEBUG nova.policy [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.475 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.484 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.502 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.527 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.528 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.576 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntmi9zyc" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.616 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.621 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 189 op/s
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.918 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:42 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.920 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting local config drive /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config because it was imported into RBD.#033[00m
Oct 11 05:23:42 np0005481065 kernel: tap0f516e4b-c2: entered promiscuous mode
Oct 11 05:23:42 np0005481065 NetworkManager[44960]: <info>  [1760174622.9932] manager: (tap0f516e4b-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/508)
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:42.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:42Z|01283|binding|INFO|Claiming lport 0f516e4b-c284-4151-944c-8a7d98f695b5 for this chassis.
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:42Z|01284|binding|INFO|0f516e4b-c284-4151-944c-8a7d98f695b5: Claiming fa:16:3e:4b:08:14 10.100.0.10
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.0338] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.0352] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.037 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.040 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 bound to our chassis#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.043 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0007a0de-db42-4add-9b55-6d92ceffa860#033[00m
Oct 11 05:23:43 np0005481065 systemd-machined[215705]: New machine qemu-145-instance-00000079.
Oct 11 05:23:43 np0005481065 systemd-udevd[394424]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df18b95c-cd62-4855-a4cd-c894e546cc52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.065 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0007a0de-d1 in ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:23:43 np0005481065 systemd[1]: Started Virtual Machine qemu-145-instance-00000079.
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.069 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0007a0de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.069 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71d26d76-982c-488e-84af-a751aae81fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.073 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6e1d79-1c15-41ae-9bed-8bb6c11b1c63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.0872] device (tap0f516e4b-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.0889] device (tap0f516e4b-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.098 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[47a9085b-83e0-4a96-9dc1-41b6a354739e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.130 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ddce8097-2992-4dc3-876b-8f0b4ed70330]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.168 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[53a750be-6451-4ffe-89d7-4827891c85b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 systemd-udevd[394427]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.175 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[574e8cb9-165a-4f80-9721-477bfc3d9690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.1769] manager: (tap0007a0de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/511)
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.213 2 DEBUG nova.network.neutron [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.214 2 DEBUG nova.network.neutron [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.255 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c722428c-e81e-448c-b215-152d2491c5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.257 2 DEBUG oslo_concurrency.lockutils [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.258 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e532d77-69fa-45de-b008-f5a9b8aa7682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.268 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully updated port: 85875c6f-3380-42bc-9e4b-0b66df391e0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.2880] device (tap0007a0de-d0): carrier: link connected
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.296 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[365c7f9a-b59d-4825-b5b2-0b639582cf8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01285|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01286|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.327 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8562ff12-aa46-4ca7-92cf-785281edd293]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646799, 'reachable_time': 34457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394459, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01287|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01288|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.367 2 DEBUG nova.compute.manager [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.368 2 DEBUG nova.compute.manager [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.368 2 DEBUG oslo_concurrency.lockutils [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.368 2 DEBUG oslo_concurrency.lockutils [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.369 2 DEBUG nova.network.neutron [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.368 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[161da8fa-9262-415c-a255-dff27bbf164a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:ca09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646799, 'tstamp': 646799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394461, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01289|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 up in Southbound
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01290|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 ovn-installed in OVS
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[daffd302-2b3f-43fb-a131-d092d7f8cbc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646799, 'reachable_time': 34457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394462, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.429 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79a5401f-3a34-41f5-b675-474ec46b2787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.429 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.430 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.430 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.470 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.470 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.502 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d164bbb1-3b05-49f6-87bc-9c62133025df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0007a0de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:43 np0005481065 kernel: tap0007a0de-d0: entered promiscuous mode
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 NetworkManager[44960]: <info>  [1760174623.5065] manager: (tap0007a0de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0007a0de-d0, col_values=(('external_ids', {'iface-id': 'e89e8ea5-8038-433d-8c45-d2ec20f4f896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:43Z|01291|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.542 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.543 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92bc4d88-0892-4e10-aad8-2dae7bf736c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.544 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:23:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.544 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'env', 'PROCESS_TAG=haproxy-0007a0de-db42-4add-9b55-6d92ceffa860', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0007a0de-db42-4add-9b55-6d92ceffa860.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.758 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.759 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.759 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:43 np0005481065 nova_compute[260935]: 2025-10-11 09:23:43.766 2 DEBUG nova.network.neutron [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:23:44 np0005481065 podman[394536]: 2025-10-11 09:23:44.025561462 +0000 UTC m=+0.087905641 container create 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:23:44 np0005481065 podman[394536]: 2025-10-11 09:23:43.97977811 +0000 UTC m=+0.042122329 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:23:44 np0005481065 systemd[1]: Started libpod-conmon-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope.
Oct 11 05:23:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd13644a85799175e493bbd75cfd83641987306523ad37c33480641586b4542/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:44 np0005481065 podman[394536]: 2025-10-11 09:23:44.13746098 +0000 UTC m=+0.199805199 container init 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:23:44 np0005481065 podman[394536]: 2025-10-11 09:23:44.145635751 +0000 UTC m=+0.207979920 container start 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:23:44 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : New worker (394557) forked
Oct 11 05:23:44 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : Loading success.
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.232 2 DEBUG nova.network.neutron [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.248 2 DEBUG oslo_concurrency.lockutils [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.340 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174624.3390472, ef21f945-0076-48fa-8d22-c5376e26d278 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.340 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Started (Lifecycle Event)#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.366 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.371 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174624.3397129, ef21f945-0076-48fa-8d22-c5376e26d278 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.371 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.387 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.391 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.413 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.470 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Successfully updated port: ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.512 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.513 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.513 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:23:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 130 op/s
Oct 11 05:23:44 np0005481065 nova_compute[260935]: 2025-10-11 09:23:44.849 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.250 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully updated port: f45a21da-34ce-448b-92cc-2639a191c755 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.270 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.270 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.270 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.518 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.535 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.535 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.536 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:23:45 np0005481065 nova_compute[260935]: 2025-10-11 09:23:45.536 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 05:23:45 np0005481065 podman[394566]: 2025-10-11 09:23:45.822593396 +0000 UTC m=+0.116027555 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.108 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.234 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.234 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.234 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Processing event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.236 2 WARNING nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state shelved_offloaded and task_state spawning.
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.237 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing instance network info cache due to event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.237 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.237 2 DEBUG nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.244 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174626.2443242, ef21f945-0076-48fa-8d22-c5376e26d278 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.244 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Resumed (Lifecycle Event)
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.246 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.249 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance spawned successfully.
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.431 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.442 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:23:46 np0005481065 nova_compute[260935]: 2025-10-11 09:23:46.492 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:23:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 130 op/s
Oct 11 05:23:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 11 05:23:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 11 05:23:46 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.151 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.181 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.182 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance network_info: |[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.182 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.182 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.184 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start _get_guest_xml network_info=[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.188 2 WARNING nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.193 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.194 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.198 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.199 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.199 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.200 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.201 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.201 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.201 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.202 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.202 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.202 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.203 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.203 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.204 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.204 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.209 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.538 2 DEBUG nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.630 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:23:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:23:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391666843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.778 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.799 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:23:47 np0005481065 nova_compute[260935]: 2025-10-11 09:23:47.804 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:23:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:23:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914209808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.256 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.259 2 DEBUG nova.virt.libvirt.vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659866276',display_name='tempest-TestNetworkBasicOps-server-1659866276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659866276',id=123,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5xFqm8IPc+0j24PucHdWviLqVsYMHOSZIhiNWh/27UIL+IzWKFcQXG0H5lF53N0KOCicqByqWqQ/NMVHseguyx/gUQwyqXnA+qdRlYqxWLbxA13mTZNbIWDUUTRGaKUQ==',key_name='tempest-TestNetworkBasicOps-1951063949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-slz4kjjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:41Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=527bff5a-2d35-406a-8702-f80298a22342,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.260 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.262 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.264 2 DEBUG nova.objects.instance [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 527bff5a-2d35-406a-8702-f80298a22342 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.289 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <uuid>527bff5a-2d35-406a-8702-f80298a22342</uuid>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <name>instance-0000007b</name>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1659866276</nova:name>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:23:47</nova:creationTime>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <nova:port uuid="ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <entry name="serial">527bff5a-2d35-406a-8702-f80298a22342</entry>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <entry name="uuid">527bff5a-2d35-406a-8702-f80298a22342</entry>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/527bff5a-2d35-406a-8702-f80298a22342_disk">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/527bff5a-2d35-406a-8702-f80298a22342_disk.config">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:d6:ce:63"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <target dev="tapba6bfb6c-8b"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/console.log" append="off"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:23:48 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:23:48 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:23:48 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:23:48 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.290 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Preparing to wait for external event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.291 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.291 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.292 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.293 2 DEBUG nova.virt.libvirt.vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659866276',display_name='tempest-TestNetworkBasicOps-server-1659866276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659866276',id=123,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5xFqm8IPc+0j24PucHdWviLqVsYMHOSZIhiNWh/27UIL+IzWKFcQXG0H5lF53N0KOCicqByqWqQ/NMVHseguyx/gUQwyqXnA+qdRlYqxWLbxA13mTZNbIWDUUTRGaKUQ==',key_name='tempest-TestNetworkBasicOps-1951063949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-slz4kjjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:41Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=527bff5a-2d35-406a-8702-f80298a22342,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.294 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.295 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.295 2 DEBUG os_vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba6bfb6c-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.308 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba6bfb6c-8b, col_values=(('external_ids', {'iface-id': 'ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:63', 'vm-uuid': '527bff5a-2d35-406a-8702-f80298a22342'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:48 np0005481065 NetworkManager[44960]: <info>  [1760174628.3114] manager: (tapba6bfb6c-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.323 2 INFO os_vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.402 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.403 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.404 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:d6:ce:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.405 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Using config drive#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.444 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.646 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updated VIF entry in instance network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.647 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.666 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.667 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.667 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-f45a21da-34ce-448b-92cc-2639a191c755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:23:48 np0005481065 nova_compute[260935]: 2025-10-11 09:23:48.667 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.8 MiB/s wr, 218 op/s
Oct 11 05:23:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 11 05:23:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 11 05:23:49 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.172 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Creating config drive at /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.176 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp82fbrooc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.320 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp82fbrooc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.359 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.364 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config 527bff5a-2d35-406a-8702-f80298a22342_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.551 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config 527bff5a-2d35-406a-8702-f80298a22342_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.553 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deleting local config drive /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config because it was imported into RBD.#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.555 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.580 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.580 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance network_info: |[{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.581 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.581 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port f45a21da-34ce-448b-92cc-2639a191c755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.588 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start _get_guest_xml network_info=[{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.603 2 WARNING nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.614 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.616 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.637 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.637 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.638 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.638 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.641 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.645 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:49 np0005481065 kernel: tapba6bfb6c-8b: entered promiscuous mode
Oct 11 05:23:49 np0005481065 NetworkManager[44960]: <info>  [1760174629.6571] manager: (tapba6bfb6c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Oct 11 05:23:49 np0005481065 systemd-udevd[394721]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:23:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:49Z|01292|binding|INFO|Claiming lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for this chassis.
Oct 11 05:23:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:49Z|01293|binding|INFO|ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca: Claiming fa:16:3e:d6:ce:63 10.100.0.5
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.694 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.697 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 bound to our chassis#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.700 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2#033[00m
Oct 11 05:23:49 np0005481065 NetworkManager[44960]: <info>  [1760174629.7030] device (tapba6bfb6c-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:23:49 np0005481065 NetworkManager[44960]: <info>  [1760174629.7050] device (tapba6bfb6c-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:23:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:49Z|01294|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca ovn-installed in OVS
Oct 11 05:23:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:49Z|01295|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca up in Southbound
Oct 11 05:23:49 np0005481065 nova_compute[260935]: 2025-10-11 09:23:49.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.720 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67410d3e-0135-4245-a6ac-b2b3d515bc51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.720 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7aa5898-d1 in ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.722 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7aa5898-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.722 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[296c97f8-3a20-4a37-a8a4-32a5de2080c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.724 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3268f57c-a114-4621-859b-860109071335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 systemd-machined[215705]: New machine qemu-146-instance-0000007b.
Oct 11 05:23:49 np0005481065 systemd[1]: Started Virtual Machine qemu-146-instance-0000007b.
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.743 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[58d7bcc1-c863-4e02-b797-abcd9d14f53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.768 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23ff34f6-d990-45ec-8950-902d4ba45d63]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.815 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[23cb0eb8-4cb5-4f60-bc46-368d277b8eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bd41df57-8ae2-4d3b-9e53-e32b646ba9d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 NetworkManager[44960]: <info>  [1760174629.8274] manager: (tapa7aa5898-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/515)
Oct 11 05:23:49 np0005481065 systemd-udevd[394723]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.872 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b846da9f-d02e-433f-9ba9-df9be7140952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.879 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6c5e19-fcbf-4725-835b-e17058e4ad6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 NetworkManager[44960]: <info>  [1760174629.9120] device (tapa7aa5898-d0): carrier: link connected
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.924 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d74f02-e007-46d4-84e8-bae8f72c5bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f80c8aa-eecb-45c6-8adb-6e1ae8f0a397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647461, 'reachable_time': 32882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394776, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.975 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16b39acb-a2f7-44a7-a57c-fd63bfad00e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:e28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647461, 'tstamp': 647461}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394777, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.010 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dd64c3-025d-40a2-91e1-bc5979828029]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647461, 'reachable_time': 32882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394778, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d056461f-8a81-4997-bf33-c50973f21176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:23:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869300691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9549a83c-7292-4fec-8734-1ee0f2b2845c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.184 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.185 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.185 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7aa5898-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:50 np0005481065 NetworkManager[44960]: <info>  [1760174630.1876] manager: (tapa7aa5898-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Oct 11 05:23:50 np0005481065 kernel: tapa7aa5898-d0: entered promiscuous mode
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7aa5898-d0, col_values=(('external_ids', {'iface-id': 'aaa7cd6b-360e-44dd-8a57-64ba5457b964'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:50Z|01296|binding|INFO|Releasing lport aaa7cd6b-360e-44dd-8a57-64ba5457b964 from this chassis (sb_readonly=0)
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.199 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[675b87b6-1284-48f6-ac2f-34066a9a7b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.202 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.202 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'env', 'PROCESS_TAG=haproxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.214 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.248 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.254 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.374 2 DEBUG nova.compute.manager [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.375 2 DEBUG oslo_concurrency.lockutils [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.376 2 DEBUG oslo_concurrency.lockutils [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.376 2 DEBUG oslo_concurrency.lockutils [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.377 2 DEBUG nova.compute.manager [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Processing event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.485 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:50 np0005481065 podman[394891]: 2025-10-11 09:23:50.665777134 +0000 UTC m=+0.073721802 container create a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:23:50 np0005481065 podman[394891]: 2025-10-11 09:23:50.633360429 +0000 UTC m=+0.041305187 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:23:50 np0005481065 systemd[1]: Started libpod-conmon-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope.
Oct 11 05:23:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19190283d7b8624e3bb625f0ca81497210fb38475de1d71735aa96791be754ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:23:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069791840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:23:50 np0005481065 podman[394891]: 2025-10-11 09:23:50.770720355 +0000 UTC m=+0.178665023 container init a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:23:50 np0005481065 podman[394891]: 2025-10-11 09:23:50.781062997 +0000 UTC m=+0.189007665 container start a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.792 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.794 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.794 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.795 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.795 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.796 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.796 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.797 2 DEBUG nova.objects.instance [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 85cf93a0-2068-4567-a399-b8d52e672913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:50 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : New worker (394914) forked
Oct 11 05:23:50 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : Loading success.
Oct 11 05:23:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 115 op/s
Oct 11 05:23:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.856 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.915 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.917 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174630.9149716, 527bff5a-2d35-406a-8702-f80298a22342 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.917 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Started (Lifecycle Event)#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.921 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.929 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <uuid>85cf93a0-2068-4567-a399-b8d52e672913</uuid>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <name>instance-0000007a</name>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-145104929</nova:name>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:23:49</nova:creationTime>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:port uuid="85875c6f-3380-42bc-9e4b-0b66df391e0d">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <nova:port uuid="f45a21da-34ce-448b-92cc-2639a191c755">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef6:a91e" ipVersion="6"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef6:a91e" ipVersion="6"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <entry name="serial">85cf93a0-2068-4567-a399-b8d52e672913</entry>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <entry name="uuid">85cf93a0-2068-4567-a399-b8d52e672913</entry>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/85cf93a0-2068-4567-a399-b8d52e672913_disk">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/85cf93a0-2068-4567-a399-b8d52e672913_disk.config">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:81:f1:c1"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <target dev="tap85875c6f-33"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:f6:a9:1e"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <target dev="tapf45a21da-34"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/console.log" append="off"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:23:50 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:23:50 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:23:50 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:23:50 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.930 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Preparing to wait for external event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.930 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.931 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.931 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.931 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Preparing to wait for external event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.932 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.932 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.932 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.934 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.934 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.936 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.937 2 DEBUG os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.944 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.945 2 INFO nova.virt.libvirt.driver [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance spawned successfully.#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.946 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85875c6f-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85875c6f-33, col_values=(('external_ids', {'iface-id': '85875c6f-3380-42bc-9e4b-0b66df391e0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:f1:c1', 'vm-uuid': '85cf93a0-2068-4567-a399-b8d52e672913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.954 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.980 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.980 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174630.915211, 527bff5a-2d35-406a-8702-f80298a22342 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.981 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:23:50 np0005481065 NetworkManager[44960]: <info>  [1760174630.9867] manager: (tap85875c6f-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.995 2 INFO os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33')#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.996 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.997 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:50 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.999 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:50.999 2 DEBUG os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.001 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.003 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.003 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.004 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.005 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.006 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.006 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.014 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf45a21da-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf45a21da-34, col_values=(('external_ids', {'iface-id': 'f45a21da-34ce-448b-92cc-2639a191c755', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:a9:1e', 'vm-uuid': '85cf93a0-2068-4567-a399-b8d52e672913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:51 np0005481065 NetworkManager[44960]: <info>  [1760174631.0194] manager: (tapf45a21da-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.025 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174630.920565, 527bff5a-2d35-406a-8702-f80298a22342 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.025 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.028 2 INFO os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34')#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.047 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.054 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.057 2 INFO nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 9.46 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.057 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.070 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.108 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.108 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.109 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:81:f1:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.109 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:f6:a9:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.110 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Using config drive#033[00m
Oct 11 05:23:51 np0005481065 podman[394927]: 2025-10-11 09:23:51.121735661 +0000 UTC m=+0.061702992 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:51 np0005481065 podman[394928]: 2025-10-11 09:23:51.150264856 +0000 UTC m=+0.088487868 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.151 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.174 2 INFO nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 11.00 seconds to build instance.#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.196 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.481 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Creating config drive at /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.486 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46u_0hqg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.663 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46u_0hqg" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.713 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.717 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config 85cf93a0-2068-4567-a399-b8d52e672913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.921 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config 85cf93a0-2068-4567-a399-b8d52e672913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.923 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deleting local config drive /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config because it was imported into RBD.#033[00m
Oct 11 05:23:51 np0005481065 kernel: tap85875c6f-33: entered promiscuous mode
Oct 11 05:23:51 np0005481065 NetworkManager[44960]: <info>  [1760174631.9825] manager: (tap85875c6f-33): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.982 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updated VIF entry in instance network info cache for port f45a21da-34ce-448b-92cc-2639a191c755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:23:51 np0005481065 systemd-udevd[394754]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.984 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:23:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:51Z|01297|binding|INFO|Claiming lport 85875c6f-3380-42bc-9e4b-0b66df391e0d for this chassis.
Oct 11 05:23:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:51Z|01298|binding|INFO|85875c6f-3380-42bc-9e4b-0b66df391e0d: Claiming fa:16:3e:81:f1:c1 10.100.0.14
Oct 11 05:23:51 np0005481065 nova_compute[260935]: 2025-10-11 09:23:51.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.0028] device (tap85875c6f-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.0035] device (tap85875c6f-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.006 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:f1:c1 10.100.0.14'], port_security=['fa:16:3e:81:f1:c1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=85875c6f-3380-42bc-9e4b-0b66df391e0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.007 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 85875c6f-3380-42bc-9e4b-0b66df391e0d in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 bound to our chassis#033[00m
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.0113] manager: (tapf45a21da-34): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.010 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.014 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4#033[00m
Oct 11 05:23:52 np0005481065 kernel: tapf45a21da-34: entered promiscuous mode
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01299|binding|INFO|Claiming lport f45a21da-34ce-448b-92cc-2639a191c755 for this chassis.
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01300|binding|INFO|f45a21da-34ce-448b-92cc-2639a191c755: Claiming fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01301|binding|INFO|Setting lport 85875c6f-3380-42bc-9e4b-0b66df391e0d ovn-installed in OVS
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01302|binding|INFO|Setting lport 85875c6f-3380-42bc-9e4b-0b66df391e0d up in Southbound
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.0289] device (tapf45a21da-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.0296] device (tapf45a21da-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[303e7948-784d-4fab-acb1-81c868435521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.033 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], port_security=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef6:a91e/64 2001:db8::f816:3eff:fef6:a91e/64', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f45a21da-34ce-448b-92cc-2639a191c755) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.034 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdd3c547-e1 in ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.037 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdd3c547-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.037 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af29a287-b64f-41ed-8f96-9bd2eeec7691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.038 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[153d807a-8f9f-479d-a9cd-bd550d9dc56e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01303|binding|INFO|Setting lport f45a21da-34ce-448b-92cc-2639a191c755 ovn-installed in OVS
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01304|binding|INFO|Setting lport f45a21da-34ce-448b-92cc-2639a191c755 up in Southbound
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.063 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3113b5c0-c633-4e96-9b3d-624f47c1885b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 systemd-machined[215705]: New machine qemu-147-instance-0000007a.
Oct 11 05:23:52 np0005481065 systemd[1]: Started Virtual Machine qemu-147-instance-0000007a.
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f715a3a-64ec-4c9b-b108-e88d4ac46891]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.129 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[87642fe1-6bc8-480b-8df5-ba3145454d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.1350] manager: (tapcdd3c547-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/521)
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71ac57df-9c10-4888-888a-dd88fc550918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.167 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cd887db4-3c65-4950-8566-ab494f20bcfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.172 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5c27a7-def7-46b4-9491-7da4e3f99732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.1970] device (tapcdd3c547-e0): carrier: link connected
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.202 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3818228c-5c8c-4d8b-aa0c-d2e08fee9906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.219 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[868a83f2-d862-4712-8ecf-de484aa06c89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395061, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f497c93-3624-4a3a-81ae-12165bf7175d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:b80d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647689, 'tstamp': 647689}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395062, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.264 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9148730e-c565-4e49-957c-6bb2311eaeb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395063, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.308 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[714bcac1-62be-48ee-ab02-531cd919ae11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.354 2 DEBUG nova.compute.manager [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.355 2 DEBUG oslo_concurrency.lockutils [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.355 2 DEBUG oslo_concurrency.lockutils [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.356 2 DEBUG oslo_concurrency.lockutils [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.356 2 DEBUG nova.compute.manager [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Processing event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c964ba5d-293a-4f57-8251-e0f1730d5b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.390 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.390 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.390 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd3c547-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 NetworkManager[44960]: <info>  [1760174632.3933] manager: (tapcdd3c547-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Oct 11 05:23:52 np0005481065 kernel: tapcdd3c547-e0: entered promiscuous mode
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd3c547-e0, col_values=(('external_ids', {'iface-id': 'cd117be9-e2e2-4d92-9c01-a0f37b7175b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:52Z|01305|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.426 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.427 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[26f2e327-1883-456c-87f1-6e697e586668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.428 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.pid.haproxy
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.429 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'env', 'PROCESS_TAG=haproxy-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.464 2 DEBUG nova.compute.manager [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.464 2 DEBUG oslo_concurrency.lockutils [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.465 2 DEBUG oslo_concurrency.lockutils [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.465 2 DEBUG oslo_concurrency.lockutils [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.465 2 DEBUG nova.compute.manager [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:52 np0005481065 nova_compute[260935]: 2025-10-11 09:23:52.466 2 WARNING nova.compute.manager [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state None.#033[00m
Oct 11 05:23:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 58 KiB/s wr, 252 op/s
Oct 11 05:23:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.858 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:52 np0005481065 podman[395095]: 2025-10-11 09:23:52.857431854 +0000 UTC m=+0.077642862 container create ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:23:52 np0005481065 podman[395095]: 2025-10-11 09:23:52.818066543 +0000 UTC m=+0.038277601 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:23:52 np0005481065 systemd[1]: Started libpod-conmon-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c.scope.
Oct 11 05:23:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9ee5906b3e37b7221b27948dcaeafcd376d35672ef3e4c15b762b77c22192/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:52 np0005481065 podman[395095]: 2025-10-11 09:23:52.967254704 +0000 UTC m=+0.187465762 container init ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:23:52 np0005481065 podman[395095]: 2025-10-11 09:23:52.974334093 +0000 UTC m=+0.194545061 container start ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 05:23:52 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : New worker (395128) forked
Oct 11 05:23:52 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : Loading success.
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.051 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f45a21da-34ce-448b-92cc-2639a191c755 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.055 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0bc2c62-89ab-4ce1-9157-2273788b9018#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54191dc7-09f9-45ff-9a8e-582cc9b39297]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.085 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0bc2c62-81 in ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.087 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0bc2c62-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a42a8d5d-a1d3-4f71-b951-3951cca9da24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9311f2f0-9b65-4dcf-8b02-406d53c2132d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.103 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[22ed617c-41e9-4c22-9099-2f341718b431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.129 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f980627a-79f5-40d4-a6c1-0dac0d28f421]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.165 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfbf6bd-5f71-4303-8a41-5478f109c882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 NetworkManager[44960]: <info>  [1760174633.1732] manager: (tapf0bc2c62-80): new Veth device (/org/freedesktop/NetworkManager/Devices/523)
Oct 11 05:23:53 np0005481065 systemd-udevd[395168]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[91dbf7a1-543d-48c8-a0be-31033fea6cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.225 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49dc4a38-75e7-4554-bd24-d146a11212ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.231 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff56465-4421-40d0-8826-f17e3b2698fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 NetworkManager[44960]: <info>  [1760174633.2635] device (tapf0bc2c62-80): carrier: link connected
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.268 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[448cdd13-feaf-4df2-8038-3c320af70ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a438ad-d75f-423a-a539-85c3ba8d498f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395194, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.327 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2906475-9771-4ec4-b8c5-1f7570bc6314]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:2300'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647796, 'tstamp': 647796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395195, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.350 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[974877a0-6a32-401a-a576-b70576f07966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395196, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.393 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7b33e3-bc6a-47d4-8f2c-60c28de49e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.430 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c343d36-5cae-4c70-852d-20af71ecf68c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.431 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.431 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.432 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0bc2c62-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:53 np0005481065 NetworkManager[44960]: <info>  [1760174633.4769] manager: (tapf0bc2c62-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Oct 11 05:23:53 np0005481065 kernel: tapf0bc2c62-80: entered promiscuous mode
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.479 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0bc2c62-80, col_values=(('external_ids', {'iface-id': 'c1fabca0-ae77-4e48-b93b-3023955db235'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:53 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:53Z|01306|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.505 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0bc2c62-89ab-4ce1-9157-2273788b9018.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0bc2c62-89ab-4ce1-9157-2273788b9018.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f59b4617-c20d-4791-b79e-c0bc3d95c8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.507 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/f0bc2c62-89ab-4ce1-9157-2273788b9018.pid.haproxy
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:23:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.507 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'env', 'PROCESS_TAG=haproxy-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0bc2c62-89ab-4ce1-9157-2273788b9018.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.797 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174633.7969532, 85cf93a0-2068-4567-a399-b8d52e672913 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.798 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Started (Lifecycle Event)#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.815 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.817 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174633.7980013, 85cf93a0-2068-4567-a399-b8d52e672913 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.818 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.832 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:23:53 np0005481065 nova_compute[260935]: 2025-10-11 09:23:53.855 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:23:53 np0005481065 podman[395227]: 2025-10-11 09:23:53.868722033 +0000 UTC m=+0.043320333 container create 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:53 np0005481065 systemd[1]: Started libpod-conmon-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4.scope.
Oct 11 05:23:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:23:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a18029a52b2d99369687829c9fae311726c99a73db60edcfbb05990f0b5022/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:23:53 np0005481065 podman[395227]: 2025-10-11 09:23:53.848838012 +0000 UTC m=+0.023436322 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:23:53 np0005481065 podman[395227]: 2025-10-11 09:23:53.952725024 +0000 UTC m=+0.127323334 container init 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:53 np0005481065 podman[395227]: 2025-10-11 09:23:53.959307139 +0000 UTC m=+0.133905429 container start 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 05:23:53 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : New worker (395249) forked
Oct 11 05:23:53 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : Loading success.
Oct 11 05:23:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.530 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.531 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.531 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.532 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.532 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No event matching network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d in dict_keys([('network-vif-plugged', 'f45a21da-34ce-448b-92cc-2639a191c755')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.532 2 WARNING nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received unexpected event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.533 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.533 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.533 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.534 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.534 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Processing event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.534 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.535 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.535 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.536 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.536 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No waiting events found dispatching network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.536 2 WARNING nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received unexpected event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.537 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.547 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174634.546722, 85cf93a0-2068-4567-a399-b8d52e672913 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.548 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.549 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.572 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.573 2 INFO nova.virt.libvirt.driver [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance spawned successfully.#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.574 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.577 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.597 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.597 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.598 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.598 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.599 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.599 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.603 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.655 2 INFO nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 18.19 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.656 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.765 2 INFO nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 20.14 seconds to build instance.#033[00m
Oct 11 05:23:54 np0005481065 nova_compute[260935]: 2025-10-11 09:23:54.789 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 58 KiB/s wr, 252 op/s
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:23:54
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 11 05:23:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:23:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:23:56 np0005481065 nova_compute[260935]: 2025-10-11 09:23:56.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:56 np0005481065 nova_compute[260935]: 2025-10-11 09:23:56.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:23:56 np0005481065 nova_compute[260935]: 2025-10-11 09:23:56.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:23:56 np0005481065 nova_compute[260935]: 2025-10-11 09:23:56.727 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:23:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 47 KiB/s wr, 203 op/s
Oct 11 05:23:57 np0005481065 nova_compute[260935]: 2025-10-11 09:23:57.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.085 2 DEBUG nova.compute.manager [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.086 2 DEBUG nova.compute.manager [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing instance network info cache due to event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.087 2 DEBUG oslo_concurrency.lockutils [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.087 2 DEBUG oslo_concurrency.lockutils [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.087 2 DEBUG nova.network.neutron [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.264 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.265 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.266 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.267 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.267 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.269 2 INFO nova.compute.manager [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Terminating instance#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.271 2 DEBUG nova.compute.manager [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:23:58 np0005481065 kernel: tapba6bfb6c-8b (unregistering): left promiscuous mode
Oct 11 05:23:58 np0005481065 NetworkManager[44960]: <info>  [1760174638.3301] device (tapba6bfb6c-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01307|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=0)
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01308|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down in Southbound
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01309|binding|INFO|Removing iface tapba6bfb6c-8b ovn-installed in OVS
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.348 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.350 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.352 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.353 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2eac6d5c-8cb8-4f40-b97f-1d93da1ee74b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.354 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace which is not needed anymore#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 11 05:23:58 np0005481065 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000007b.scope: Consumed 8.435s CPU time.
Oct 11 05:23:58 np0005481065 systemd-machined[215705]: Machine qemu-146-instance-0000007b terminated.
Oct 11 05:23:58 np0005481065 kernel: tapba6bfb6c-8b: entered promiscuous mode
Oct 11 05:23:58 np0005481065 NetworkManager[44960]: <info>  [1760174638.4919] manager: (tapba6bfb6c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Oct 11 05:23:58 np0005481065 kernel: tapba6bfb6c-8b (unregistering): left promiscuous mode
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01310|binding|INFO|Claiming lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for this chassis.
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01311|binding|INFO|ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca: Claiming fa:16:3e:d6:ce:63 10.100.0.5
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.516 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:58 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : haproxy version is 2.8.14-c23fe91
Oct 11 05:23:58 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : path to executable is /usr/sbin/haproxy
Oct 11 05:23:58 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [WARNING]  (394912) : Exiting Master process...
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.537 2 INFO nova.virt.libvirt.driver [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance destroyed successfully.#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.537 2 DEBUG nova.objects.instance [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 527bff5a-2d35-406a-8702-f80298a22342 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:23:58 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [WARNING]  (394912) : Exiting Master process...
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01312|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca ovn-installed in OVS
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01313|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca up in Southbound
Oct 11 05:23:58 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [ALERT]    (394912) : Current worker (394914) exited with code 143 (Terminated)
Oct 11 05:23:58 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [WARNING]  (394912) : All workers exited. Exiting... (0)
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:08:14 10.100.0.10
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 systemd[1]: libpod-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope: Deactivated successfully.
Oct 11 05:23:58 np0005481065 conmon[394906]: conmon a287c3e2c1a13e36c7b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope/container/memory.events
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01314|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=1)
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01315|binding|INFO|Removing iface tapba6bfb6c-8b ovn-installed in OVS
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01316|if_status|INFO|Dropped 1 log messages in last 78 seconds (most recently, 78 seconds ago) due to excessive rate
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01317|if_status|INFO|Not setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down as sb is readonly
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01318|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=0)
Oct 11 05:23:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:23:58Z|01319|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down in Southbound
Oct 11 05:23:58 np0005481065 podman[395283]: 2025-10-11 09:23:58.553197122 +0000 UTC m=+0.090391612 container died a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.560 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.564 2 DEBUG nova.virt.libvirt.vif [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:23:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659866276',display_name='tempest-TestNetworkBasicOps-server-1659866276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659866276',id=123,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5xFqm8IPc+0j24PucHdWviLqVsYMHOSZIhiNWh/27UIL+IzWKFcQXG0H5lF53N0KOCicqByqWqQ/NMVHseguyx/gUQwyqXnA+qdRlYqxWLbxA13mTZNbIWDUUTRGaKUQ==',key_name='tempest-TestNetworkBasicOps-1951063949',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-slz4kjjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:51Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=527bff5a-2d35-406a-8702-f80298a22342,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.565 2 DEBUG nova.network.os_vif_util [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.566 2 DEBUG nova.network.os_vif_util [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.566 2 DEBUG os_vif [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba6bfb6c-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.579 2 INFO os_vif [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')#033[00m
Oct 11 05:23:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1-userdata-shm.mount: Deactivated successfully.
Oct 11 05:23:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-19190283d7b8624e3bb625f0ca81497210fb38475de1d71735aa96791be754ca-merged.mount: Deactivated successfully.
Oct 11 05:23:58 np0005481065 podman[395283]: 2025-10-11 09:23:58.627769027 +0000 UTC m=+0.164963517 container cleanup a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:23:58 np0005481065 systemd[1]: libpod-conmon-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope: Deactivated successfully.
Oct 11 05:23:58 np0005481065 podman[395339]: 2025-10-11 09:23:58.701259281 +0000 UTC m=+0.050192818 container remove a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0136503-8e4b-4079-8b9a-39b1d215544f]: (4, ('Sat Oct 11 09:23:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1)\na287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1\nSat Oct 11 09:23:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1)\na287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.714 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c266f070-8ed7-46b2-b08c-a2e539df5414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.715 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:23:58 np0005481065 kernel: tapa7aa5898-d0: left promiscuous mode
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cdd4a9f-531d-4835-838f-4116219d4bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 nova_compute[260935]: 2025-10-11 09:23:58.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.761 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38edcec9-4573-4b9f-99ad-8782483f9745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.764 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d16754d-3976-4186-964c-dcc2a8f8bc5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.790 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[264b4710-6cd9-49ec-81a7-81a68ec0ec89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647450, 'reachable_time': 36469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395354, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 systemd[1]: run-netns-ovnmeta\x2da7aa5898\x2ddcd8\x2d41c5\x2d81e2\x2d5c41061a1fb2.mount: Deactivated successfully.
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.795 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.795 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[911d2a12-5e1f-4004-b832-72a913101edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.796 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.800 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74b365f7-0a65-4f5a-8f94-35df51782449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.801 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.805 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:23:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0df4cf-13e3-4399-9499-190590003fb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:23:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 45 KiB/s wr, 232 op/s
Oct 11 05:23:59 np0005481065 nova_compute[260935]: 2025-10-11 09:23:59.020 2 INFO nova.virt.libvirt.driver [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deleting instance files /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342_del#033[00m
Oct 11 05:23:59 np0005481065 nova_compute[260935]: 2025-10-11 09:23:59.021 2 INFO nova.virt.libvirt.driver [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deletion of /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342_del complete#033[00m
Oct 11 05:23:59 np0005481065 nova_compute[260935]: 2025-10-11 09:23:59.083 2 INFO nova.compute.manager [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:23:59 np0005481065 nova_compute[260935]: 2025-10-11 09:23:59.084 2 DEBUG oslo.service.loopingcall [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:23:59 np0005481065 nova_compute[260935]: 2025-10-11 09:23:59.084 2 DEBUG nova.compute.manager [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:23:59 np0005481065 nova_compute[260935]: 2025-10-11 09:23:59.084 2 DEBUG nova.network.neutron [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:23:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:00 np0005481065 nova_compute[260935]: 2025-10-11 09:24:00.333 2 DEBUG nova.compute.manager [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:00 np0005481065 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG oslo_concurrency.lockutils [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:00 np0005481065 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG oslo_concurrency.lockutils [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:00 np0005481065 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG oslo_concurrency.lockutils [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:00 np0005481065 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG nova.compute.manager [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:24:00 np0005481065 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG nova.compute.manager [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:24:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 39 KiB/s wr, 198 op/s
Oct 11 05:24:01 np0005481065 nova_compute[260935]: 2025-10-11 09:24:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.060443) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642060479, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2089, "num_deletes": 253, "total_data_size": 3286330, "memory_usage": 3336688, "flush_reason": "Manual Compaction"}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642074390, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3229379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49694, "largest_seqno": 51782, "table_properties": {"data_size": 3219977, "index_size": 5896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19631, "raw_average_key_size": 20, "raw_value_size": 3201015, "raw_average_value_size": 3324, "num_data_blocks": 260, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174429, "oldest_key_time": 1760174429, "file_creation_time": 1760174642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 13991 microseconds, and 7684 cpu microseconds.
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.074431) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3229379 bytes OK
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.074452) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.076785) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.076800) EVENT_LOG_v1 {"time_micros": 1760174642076794, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.076848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3277550, prev total WAL file size 3277550, number of live WAL files 2.
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.077682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3153KB)], [116(8325KB)]
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642077708, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11754680, "oldest_snapshot_seqno": -1}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7316 keys, 10030321 bytes, temperature: kUnknown
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642154290, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10030321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9981523, "index_size": 29422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 189366, "raw_average_key_size": 25, "raw_value_size": 9850741, "raw_average_value_size": 1346, "num_data_blocks": 1152, "num_entries": 7316, "num_filter_entries": 7316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.154574) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10030321 bytes
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.157735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7838, records dropped: 522 output_compression: NoCompression
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.157753) EVENT_LOG_v1 {"time_micros": 1760174642157744, "job": 70, "event": "compaction_finished", "compaction_time_micros": 76698, "compaction_time_cpu_micros": 28042, "output_level": 6, "num_output_files": 1, "total_output_size": 10030321, "num_input_records": 7838, "num_output_records": 7316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642158570, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642161411, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.077625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:02 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.494 2 WARNING nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.495 2 WARNING nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:24:02 np0005481065 nova_compute[260935]: 2025-10-11 09:24:02.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 39 KiB/s wr, 241 op/s
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.061 2 DEBUG nova.network.neutron [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.075 2 INFO nova.compute.manager [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 3.99 seconds to deallocate network for instance.#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.156 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.157 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.298 2 DEBUG nova.network.neutron [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updated VIF entry in instance network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.298 2 DEBUG nova.network.neutron [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.315 2 DEBUG oslo_concurrency.lockutils [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.317 2 DEBUG oslo_concurrency.processutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.466 2 DEBUG nova.compute.manager [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.467 2 DEBUG nova.compute.manager [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.467 2 DEBUG oslo_concurrency.lockutils [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.468 2 DEBUG oslo_concurrency.lockutils [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.468 2 DEBUG nova.network.neutron [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/36327719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.774 2 DEBUG oslo_concurrency.processutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.783 2 DEBUG nova.compute.provider_tree [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.829 2 DEBUG nova.scheduler.client.report [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.865 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:03 np0005481065 nova_compute[260935]: 2025-10-11 09:24:03.901 2 INFO nova.scheduler.client.report [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 527bff5a-2d35-406a-8702-f80298a22342#033[00m
Oct 11 05:24:04 np0005481065 nova_compute[260935]: 2025-10-11 09:24:04.001 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 149 op/s
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.139 2 DEBUG nova.network.neutron [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updated VIF entry in instance network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.141 2 DEBUG nova.network.neutron [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.172 2 DEBUG oslo_concurrency.lockutils [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0037367807636582667 of space, bias 1.0, pg target 1.12103422909748 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:24:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:24:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.620 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.620 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.621 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.621 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.622 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.624 2 INFO nova.compute.manager [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Terminating instance#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.626 2 DEBUG nova.compute.manager [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.631 2 DEBUG nova.compute.manager [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.632 2 DEBUG nova.compute.manager [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.633 2 DEBUG oslo_concurrency.lockutils [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.633 2 DEBUG oslo_concurrency.lockutils [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:05 np0005481065 nova_compute[260935]: 2025-10-11 09:24:05.633 2 DEBUG nova.network.neutron [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:24:06 np0005481065 kernel: tap0f516e4b-c2 (unregistering): left promiscuous mode
Oct 11 05:24:06 np0005481065 NetworkManager[44960]: <info>  [1760174646.6924] device (tap0f516e4b-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:24:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:06Z|01320|binding|INFO|Releasing lport 0f516e4b-c284-4151-944c-8a7d98f695b5 from this chassis (sb_readonly=0)
Oct 11 05:24:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:06Z|01321|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 down in Southbound
Oct 11 05:24:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:06Z|01322|binding|INFO|Removing iface tap0f516e4b-c2 ovn-installed in OVS
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.725 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:24:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.726 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 unbound from our chassis#033[00m
Oct 11 05:24:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.730 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0007a0de-db42-4add-9b55-6d92ceffa860, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:24:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.733 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c246af82-a525-4be5-bf80-026a4858fccd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.733 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace which is not needed anymore#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:06 np0005481065 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 11 05:24:06 np0005481065 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000079.scope: Consumed 13.489s CPU time.
Oct 11 05:24:06 np0005481065 systemd-machined[215705]: Machine qemu-145-instance-00000079 terminated.
Oct 11 05:24:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 149 op/s
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.879 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.880 2 DEBUG nova.objects.instance [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'resources' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.913 2 DEBUG nova.virt.libvirt.vif [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:47Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.914 2 DEBUG nova.network.os_vif_util [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.915 2 DEBUG nova.network.os_vif_util [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.915 2 DEBUG os_vif [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f516e4b-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:06 np0005481065 nova_compute[260935]: 2025-10-11 09:24:06.925 2 INFO os_vif [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')#033[00m
Oct 11 05:24:07 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : haproxy version is 2.8.14-c23fe91
Oct 11 05:24:07 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : path to executable is /usr/sbin/haproxy
Oct 11 05:24:07 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [WARNING]  (394555) : Exiting Master process...
Oct 11 05:24:07 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [WARNING]  (394555) : Exiting Master process...
Oct 11 05:24:07 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [ALERT]    (394555) : Current worker (394557) exited with code 143 (Terminated)
Oct 11 05:24:07 np0005481065 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [WARNING]  (394555) : All workers exited. Exiting... (0)
Oct 11 05:24:07 np0005481065 systemd[1]: libpod-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope: Deactivated successfully.
Oct 11 05:24:07 np0005481065 conmon[394551]: conmon 22cf63d83641e03cba16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope/container/memory.events
Oct 11 05:24:07 np0005481065 podman[395403]: 2025-10-11 09:24:07.135568563 +0000 UTC m=+0.293494174 container died 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2-userdata-shm.mount: Deactivated successfully.
Oct 11 05:24:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1fd13644a85799175e493bbd75cfd83641987306523ad37c33480641586b4542-merged.mount: Deactivated successfully.
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.740 2 DEBUG nova.compute.manager [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.740 2 DEBUG oslo_concurrency.lockutils [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.741 2 DEBUG oslo_concurrency.lockutils [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.741 2 DEBUG oslo_concurrency.lockutils [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.741 2 DEBUG nova.compute.manager [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.742 2 DEBUG nova.compute.manager [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.785 2 DEBUG nova.network.neutron [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.785 2 DEBUG nova.network.neutron [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:07 np0005481065 nova_compute[260935]: 2025-10-11 09:24:07.807 2 DEBUG oslo_concurrency.lockutils [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:07 np0005481065 podman[395403]: 2025-10-11 09:24:07.974651911 +0000 UTC m=+1.132577522 container cleanup 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 05:24:07 np0005481065 systemd[1]: libpod-conmon-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope: Deactivated successfully.
Oct 11 05:24:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 476 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 182 op/s
Oct 11 05:24:08 np0005481065 podman[395463]: 2025-10-11 09:24:08.968259309 +0000 UTC m=+0.955967217 container remove 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:24:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:08.978 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfb0247-47df-4981-b96f-add7c1d88a5a]: (4, ('Sat Oct 11 09:24:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2)\n22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2\nSat Oct 11 09:24:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2)\n22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:08.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30b8f885-5e88-4f2c-9a0a-8980da76bdcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:08.982 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:08 np0005481065 nova_compute[260935]: 2025-10-11 09:24:08.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:08 np0005481065 kernel: tap0007a0de-d0: left promiscuous mode
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.010 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7979659f-61e2-4844-8eaf-a1f76b8b57e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.038 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe42606d-7846-473d-a58e-22a521333620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.039 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4355a677-85f9-4afb-b2d9-bd869753c058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.062 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16797d34-a1f2-4eb4-b255-970dc0c2cd57]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646786, 'reachable_time': 20036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395479, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:09 np0005481065 systemd[1]: run-netns-ovnmeta\x2d0007a0de\x2ddb42\x2d4add\x2d9b55\x2d6d92ceffa860.mount: Deactivated successfully.
Oct 11 05:24:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.066 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:24:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.066 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2d79e4-32ca-411d-9fb4-9dc462d9788c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.582 2 INFO nova.virt.libvirt.driver [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting instance files /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.582 2 INFO nova.virt.libvirt.driver [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deletion of /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del complete#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.663 2 INFO nova.compute.manager [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 4.04 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.664 2 DEBUG oslo.service.loopingcall [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.665 2 DEBUG nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.665 2 DEBUG nova.network.neutron [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.842 2 DEBUG nova.compute.manager [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.843 2 DEBUG oslo_concurrency.lockutils [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.843 2 DEBUG oslo_concurrency.lockutils [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.844 2 DEBUG oslo_concurrency.lockutils [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.844 2 DEBUG nova.compute.manager [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:24:09 np0005481065 nova_compute[260935]: 2025-10-11 09:24:09.844 2 WARNING nova.compute.manager [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:24:10 np0005481065 nova_compute[260935]: 2025-10-11 09:24:10.290 2 DEBUG nova.network.neutron [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:10 np0005481065 nova_compute[260935]: 2025-10-11 09:24:10.317 2 INFO nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct 11 05:24:10 np0005481065 nova_compute[260935]: 2025-10-11 09:24:10.393 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:10 np0005481065 nova_compute[260935]: 2025-10-11 09:24:10.394 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:10 np0005481065 nova_compute[260935]: 2025-10-11 09:24:10.539 2 DEBUG oslo_concurrency.processutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 476 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Oct 11 05:24:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079503864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.125 2 DEBUG oslo_concurrency.processutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.131 2 DEBUG nova.compute.provider_tree [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.147 2 DEBUG nova.scheduler.client.report [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.170 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.214 2 INFO nova.scheduler.client.report [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Deleted allocations for instance ef21f945-0076-48fa-8d22-c5376e26d278#033[00m
Oct 11 05:24:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:11Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:f1:c1 10.100.0.14
Oct 11 05:24:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:11Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:f1:c1 10.100.0.14
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.299 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.426 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.426 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.443 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.532 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.533 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.543 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.543 2 INFO nova.compute.claims [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:24:11 np0005481065 podman[395503]: 2025-10-11 09:24:11.781662355 +0000 UTC m=+0.080294806 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.838 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:11 np0005481065 nova_compute[260935]: 2025-10-11 09:24:11.956 2 DEBUG nova.compute.manager [req-a6087bb7-aaf1-4fbf-bede-1c483ed4ef53 req-a2e91bb9-d7ca-425f-b398-b87370618de7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-deleted-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903550588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.300 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.306 2 DEBUG nova.compute.provider_tree [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.327 2 DEBUG nova.scheduler.client.report [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.361 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.362 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.435 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.436 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.456 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.474 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.559 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.561 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.561 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Creating image(s)#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.593 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.626 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.660 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.665 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.748 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.749 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.750 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.751 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.787 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:12 np0005481065 nova_compute[260935]: 2025-10-11 09:24:12.792 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 915 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.218 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.309 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.404 2 DEBUG nova.policy [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.456 2 DEBUG nova.objects.instance [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.476 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.477 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Ensure instance console log exists: /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.478 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.478 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.479 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.528 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174638.5274398, 527bff5a-2d35-406a-8702-f80298a22342 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.529 2 INFO nova.compute.manager [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:24:13 np0005481065 nova_compute[260935]: 2025-10-11 09:24:13.557 2 DEBUG nova.compute.manager [None req-b78a5af4-7d30-4843-a9b8-20288124f355 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:24:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.652 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Successfully updated port: ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.684 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.684 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.685 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.793 2 DEBUG nova.compute.manager [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.794 2 DEBUG nova.compute.manager [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Refreshing instance network info cache due to event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.794 2 DEBUG oslo_concurrency.lockutils [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 11 05:24:14 np0005481065 nova_compute[260935]: 2025-10-11 09:24:14.912 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:24:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:16 np0005481065 podman[395734]: 2025-10-11 09:24:16.143043026 +0000 UTC m=+0.091605956 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:24:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:24:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:24:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 11 05:24:16 np0005481065 nova_compute[260935]: 2025-10-11 09:24:16.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:17 np0005481065 nova_compute[260935]: 2025-10-11 09:24:17.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev da967a8e-ce8a-4697-afc1-0f4d16586d3a does not exist
Oct 11 05:24:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev afad6b6f-3d20-420d-8de5-2b9c8b2a2c3f does not exist
Oct 11 05:24:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 54934954-23e6-46f4-a382-429e69934502 does not exist
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:24:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.147 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.166 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.166 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance network_info: |[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.167 2 DEBUG oslo_concurrency.lockutils [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.167 2 DEBUG nova.network.neutron [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Refreshing network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.169 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start _get_guest_xml network_info=[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.174 2 WARNING nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.179 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.179 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.182 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.183 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.183 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.183 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.186 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.186 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.188 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.466138556 +0000 UTC m=+0.049628181 container create 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:24:18 np0005481065 systemd[1]: Started libpod-conmon-4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3.scope.
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.442635313 +0000 UTC m=+0.026124928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:24:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.583135268 +0000 UTC m=+0.166624953 container init 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:24:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:24:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614254496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.595316702 +0000 UTC m=+0.178806327 container start 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.600110247 +0000 UTC m=+0.183599922 container attach 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:24:18 np0005481065 focused_germain[396158]: 167 167
Oct 11 05:24:18 np0005481065 systemd[1]: libpod-4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3.scope: Deactivated successfully.
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.605279583 +0000 UTC m=+0.188769268 container died 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.618 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:24:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:24:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-92876428261ee7dbc1db31d19d0c93e3aa1081e77c2c18897b0c2dccc8b49b6c-merged.mount: Deactivated successfully.
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.651 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:18 np0005481065 nova_compute[260935]: 2025-10-11 09:24:18.658 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:18 np0005481065 podman[396142]: 2025-10-11 09:24:18.658437123 +0000 UTC m=+0.241926748 container remove 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:24:18 np0005481065 systemd[1]: libpod-conmon-4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3.scope: Deactivated successfully.
Oct 11 05:24:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 11 05:24:18 np0005481065 podman[396221]: 2025-10-11 09:24:18.906986116 +0000 UTC m=+0.048151809 container create 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:24:18 np0005481065 systemd[1]: Started libpod-conmon-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope.
Oct 11 05:24:18 np0005481065 podman[396221]: 2025-10-11 09:24:18.881794866 +0000 UTC m=+0.022960529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:24:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:19 np0005481065 podman[396221]: 2025-10-11 09:24:19.026768057 +0000 UTC m=+0.167933780 container init 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:24:19 np0005481065 podman[396221]: 2025-10-11 09:24:19.039470325 +0000 UTC m=+0.180636008 container start 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:24:19 np0005481065 podman[396221]: 2025-10-11 09:24:19.044870618 +0000 UTC m=+0.186036371 container attach 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:24:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:24:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226472973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.181 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.185 2 DEBUG nova.virt.libvirt.vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1892767569',display_name='tempest-TestNetworkBasicOps-server-1892767569',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1892767569',id=124,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrnL5y5rmm40zHS3GRiZibnIxynw7Rihcq0RaK7G/33Ym/s8OYTLPyLC9bisJAK/wCL7YwT5B+iTgnNd5S0eO9bWMML8tCfmSgkHWu6rGCUSjC8WYx7kb6zW0E6al3ojg==',key_name='tempest-TestNetworkBasicOps-485184698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-i2l25idp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:12Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.186 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.187 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.189 2 DEBUG nova.objects.instance [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.429 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <uuid>32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1</uuid>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <name>instance-0000007c</name>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1892767569</nova:name>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:24:18</nova:creationTime>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <nova:port uuid="ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <entry name="serial">32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1</entry>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <entry name="uuid">32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1</entry>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:d6:ce:63"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <target dev="tapba6bfb6c-8b"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/console.log" append="off"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:24:19 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:24:19 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:24:19 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:24:19 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.432 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Preparing to wait for external event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.432 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.433 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.433 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.434 2 DEBUG nova.virt.libvirt.vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1892767569',display_name='tempest-TestNetworkBasicOps-server-1892767569',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1892767569',id=124,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrnL5y5rmm40zHS3GRiZibnIxynw7Rihcq0RaK7G/33Ym/s8OYTLPyLC9bisJAK/wCL7YwT5B+iTgnNd5S0eO9bWMML8tCfmSgkHWu6rGCUSjC8WYx7kb6zW0E6al3ojg==',key_name='tempest-TestNetworkBasicOps-485184698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-i2l25idp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:12Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.435 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.436 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.437 2 DEBUG os_vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba6bfb6c-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba6bfb6c-8b, col_values=(('external_ids', {'iface-id': 'ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:63', 'vm-uuid': '32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:19 np0005481065 NetworkManager[44960]: <info>  [1760174659.4486] manager: (tapba6bfb6c-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.455 2 INFO os_vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')#033[00m
Oct 11 05:24:19 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:19Z|01323|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 05:24:19 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:19Z|01324|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:24:19 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:19Z|01325|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 05:24:19 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:19Z|01326|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.605 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.605 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.606 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:d6:ce:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.606 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Using config drive#033[00m
Oct 11 05:24:19 np0005481065 nova_compute[260935]: 2025-10-11 09:24:19.630 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:20 np0005481065 practical_williams[396238]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:24:20 np0005481065 practical_williams[396238]: --> relative data size: 1.0
Oct 11 05:24:20 np0005481065 practical_williams[396238]: --> All data devices are unavailable
Oct 11 05:24:20 np0005481065 systemd[1]: libpod-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope: Deactivated successfully.
Oct 11 05:24:20 np0005481065 systemd[1]: libpod-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope: Consumed 1.042s CPU time.
Oct 11 05:24:20 np0005481065 podman[396290]: 2025-10-11 09:24:20.205159932 +0000 UTC m=+0.030545783 container died 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:24:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372-merged.mount: Deactivated successfully.
Oct 11 05:24:20 np0005481065 podman[396290]: 2025-10-11 09:24:20.282799673 +0000 UTC m=+0.108185554 container remove 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:24:20 np0005481065 systemd[1]: libpod-conmon-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope: Deactivated successfully.
Oct 11 05:24:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.0 MiB/s wr, 91 op/s
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.209 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Creating config drive at /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config#033[00m
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.220 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4zfj3_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.229532271 +0000 UTC m=+0.076933422 container create 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:24:21 np0005481065 systemd[1]: Started libpod-conmon-10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43.scope.
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.198501685 +0000 UTC m=+0.045902886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:24:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.349782244 +0000 UTC m=+0.197183395 container init 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.365348344 +0000 UTC m=+0.212749465 container start 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.369949814 +0000 UTC m=+0.217350985 container attach 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 05:24:21 np0005481065 dreamy_ishizaka[396470]: 167 167
Oct 11 05:24:21 np0005481065 systemd[1]: libpod-10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43.scope: Deactivated successfully.
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.37549763 +0000 UTC m=+0.222898801 container died 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.384 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4zfj3_c" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9ab8e3e7a896951404ae28ea76dcc06f5a68255963124d242ac3ce51f22714ba-merged.mount: Deactivated successfully.
Oct 11 05:24:21 np0005481065 podman[396460]: 2025-10-11 09:24:21.413928415 +0000 UTC m=+0.125614776 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.427 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.434 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:21 np0005481065 podman[396445]: 2025-10-11 09:24:21.439012363 +0000 UTC m=+0.286413484 container remove 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:24:21 np0005481065 podman[396463]: 2025-10-11 09:24:21.440265188 +0000 UTC m=+0.145155378 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 05:24:21 np0005481065 systemd[1]: libpod-conmon-10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43.scope: Deactivated successfully.
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.669 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.670 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deleting local config drive /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config because it was imported into RBD.#033[00m
Oct 11 05:24:21 np0005481065 podman[396570]: 2025-10-11 09:24:21.711330488 +0000 UTC m=+0.078886758 container create c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.717 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:21 np0005481065 NetworkManager[44960]: <info>  [1760174661.7662] manager: (tapba6bfb6c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/527)
Oct 11 05:24:21 np0005481065 systemd[1]: Started libpod-conmon-c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6.scope.
Oct 11 05:24:21 np0005481065 podman[396570]: 2025-10-11 09:24:21.678714337 +0000 UTC m=+0.046270697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:24:21 np0005481065 kernel: tapba6bfb6c-8b: entered promiscuous mode
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:21Z|01327|binding|INFO|Claiming lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for this chassis.
Oct 11 05:24:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:21Z|01328|binding|INFO|ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca: Claiming fa:16:3e:d6:ce:63 10.100.0.5
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.792 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '8', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.794 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 bound to our chassis#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.798 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d003b91e-ff62-4bfe-b60f-caf2f181b8eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.819 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7aa5898-d1 in ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:24:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:21Z|01329|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca ovn-installed in OVS
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.822 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7aa5898-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.822 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f74ee1b8-97dd-4c36-aeec-fe22e8682a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:21Z|01330|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca up in Southbound
Oct 11 05:24:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e957c6d-861d-4d85-aa61-6db12204bd2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.847 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0d06c9-d7bb-40f7-9bd6-368d50944b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 systemd-machined[215705]: New machine qemu-148-instance-0000007c.
Oct 11 05:24:21 np0005481065 systemd[1]: Started Virtual Machine qemu-148-instance-0000007c.
Oct 11 05:24:21 np0005481065 systemd-udevd[396606]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:24:21 np0005481065 podman[396570]: 2025-10-11 09:24:21.871338293 +0000 UTC m=+0.238894653 container init c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.878 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174646.8773644, ef21f945-0076-48fa-8d22-c5376e26d278 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.878 2 INFO nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[169165e9-86bb-44ae-af3c-c263e17f31ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 NetworkManager[44960]: <info>  [1760174661.8826] device (tapba6bfb6c-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:24:21 np0005481065 NetworkManager[44960]: <info>  [1760174661.8836] device (tapba6bfb6c-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:24:21 np0005481065 podman[396570]: 2025-10-11 09:24:21.888856038 +0000 UTC m=+0.256412338 container start c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 11 05:24:21 np0005481065 podman[396570]: 2025-10-11 09:24:21.894213529 +0000 UTC m=+0.261769809 container attach c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:24:21 np0005481065 nova_compute[260935]: 2025-10-11 09:24:21.910 2 DEBUG nova.compute.manager [None req-7e25574c-e786-455f-b544-0a211679ca83 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0f405d82-4c22-4ebb-95d7-1ac8a3ccc0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 NetworkManager[44960]: <info>  [1760174661.9339] manager: (tapa7aa5898-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/528)
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.933 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[01ffe2a8-bd34-402d-8cbe-15da5d862e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.980 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e8332885-20e0-4682-aca0-1f49e3e4a6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.984 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9ccba6-5a15-4252-b2a7-04b661b70625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:22 np0005481065 NetworkManager[44960]: <info>  [1760174662.0104] device (tapa7aa5898-d0): carrier: link connected
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.015 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e443cca-d9ee-481f-86e2-2331b6f4f60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f3e609-60c4-452f-b65b-377b7da8ccb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650671, 'reachable_time': 39532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396639, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.044 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42b08470-d735-4c38-8ef6-392d81aeed99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:e28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650671, 'tstamp': 650671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396640, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1ba8aa-033b-4698-928b-c21aafb0a832]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650671, 'reachable_time': 39532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396641, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24516d76-0b3e-4af6-ad64-47a1de115a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.177 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a23edcf8-fdef-41be-8173-fe1162f1ef62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.178 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.178 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.179 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7aa5898-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
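The privsep reply above is a pyroute2-style RTM_NEWLINK message: every link attribute arrives as an ['IFLA_*', value] pair under 'attrs', with unparsed types reported as 'UNKNOWN'. A minimal stdlib sketch (the helper name and the abridged sample are illustrative, not the agent's actual code) for flattening such a record into a dict:

```python
# Sketch: flatten the ['IFLA_*', value] pairs of a pyroute2-style
# RTM_NEWLINK message into a dict. 'UNKNOWN' entries are skipped.
def link_attrs(msg):
    # Later duplicates win, matching plain dict-comprehension semantics.
    return {name: value for name, value in msg.get('attrs', []) if name != 'UNKNOWN'}

# Abridged sample in the same shape as the logged reply.
sample = {
    'index': 2,
    'attrs': [
        ['IFLA_IFNAME', 'tapa7aa5898-d1'],
        ['IFLA_MTU', 1500],
        ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'],
        ['UNKNOWN', {'header': {'length': 8, 'type': 61}}],
    ],
}

attrs = link_attrs(sample)
print(attrs['IFLA_IFNAME'], attrs['IFLA_MTU'])  # tapa7aa5898-d1 1500
```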
Oct 11 05:24:22 np0005481065 NetworkManager[44960]: <info>  [1760174662.2395] manager: (tapa7aa5898-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Oct 11 05:24:22 np0005481065 kernel: tapa7aa5898-d0: entered promiscuous mode
Oct 11 05:24:22 np0005481065 nova_compute[260935]: 2025-10-11 09:24:22.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7aa5898-d0, col_values=(('external_ids', {'iface-id': 'aaa7cd6b-360e-44dd-8a57-64ba5457b964'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:24:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:22Z|01331|binding|INFO|Releasing lport aaa7cd6b-360e-44dd-8a57-64ba5457b964 from this chassis (sb_readonly=0)
Oct 11 05:24:22 np0005481065 nova_compute[260935]: 2025-10-11 09:24:22.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.258 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.258 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8098961-5fb5-4c0b-8644-1a5b5e3ba131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.259 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 05:24:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.260 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'env', 'PROCESS_TAG=haproxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 05:24:22 np0005481065 nova_compute[260935]: 2025-10-11 09:24:22.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
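The agent renders the haproxy_cfg dumped above from a template, writes it to /var/lib/neutron/ovn-metadata-proxy/, and then launches haproxy inside the ovnmeta- namespace via rootwrap. A rough stdlib sketch of the rendering step, assuming nothing about Neutron's real template beyond the fields visible in the log (the template here is a hypothetical reduction):

```python
from string import Template

# Hypothetical reduction of the config shown in the log; only the
# network-specific fields are substituted.
HAPROXY_TMPL = Template("""\
global
    log-tag     haproxy-metadata-proxy-$network_id
    pidfile     $pid_dir/$network_id.pid.haproxy
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

cfg = HAPROXY_TMPL.substitute(
    network_id='a7aa5898-dcd8-41c5-81e2-5c41061a1fb2',
    pid_dir='/var/lib/neutron/external/pids',
    socket_path='/var/lib/neutron/metadata_proxy',
)
print(cfg)
```

The X-OVN-Network-ID header is what lets the metadata service behind the unix socket identify which network the request came from.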
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]: {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:    "0": [
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:        {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "devices": [
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "/dev/loop3"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            ],
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_name": "ceph_lv0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_size": "21470642176",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "name": "ceph_lv0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "tags": {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cluster_name": "ceph",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.crush_device_class": "",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.encrypted": "0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osd_id": "0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.type": "block",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.vdo": "0"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            },
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "type": "block",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "vg_name": "ceph_vg0"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:        }
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:    ],
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:    "1": [
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:        {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "devices": [
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "/dev/loop4"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            ],
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_name": "ceph_lv1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_size": "21470642176",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "name": "ceph_lv1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "tags": {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cluster_name": "ceph",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.crush_device_class": "",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.encrypted": "0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osd_id": "1",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.type": "block",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.vdo": "0"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            },
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "type": "block",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "vg_name": "ceph_vg1"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:        }
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:    ],
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:    "2": [
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:        {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "devices": [
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "/dev/loop5"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            ],
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_name": "ceph_lv2",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_size": "21470642176",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "name": "ceph_lv2",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "tags": {
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.cluster_name": "ceph",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.crush_device_class": "",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.encrypted": "0",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osd_id": "2",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.type": "block",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:                "ceph.vdo": "0"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            },
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "type": "block",
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:            "vg_name": "ceph_vg2"
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:        }
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]:    ]
Oct 11 05:24:22 np0005481065 blissful_galileo[396595]: }
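The JSON report above has the shape of `ceph-volume lvm list --format json`: a map of OSD id to a list of LV records, each carrying its ceph.* tags. A small sketch that extracts an osd_id → lv_path mapping from output in that shape (sample abridged from the report above):

```python
import json

# Abridged sample in the same shape as the logged report.
raw = '''{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "tags": {"ceph.osd_id": "0", "ceph.type": "block"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "tags": {"ceph.osd_id": "1", "ceph.type": "block"}}]
}'''

report = json.loads(raw)
# Map each OSD id to the path of its 'block' LV (an OSD may also carry
# separate db/wal LVs, distinguished by ceph.type).
block_lvs = {
    osd_id: lv['lv_path']
    for osd_id, lvs in report.items()
    for lv in lvs
    if lv['tags'].get('ceph.type') == 'block'
}
print(block_lvs)  # {'0': '/dev/ceph_vg0/ceph_lv0', '1': '/dev/ceph_vg1/ceph_lv1'}
```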
Oct 11 05:24:22 np0005481065 systemd[1]: libpod-c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6.scope: Deactivated successfully.
Oct 11 05:24:22 np0005481065 podman[396570]: 2025-10-11 09:24:22.666676037 +0000 UTC m=+1.034232307 container died c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:24:22 np0005481065 podman[396714]: 2025-10-11 09:24:22.692689391 +0000 UTC m=+0.060238520 container create 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:24:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e-merged.mount: Deactivated successfully.
Oct 11 05:24:22 np0005481065 podman[396570]: 2025-10-11 09:24:22.727807433 +0000 UTC m=+1.095363693 container remove c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:24:22 np0005481065 podman[396714]: 2025-10-11 09:24:22.657531909 +0000 UTC m=+0.025081028 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:24:22 np0005481065 systemd[1]: Started libpod-conmon-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988.scope.
Oct 11 05:24:22 np0005481065 systemd[1]: libpod-conmon-c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6.scope: Deactivated successfully.
Oct 11 05:24:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f43b85947b21cd3c6ad161e442ba8d1aa969883f679ceb40877a58563c40fdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:22 np0005481065 podman[396714]: 2025-10-11 09:24:22.810107705 +0000 UTC m=+0.177656824 container init 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:24:22 np0005481065 podman[396714]: 2025-10-11 09:24:22.815342893 +0000 UTC m=+0.182891982 container start 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:24:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 11 05:24:22 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : New worker (396771) forked
Oct 11 05:24:22 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : Loading success.
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.173 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174663.1730971, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.174 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Started (Lifecycle Event)
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.179 2 DEBUG nova.network.neutron [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updated VIF entry in instance network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.179 2 DEBUG nova.network.neutron [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.216 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.219 2 DEBUG oslo_concurrency.lockutils [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.223 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174663.1732094, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Paused (Lifecycle Event)
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.256 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:24:23 np0005481065 nova_compute[260935]: 2025-10-11 09:24:23.296 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] During sync_power_state the instance has a pending task (spawning). Skip.
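The "current DB power_state: 0, VM power_state: 3" comparison in the sync line above uses Nova's power-state constants (nova.compute.power_state). A reference sketch of that mapping; the values are reproduced from memory of the Nova source and should be checked against your release:

```python
# Assumed values of nova.compute.power_state constants (verify against
# your Nova release before relying on them).
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

STATE_MAP = {
    NOSTATE: 'pending',
    RUNNING: 'running',
    PAUSED: 'paused',
    SHUTDOWN: 'shutdown',
    CRASHED: 'crashed',
    SUSPENDED: 'suspended',
}

# The log line compares DB power_state 0 against hypervisor power_state 3:
print(STATE_MAP[0], '->', STATE_MAP[3])  # pending -> paused
```

Because the instance still has task_state 'spawning', the sync is skipped rather than forcing the DB state to match the hypervisor.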
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.489676753 +0000 UTC m=+0.052198354 container create e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:24:23 np0005481065 systemd[1]: Started libpod-conmon-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope.
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.466846239 +0000 UTC m=+0.029367890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:24:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.58312379 +0000 UTC m=+0.145645441 container init e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.591922839 +0000 UTC m=+0.154444480 container start e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.595592502 +0000 UTC m=+0.158114153 container attach e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:24:23 np0005481065 elastic_greider[396914]: 167 167
Oct 11 05:24:23 np0005481065 systemd[1]: libpod-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope: Deactivated successfully.
Oct 11 05:24:23 np0005481065 conmon[396914]: conmon e171397ed2570fbb10bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope/container/memory.events
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.604269587 +0000 UTC m=+0.166791228 container died e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:24:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-12e8d5acd52f35f6becd0663eebead45fe188fa49a86c9ef34cfcb39a31963e1-merged.mount: Deactivated successfully.
Oct 11 05:24:23 np0005481065 podman[396898]: 2025-10-11 09:24:23.65290407 +0000 UTC m=+0.215425711 container remove e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:24:23 np0005481065 systemd[1]: libpod-conmon-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope: Deactivated successfully.
Oct 11 05:24:23 np0005481065 podman[396936]: 2025-10-11 09:24:23.954033128 +0000 UTC m=+0.082544221 container create 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:24:24 np0005481065 systemd[1]: Started libpod-conmon-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope.
Oct 11 05:24:24 np0005481065 podman[396936]: 2025-10-11 09:24:23.922385985 +0000 UTC m=+0.050897148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:24:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:24:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:24 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:24:24 np0005481065 podman[396936]: 2025-10-11 09:24:24.066407209 +0000 UTC m=+0.194918352 container init 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:24:24 np0005481065 podman[396936]: 2025-10-11 09:24:24.081348781 +0000 UTC m=+0.209859864 container start 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:24:24 np0005481065 podman[396936]: 2025-10-11 09:24:24.085799916 +0000 UTC m=+0.214311019 container attach 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 05:24:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.204 2 DEBUG nova.compute.manager [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.206 2 DEBUG oslo_concurrency.lockutils [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.206 2 DEBUG oslo_concurrency.lockutils [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.207 2 DEBUG oslo_concurrency.lockutils [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.207 2 DEBUG nova.compute.manager [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Processing event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.208 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.214 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174664.2137172, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.214 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.217 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.226 2 INFO nova.virt.libvirt.driver [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance spawned successfully.#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.226 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.249 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.261 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.266 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.267 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.267 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.268 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.269 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.270 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.308 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.351 2 INFO nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 11.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.351 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.436 2 INFO nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 12.92 seconds to build instance.#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:24 np0005481065 nova_compute[260935]: 2025-10-11 09:24:24.455 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:24:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]: {
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "osd_id": 2,
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "type": "bluestore"
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:    },
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "osd_id": 0,
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "type": "bluestore"
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:    },
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "osd_id": 1,
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:        "type": "bluestore"
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]:    }
Oct 11 05:24:25 np0005481065 interesting_jackson[396952]: }
Oct 11 05:24:25 np0005481065 systemd[1]: libpod-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope: Deactivated successfully.
Oct 11 05:24:25 np0005481065 podman[396936]: 2025-10-11 09:24:25.236967903 +0000 UTC m=+1.365478966 container died 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:24:25 np0005481065 systemd[1]: libpod-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope: Consumed 1.115s CPU time.
Oct 11 05:24:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca-merged.mount: Deactivated successfully.
Oct 11 05:24:25 np0005481065 podman[396936]: 2025-10-11 09:24:25.30559767 +0000 UTC m=+1.434108733 container remove 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:24:25 np0005481065 systemd[1]: libpod-conmon-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope: Deactivated successfully.
Oct 11 05:24:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:24:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:24:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 562dd2ef-f7e7-4f6e-a236-d7a36e0be0da does not exist
Oct 11 05:24:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d4cfe1bd-34d3-441f-ba8c-f9cb189e7086 does not exist
Oct 11 05:24:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.370 2 DEBUG nova.compute.manager [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.372 2 DEBUG oslo_concurrency.lockutils [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.372 2 DEBUG oslo_concurrency.lockutils [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.373 2 DEBUG oslo_concurrency.lockutils [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.373 2 DEBUG nova.compute.manager [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.374 2 WARNING nova.compute.manager [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state None.#033[00m
Oct 11 05:24:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:24:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597248780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:24:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:24:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597248780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.732 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.733 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.748 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.834 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.835 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.850 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:24:26 np0005481065 nova_compute[260935]: 2025-10-11 09:24:26.851 2 INFO nova.compute.claims [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.135 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427365149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.615 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.621 2 DEBUG nova.compute.provider_tree [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.682 2 DEBUG nova.scheduler.client.report [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.706 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.707 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.758 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.759 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.768 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.768 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.769 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.769 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.769 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.771 2 INFO nova.compute.manager [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Terminating instance#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.773 2 DEBUG nova.compute.manager [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.802 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:24:27 np0005481065 kernel: tapba6bfb6c-8b (unregistering): left promiscuous mode
Oct 11 05:24:27 np0005481065 NetworkManager[44960]: <info>  [1760174667.8310] device (tapba6bfb6c-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.839 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01332|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=0)
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01333|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down in Southbound
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01334|binding|INFO|Removing iface tapba6bfb6c-8b ovn-installed in OVS
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.853 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '10', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:24:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.855 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis#033[00m
Oct 11 05:24:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.856 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:24:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.858 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e77930-ac5d-4611-8676-d7de7b587ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.859 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace which is not needed anymore#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01335|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01336|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01337|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01338|binding|INFO|Releasing lport aaa7cd6b-360e-44dd-8a57-64ba5457b964 from this chassis (sb_readonly=0)
Oct 11 05:24:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:27Z|01339|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:24:27 np0005481065 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 11 05:24:27 np0005481065 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d0000007c.scope: Consumed 4.748s CPU time.
Oct 11 05:24:27 np0005481065 systemd-machined[215705]: Machine qemu-148-instance-0000007c terminated.
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.980 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.981 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:24:27 np0005481065 nova_compute[260935]: 2025-10-11 09:24:27.981 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Creating image(s)#033[00m
Oct 11 05:24:28 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : haproxy version is 2.8.14-c23fe91
Oct 11 05:24:28 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : path to executable is /usr/sbin/haproxy
Oct 11 05:24:28 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [WARNING]  (396766) : Exiting Master process...
Oct 11 05:24:28 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [ALERT]    (396766) : Current worker (396771) exited with code 143 (Terminated)
Oct 11 05:24:28 np0005481065 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [WARNING]  (396766) : All workers exited. Exiting... (0)
Oct 11 05:24:28 np0005481065 systemd[1]: libpod-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988.scope: Deactivated successfully.
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.023 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:28 np0005481065 podman[397094]: 2025-10-11 09:24:28.028280646 +0000 UTC m=+0.065599332 container died 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:24:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988-userdata-shm.mount: Deactivated successfully.
Oct 11 05:24:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1f43b85947b21cd3c6ad161e442ba8d1aa969883f679ceb40877a58563c40fdb-merged.mount: Deactivated successfully.
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.071 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:28 np0005481065 podman[397094]: 2025-10-11 09:24:28.081728284 +0000 UTC m=+0.119046960 container cleanup 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:24:28 np0005481065 systemd[1]: libpod-conmon-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988.scope: Deactivated successfully.
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.103 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.108 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.159 2 DEBUG nova.policy [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.168 2 INFO nova.virt.libvirt.driver [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance destroyed successfully.#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.169 2 DEBUG nova.objects.instance [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.187 2 DEBUG nova.virt.libvirt.vif [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1892767569',display_name='tempest-TestNetworkBasicOps-server-1892767569',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1892767569',id=124,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrnL5y5rmm40zHS3GRiZibnIxynw7Rihcq0RaK7G/33Ym/s8OYTLPyLC9bisJAK/wCL7YwT5B+iTgnNd5S0eO9bWMML8tCfmSgkHWu6rGCUSjC8WYx7kb6zW0E6al3ojg==',key_name='tempest-TestNetworkBasicOps-485184698',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:24:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-i2l25idp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:24:24Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.187 2 DEBUG nova.network.os_vif_util [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.188 2 DEBUG nova.network.os_vif_util [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.189 2 DEBUG os_vif [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba6bfb6c-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.198 2 INFO os_vif [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.218 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.218 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.219 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.219 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:28 np0005481065 podman[397182]: 2025-10-11 09:24:28.226087308 +0000 UTC m=+0.118476514 container remove 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.236 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75ddc3ad-7b98-4420-ad70-452b5389d9e1]: (4, ('Sat Oct 11 09:24:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988)\n154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988\nSat Oct 11 09:24:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988)\n154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b19c7b1-cfb4-4652-967d-7bf01d1f12a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.238 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:24:28 np0005481065 kernel: tapa7aa5898-d0: left promiscuous mode
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.252 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cf317727-8ca5-48b4-9b6b-73ec62b756d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.264 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ece1112a-294b-461f-8b8f-c6a0bb212647_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f64f0d52-6575-4933-bc0a-2b495becbc2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.288 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f29cda37-652a-4657-af6e-7a371c996467]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b47ea44d-77ef-4cc0-9cca-4765ccdebc3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650661, 'reachable_time': 38453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397237, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 systemd[1]: run-netns-ovnmeta\x2da7aa5898\x2ddcd8\x2d41c5\x2d81e2\x2d5c41061a1fb2.mount: Deactivated successfully.
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.317 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 05:24:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.317 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2a459dea-f87f-4f32-8f22-c12f3ba96998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.662 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.663 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.663 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] No waiting events found dispatching network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.665 2 WARNING nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state deleting.
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.721 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ece1112a-294b-461f-8b8f-c6a0bb212647_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.796 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:24:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.897 2 INFO nova.virt.libvirt.driver [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deleting instance files /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_del
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.898 2 INFO nova.virt.libvirt.driver [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deletion of /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_del complete
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.904 2 DEBUG nova.objects.instance [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid ece1112a-294b-461f-8b8f-c6a0bb212647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.988 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.989 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Ensure instance console log exists: /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.989 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.990 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:28 np0005481065 nova_compute[260935]: 2025-10-11 09:24:28.990 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:29 np0005481065 nova_compute[260935]: 2025-10-11 09:24:29.016 2 INFO nova.compute.manager [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 1.24 seconds to destroy the instance on the hypervisor.
Oct 11 05:24:29 np0005481065 nova_compute[260935]: 2025-10-11 09:24:29.017 2 DEBUG oslo.service.loopingcall [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 05:24:29 np0005481065 nova_compute[260935]: 2025-10-11 09:24:29.017 2 DEBUG nova.compute.manager [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 05:24:29 np0005481065 nova_compute[260935]: 2025-10-11 09:24:29.017 2 DEBUG nova.network.neutron [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 05:24:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:30 np0005481065 nova_compute[260935]: 2025-10-11 09:24:30.344 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully created port: c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:24:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.595 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully created port: 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.703 2 DEBUG nova.network.neutron [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.725 2 INFO nova.compute.manager [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 2.71 seconds to deallocate network for instance.
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.776 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.777 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.806 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.838 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.839 2 DEBUG nova.compute.provider_tree [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.862 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 05:24:31 np0005481065 nova_compute[260935]: 2025-10-11 09:24:31.883 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.000 2 DEBUG oslo_concurrency.processutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.470 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully updated port: c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:24:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438114392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.519 2 DEBUG oslo_concurrency.processutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.563 2 DEBUG nova.compute.provider_tree [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.585 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.609 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.646 2 INFO nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG nova.compute.manager [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG nova.compute.manager [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG oslo_concurrency.lockutils [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG oslo_concurrency.lockutils [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.693 2 DEBUG nova.network.neutron [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:24:32 np0005481065 nova_compute[260935]: 2025-10-11 09:24:32.713 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.185 2 DEBUG nova.network.neutron [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.527 2 DEBUG nova.network.neutron [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.546 2 DEBUG oslo_concurrency.lockutils [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.775 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully updated port: 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.841 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.841 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:33 np0005481065 nova_compute[260935]: 2025-10-11 09:24:33.841 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.128986) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674129089, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 255, "total_data_size": 575467, "memory_usage": 587432, "flush_reason": "Manual Compaction"}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674137743, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 559091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51783, "largest_seqno": 52352, "table_properties": {"data_size": 555965, "index_size": 1034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7270, "raw_average_key_size": 18, "raw_value_size": 549672, "raw_average_value_size": 1416, "num_data_blocks": 46, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174643, "oldest_key_time": 1760174643, "file_creation_time": 1760174674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 8902 microseconds, and 5163 cpu microseconds.
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.137895) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 559091 bytes OK
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.137930) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140026) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140053) EVENT_LOG_v1 {"time_micros": 1760174674140045, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140080) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 572254, prev total WAL file size 572254, number of live WAL files 2.
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140706) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323630' seq:0, type:0; will stop at (end)
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(545KB)], [119(9795KB)]
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674140768, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10589412, "oldest_snapshot_seqno": -1}
Oct 11 05:24:34 np0005481065 nova_compute[260935]: 2025-10-11 09:24:34.167 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7182 keys, 10464671 bytes, temperature: kUnknown
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674219338, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10464671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10415702, "index_size": 29913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187526, "raw_average_key_size": 26, "raw_value_size": 10286188, "raw_average_value_size": 1432, "num_data_blocks": 1171, "num_entries": 7182, "num_filter_entries": 7182, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.219702) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10464671 bytes
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.221426) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.6 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(37.7) write-amplify(18.7) OK, records in: 7704, records dropped: 522 output_compression: NoCompression
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.221454) EVENT_LOG_v1 {"time_micros": 1760174674221441, "job": 72, "event": "compaction_finished", "compaction_time_micros": 78696, "compaction_time_cpu_micros": 49567, "output_level": 6, "num_output_files": 1, "total_output_size": 10464671, "num_input_records": 7704, "num_output_records": 7182, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674222050, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674224597, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:24:34 np0005481065 nova_compute[260935]: 2025-10-11 09:24:34.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 05:24:34 np0005481065 nova_compute[260935]: 2025-10-11 09:24:34.865 2 DEBUG nova.compute.manager [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:34 np0005481065 nova_compute[260935]: 2025-10-11 09:24:34.865 2 DEBUG nova.compute.manager [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:24:34 np0005481065 nova_compute[260935]: 2025-10-11 09:24:34.866 2 DEBUG oslo_concurrency.lockutils [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:35 np0005481065 nova_compute[260935]: 2025-10-11 09:24:35.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.430 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.454 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.454 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance network_info: |[{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.455 2 DEBUG oslo_concurrency.lockutils [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.455 2 DEBUG nova.network.neutron [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.460 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start _get_guest_xml network_info=[{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.465 2 WARNING nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.470 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.471 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.480 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.481 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.482 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.482 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.483 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.483 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.484 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.484 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.484 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.485 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.485 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.485 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.486 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.486 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.490 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:24:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011420552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:24:37 np0005481065 nova_compute[260935]: 2025-10-11 09:24:37.988 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.015 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.019 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:24:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188556766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.476 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.478 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.479 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.480 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.482 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.483 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.484 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.486 2 DEBUG nova.objects.instance [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid ece1112a-294b-461f-8b8f-c6a0bb212647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.508 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <uuid>ece1112a-294b-461f-8b8f-c6a0bb212647</uuid>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <name>instance-0000007d</name>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1996538223</nova:name>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:24:37</nova:creationTime>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:port uuid="c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <nova:port uuid="336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe5b:3e95" ipVersion="6"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5b:3e95" ipVersion="6"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <entry name="serial">ece1112a-294b-461f-8b8f-c6a0bb212647</entry>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <entry name="uuid">ece1112a-294b-461f-8b8f-c6a0bb212647</entry>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ece1112a-294b-461f-8b8f-c6a0bb212647_disk">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c1:1a:f4"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <target dev="tapc9d0cfb2-3f"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:5b:3e:95"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <target dev="tap336ce2f8-5f"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/console.log" append="off"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:24:38 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:24:38 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:24:38 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:24:38 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.508 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Preparing to wait for external event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.509 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.510 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.510 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.511 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Preparing to wait for external event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.511 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.512 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.512 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.514 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.515 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.516 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.517 2 DEBUG os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9d0cfb2-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9d0cfb2-3f, col_values=(('external_ids', {'iface-id': 'c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:1a:f4', 'vm-uuid': 'ece1112a-294b-461f-8b8f-c6a0bb212647'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 NetworkManager[44960]: <info>  [1760174678.5301] manager: (tapc9d0cfb2-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.539 2 INFO os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f')#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.541 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.541 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.542 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.543 2 DEBUG os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap336ce2f8-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap336ce2f8-5f, col_values=(('external_ids', {'iface-id': '336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:3e:95', 'vm-uuid': 'ece1112a-294b-461f-8b8f-c6a0bb212647'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 NetworkManager[44960]: <info>  [1760174678.5528] manager: (tap336ce2f8-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.566 2 INFO os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f')#033[00m
Oct 11 05:24:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:38Z|01340|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 05:24:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:38Z|01341|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:24:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:38Z|01342|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 05:24:38 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:38Z|01343|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.665 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.665 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.666 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:c1:1a:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.666 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:5b:3e:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.667 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Using config drive#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.705 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 05:24:38 np0005481065 nova_compute[260935]: 2025-10-11 09:24:38.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.227 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Creating config drive at /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.235 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oa0yu8c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.403 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oa0yu8c" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.449 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.454 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.678 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.679 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deleting local config drive /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config because it was imported into RBD.#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.746 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.748 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:24:39 np0005481065 kernel: tapc9d0cfb2-3f: entered promiscuous mode
Oct 11 05:24:39 np0005481065 NetworkManager[44960]: <info>  [1760174679.7554] manager: (tapc9d0cfb2-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01344|binding|INFO|Claiming lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for this chassis.
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01345|binding|INFO|c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9: Claiming fa:16:3e:c1:1a:f4 10.100.0.11
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.812 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:1a:f4 10.100.0.11'], port_security=['fa:16:3e:c1:1a:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.813 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 bound to our chassis#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.816 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4#033[00m
Oct 11 05:24:39 np0005481065 NetworkManager[44960]: <info>  [1760174679.8197] manager: (tap336ce2f8-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Oct 11 05:24:39 np0005481065 kernel: tap336ce2f8-5f: entered promiscuous mode
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01346|binding|INFO|Setting lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 ovn-installed in OVS
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01347|binding|INFO|Setting lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 up in Southbound
Oct 11 05:24:39 np0005481065 systemd-udevd[397496]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01348|if_status|INFO|Dropped 1 log messages in last 158 seconds (most recently, 158 seconds ago) due to excessive rate
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01349|if_status|INFO|Not updating pb chassis for 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 now as sb is readonly
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01350|binding|INFO|Claiming lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for this chassis.
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01351|binding|INFO|336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0: Claiming fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95
Oct 11 05:24:39 np0005481065 systemd-udevd[397497]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.849 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0c5b77-8127-44c7-aeb6-21babe1b13b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:39 np0005481065 NetworkManager[44960]: <info>  [1760174679.8539] device (tap336ce2f8-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:24:39 np0005481065 NetworkManager[44960]: <info>  [1760174679.8550] device (tap336ce2f8-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.855 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], port_security=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe5b:3e95/64 2001:db8::f816:3eff:fe5b:3e95/64', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01352|binding|INFO|Setting lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 ovn-installed in OVS
Oct 11 05:24:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:39Z|01353|binding|INFO|Setting lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 up in Southbound
Oct 11 05:24:39 np0005481065 NetworkManager[44960]: <info>  [1760174679.8597] device (tapc9d0cfb2-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:24:39 np0005481065 NetworkManager[44960]: <info>  [1760174679.8607] device (tapc9d0cfb2-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:39 np0005481065 systemd-machined[215705]: New machine qemu-149-instance-0000007d.
Oct 11 05:24:39 np0005481065 systemd[1]: Started Virtual Machine qemu-149-instance-0000007d.
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.887 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7baca2f7-866b-40b2-85b6-1fa5ef3d3d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.891 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ceef3728-8c06-44cb-89d1-66702d4eac20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.923 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[38140c31-b093-4671-89fa-e121e2636f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.952 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8401059d-fde1-4e1a-b303-f8d2a93c744f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397521, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.965 2 DEBUG nova.network.neutron [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updated VIF entry in instance network info cache for port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.966 2 DEBUG nova.network.neutron [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.970 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[edb4a456-240c-40af-9184-81952a4ef347]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647704, 'tstamp': 647704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397532, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647708, 'tstamp': 647708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397532, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.975 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd3c547-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.975 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.976 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd3c547-e0, col_values=(('external_ids', {'iface-id': 'cd117be9-e2e2-4d92-9c01-a0f37b7175b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.976 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.977 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.978 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0bc2c62-89ab-4ce1-9157-2273788b9018#033[00m
Oct 11 05:24:39 np0005481065 nova_compute[260935]: 2025-10-11 09:24:39.991 2 DEBUG oslo_concurrency.lockutils [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.996 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95bf92fc-56e8-4b8e-b619-4f7de22b01a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.036 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[218866de-4259-496b-915f-0a43c2f431ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.040 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6399581f-207f-4da5-a696-4d2648c5f5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.078 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9a515d-8504-43a8-bfcb-f5edbb615ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.094 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69300fa4-08ff-40c1-9e0e-5b871d0e3429]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397538, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14e68b55-3924-4b04-b45f-5db7945c59da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0bc2c62-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647812, 'tstamp': 647812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397539, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.111 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0bc2c62-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0bc2c62-80, col_values=(('external_ids', {'iface-id': 'c1fabca0-ae77-4e48-b93b-3023955db235'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:24:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.190 2 DEBUG nova.compute.manager [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.191 2 DEBUG oslo_concurrency.lockutils [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.191 2 DEBUG oslo_concurrency.lockutils [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.191 2 DEBUG oslo_concurrency.lockutils [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.192 2 DEBUG nova.compute.manager [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Processing event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:24:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294208709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.255 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.343 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.575 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2657MB free_disk=59.76435852050781GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.577 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 85cf93a0-2068-4567-a399-b8d52e672913 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ece1112a-294b-461f-8b8f-c6a0bb212647 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.840 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174680.8400786, ece1112a-294b-461f-8b8f-c6a0bb212647 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.840 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Started (Lifecycle Event)
Oct 11 05:24:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.858 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.897 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.902 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174680.8402112, ece1112a-294b-461f-8b8f-c6a0bb212647 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.902 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Paused (Lifecycle Event)
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.928 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.934 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:24:40 np0005481065 nova_compute[260935]: 2025-10-11 09:24:40.958 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:24:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:24:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137599789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:24:41 np0005481065 nova_compute[260935]: 2025-10-11 09:24:41.286 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:24:41 np0005481065 nova_compute[260935]: 2025-10-11 09:24:41.293 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:24:41 np0005481065 nova_compute[260935]: 2025-10-11 09:24:41.321 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:24:41 np0005481065 nova_compute[260935]: 2025-10-11 09:24:41.352 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:24:41 np0005481065 nova_compute[260935]: 2025-10-11 09:24:41.352 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.288 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.289 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.289 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.290 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.291 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No event matching network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 in dict_keys([('network-vif-plugged', '336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.291 2 WARNING nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for instance with vm_state building and task_state spawning.
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.292 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.292 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.293 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.293 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.294 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Processing event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.294 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.295 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.295 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.296 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.296 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.297 2 WARNING nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for instance with vm_state building and task_state spawning.
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.298 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.303 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174682.3019073, ece1112a-294b-461f-8b8f-c6a0bb212647 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.305 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Resumed (Lifecycle Event)
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.308 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.317 2 INFO nova.virt.libvirt.driver [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance spawned successfully.
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.318 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.335 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.342 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.346 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.346 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.347 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.347 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.348 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.348 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.352 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.353 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.380 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.429 2 INFO nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 14.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.429 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.522 2 INFO nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 15.73 seconds to build instance.#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.550 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:42 np0005481065 nova_compute[260935]: 2025-10-11 09:24:42.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:42 np0005481065 podman[397608]: 2025-10-11 09:24:42.811443895 +0000 UTC m=+0.101160916 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:24:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.162 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174668.0107627, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.163 2 INFO nova.compute.manager [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.196 2 DEBUG nova.compute.manager [None req-63549b3d-04f8-44b3-8b26-5cb220d1ebb6 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.267 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.267 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.268 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:24:43 np0005481065 nova_compute[260935]: 2025-10-11 09:24:43.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.248 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.266 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.266 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.267 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.267 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.294 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.295 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.295 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 85cf93a0-2068-4567-a399-b8d52e672913 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid ece1112a-294b-461f-8b8f-c6a0bb212647 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.297 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.297 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.297 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "85cf93a0-2068-4567-a399-b8d52e672913" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.299 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.350 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "85cf93a0-2068-4567-a399-b8d52e672913" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.353 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.353 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.354 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:46 np0005481065 nova_compute[260935]: 2025-10-11 09:24:46.356 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:24:46 np0005481065 podman[397628]: 2025-10-11 09:24:46.800079407 +0000 UTC m=+0.097836893 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:24:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 11 05:24:47 np0005481065 nova_compute[260935]: 2025-10-11 09:24:47.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:48 np0005481065 nova_compute[260935]: 2025-10-11 09:24:48.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 05:24:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:50Z|01354|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 05:24:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:50Z|01355|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:24:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:50Z|01356|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 05:24:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:50Z|01357|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.826 2 DEBUG nova.compute.manager [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.827 2 DEBUG nova.compute.manager [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.827 2 DEBUG oslo_concurrency.lockutils [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.827 2 DEBUG oslo_concurrency.lockutils [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:24:50 np0005481065 nova_compute[260935]: 2025-10-11 09:24:50.828 2 DEBUG nova.network.neutron [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:24:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 05:24:51 np0005481065 podman[397651]: 2025-10-11 09:24:51.812332955 +0000 UTC m=+0.102736810 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 05:24:51 np0005481065 podman[397652]: 2025-10-11 09:24:51.861289787 +0000 UTC m=+0.143242854 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:24:52 np0005481065 nova_compute[260935]: 2025-10-11 09:24:52.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 190 KiB/s wr, 80 op/s
Oct 11 05:24:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:53.288 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:24:53 np0005481065 nova_compute[260935]: 2025-10-11 09:24:53.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:24:53.290 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:24:53 np0005481065 nova_compute[260935]: 2025-10-11 09:24:53.295 2 DEBUG nova.network.neutron [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updated VIF entry in instance network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:24:53 np0005481065 nova_compute[260935]: 2025-10-11 09:24:53.296 2 DEBUG nova.network.neutron [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:24:53 np0005481065 nova_compute[260935]: 2025-10-11 09:24:53.337 2 DEBUG oslo_concurrency.lockutils [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:24:53 np0005481065 nova_compute[260935]: 2025-10-11 09:24:53.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:24:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:54Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:1a:f4 10.100.0.11
Oct 11 05:24:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:24:54Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:1a:f4 10.100.0.11
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 176 KiB/s wr, 69 op/s
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:24:54
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Oct 11 05:24:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:24:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:24:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 176 KiB/s wr, 69 op/s
Oct 11 05:24:57 np0005481065 nova_compute[260935]: 2025-10-11 09:24:57.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:58 np0005481065 nova_compute[260935]: 2025-10-11 09:24:58.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:24:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 05:24:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:00.293 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.604 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.604 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.633 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.738 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.739 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.752 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:25:00 np0005481065 nova_compute[260935]: 2025-10-11 09:25:00.753 2 INFO nova.compute.claims [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:25:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.038 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:25:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932817991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.539 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.549 2 DEBUG nova.compute.provider_tree [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.628 2 DEBUG nova.scheduler.client.report [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.668 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.669 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.748 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.749 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.776 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.797 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.907 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.909 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.910 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Creating image(s)#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.943 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:01 np0005481065 nova_compute[260935]: 2025-10-11 09:25:01.978 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.014 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.020 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.134 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.136 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.137 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.138 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.173 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.179 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.271 2 DEBUG nova.policy [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.487 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.576 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.732 2 DEBUG nova.objects.instance [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid e4670193-9ea3-45bc-9dbd-14d2e62f32f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.752 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.753 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Ensure instance console log exists: /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.753 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.754 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:02 np0005481065 nova_compute[260935]: 2025-10-11 09:25:02.755 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct 11 05:25:03 np0005481065 nova_compute[260935]: 2025-10-11 09:25:03.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Oct 11 05:25:04 np0005481065 nova_compute[260935]: 2025-10-11 09:25:04.944 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Successfully created port: 119ff9b5-e3af-4688-97a4-92b5f6a240dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043770120099954415 of space, bias 1.0, pg target 1.3131036029986325 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:25:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.280 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Successfully updated port: 119ff9b5-e3af-4688-97a4-92b5f6a240dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.669 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.669 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.670 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.833 2 DEBUG nova.compute.manager [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.834 2 DEBUG nova.compute.manager [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing instance network info cache due to event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:25:06 np0005481065 nova_compute[260935]: 2025-10-11 09:25:06.835 2 DEBUG oslo_concurrency.lockutils [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Oct 11 05:25:07 np0005481065 nova_compute[260935]: 2025-10-11 09:25:07.268 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:25:07 np0005481065 nova_compute[260935]: 2025-10-11 09:25:07.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:08 np0005481065 nova_compute[260935]: 2025-10-11 09:25:08.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:08 np0005481065 nova_compute[260935]: 2025-10-11 09:25:08.712 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 3.7 MiB/s wr, 86 op/s
Oct 11 05:25:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.440 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.440 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance network_info: |[{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.440 2 DEBUG oslo_concurrency.lockutils [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.441 2 DEBUG nova.network.neutron [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.444 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start _get_guest_xml network_info=[{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.450 2 WARNING nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.462 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.462 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.469 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.469 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.470 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.470 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.471 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.471 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.471 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.472 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.472 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.474 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.477 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:25:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106144960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:25:09 np0005481065 nova_compute[260935]: 2025-10-11 09:25:09.975 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.020 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.028 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:25:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3703661375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.686 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.690 2 DEBUG nova.virt.libvirt.vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1522106396',display_name='tempest-TestNetworkBasicOps-server-1522106396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1522106396',id=126,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJuvLde49Grl0xrW9XqlZWm2iapzj/ptvpeM0vUXkL9WjkO1Vts1ZqPNK0xa0RFKzwonFoVCMufaWV362ApeqPtUvwTGOGY43qYDgAKGh07RNbr2NQMmIQQ5Njnnyha+Q==',key_name='tempest-TestNetworkBasicOps-2127179421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4ic01z0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:25:01Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=e4670193-9ea3-45bc-9dbd-14d2e62f32f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.691 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.693 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.695 2 DEBUG nova.objects.instance [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4670193-9ea3-45bc-9dbd-14d2e62f32f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.747 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <uuid>e4670193-9ea3-45bc-9dbd-14d2e62f32f1</uuid>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <name>instance-0000007e</name>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1522106396</nova:name>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:25:09</nova:creationTime>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <nova:port uuid="119ff9b5-e3af-4688-97a4-92b5f6a240dc">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <entry name="serial">e4670193-9ea3-45bc-9dbd-14d2e62f32f1</entry>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <entry name="uuid">e4670193-9ea3-45bc-9dbd-14d2e62f32f1</entry>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:54:f4:1e"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <target dev="tap119ff9b5-e3"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/console.log" append="off"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:25:10 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:25:10 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:25:10 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:25:10 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.749 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Preparing to wait for external event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.750 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.751 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.751 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.752 2 DEBUG nova.virt.libvirt.vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1522106396',display_name='tempest-TestNetworkBasicOps-server-1522106396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1522106396',id=126,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJuvLde49Grl0xrW9XqlZWm2iapzj/ptvpeM0vUXkL9WjkO1Vts1ZqPNK0xa0RFKzwonFoVCMufaWV362ApeqPtUvwTGOGY43qYDgAKGh07RNbr2NQMmIQQ5Njnnyha+Q==',key_name='tempest-TestNetworkBasicOps-2127179421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4ic01z0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:25:01Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=e4670193-9ea3-45bc-9dbd-14d2e62f32f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.753 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.754 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.755 2 DEBUG os_vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap119ff9b5-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap119ff9b5-e3, col_values=(('external_ids', {'iface-id': '119ff9b5-e3af-4688-97a4-92b5f6a240dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:f4:1e', 'vm-uuid': 'e4670193-9ea3-45bc-9dbd-14d2e62f32f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:10 np0005481065 NetworkManager[44960]: <info>  [1760174710.7680] manager: (tap119ff9b5-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.778 2 INFO os_vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3')#033[00m
Oct 11 05:25:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.941 2 DEBUG nova.compute.manager [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.942 2 DEBUG nova.compute.manager [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.942 2 DEBUG oslo_concurrency.lockutils [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.943 2 DEBUG oslo_concurrency.lockutils [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:10 np0005481065 nova_compute[260935]: 2025-10-11 09:25:10.943 2 DEBUG nova.network.neutron [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.092 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.092 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.092 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:54:f4:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.093 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Using config drive#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.128 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.173 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.174 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.175 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.175 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.175 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.177 2 INFO nova.compute.manager [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Terminating instance#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.179 2 DEBUG nova.compute.manager [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:25:11 np0005481065 kernel: tapc9d0cfb2-3f (unregistering): left promiscuous mode
Oct 11 05:25:11 np0005481065 NetworkManager[44960]: <info>  [1760174711.2500] device (tapc9d0cfb2-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:25:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:11Z|01358|binding|INFO|Releasing lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 from this chassis (sb_readonly=0)
Oct 11 05:25:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:11Z|01359|binding|INFO|Setting lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 down in Southbound
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:11Z|01360|binding|INFO|Removing iface tapc9d0cfb2-3f ovn-installed in OVS
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.294 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:1a:f4 10.100.0.11'], port_security=['fa:16:3e:c1:1a:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.295 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 unbound from our chassis#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.299 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4#033[00m
Oct 11 05:25:11 np0005481065 kernel: tap336ce2f8-5f (unregistering): left promiscuous mode
Oct 11 05:25:11 np0005481065 NetworkManager[44960]: <info>  [1760174711.3107] device (tap336ce2f8-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.330 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5395388-6971-4554-8e88-11743e77a99a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:11Z|01361|binding|INFO|Releasing lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 from this chassis (sb_readonly=0)
Oct 11 05:25:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:11Z|01362|binding|INFO|Setting lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 down in Southbound
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:11Z|01363|binding|INFO|Removing iface tap336ce2f8-5f ovn-installed in OVS
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.401 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8155c85e-9887-404c-88f3-cb444697a2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.406 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb89acc-4345-49d6-957e-0cc1958a353b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 11 05:25:11 np0005481065 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007d.scope: Consumed 13.986s CPU time.
Oct 11 05:25:11 np0005481065 systemd-machined[215705]: Machine qemu-149-instance-0000007d terminated.
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.439 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], port_security=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe5b:3e95/64 2001:db8::f816:3eff:fe5b:3e95/64', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.451 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8ffaec-7801-40eb-b649-7aab2eef317f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a701fd4-3cab-43f6-b556-cbc6b465301d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397991, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.501 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a4cd6e-54a8-434f-911a-fb61a8ce8008]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647704, 'tstamp': 647704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397992, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647708, 'tstamp': 647708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397992, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.525 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd3c547-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.526 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.526 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd3c547-e0, col_values=(('external_ids', {'iface-id': 'cd117be9-e2e2-4d92-9c01-a0f37b7175b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.527 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.528 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.530 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0bc2c62-89ab-4ce1-9157-2273788b9018#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d451e41-d34d-40cf-8196-2e4118a46815]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.610 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e386137f-4881-4a20-b7a3-22a600a03293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.616 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0b12e3c9-36fd-4167-828b-03ec101d64dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.644 2 INFO nova.virt.libvirt.driver [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance destroyed successfully.#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.645 2 DEBUG nova.objects.instance [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid ece1112a-294b-461f-8b8f-c6a0bb212647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.667 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[204fa331-4a17-4514-b1b2-c6689949b21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.684 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Creating config drive at /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.694 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rg0a14t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.700 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8347092d-bf86-45a8-933e-b7ec693532ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398023, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc86a639-764c-4121-a124-6fe87ff12a0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0bc2c62-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647812, 'tstamp': 647812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398024, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.743 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0bc2c62-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.744 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0bc2c62-80, col_values=(('external_ids', {'iface-id': 'c1fabca0-ae77-4e48-b93b-3023955db235'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.746 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.859 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rg0a14t" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.889 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.894 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.952 2 DEBUG nova.virt.libvirt.vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:24:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:24:42Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.953 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.955 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.955 2 DEBUG os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9d0cfb2-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.976 2 INFO os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f')#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.978 2 DEBUG nova.virt.libvirt.vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:24:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:24:42Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.979 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.981 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.982 2 DEBUG os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.985 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap336ce2f8-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:11 np0005481065 nova_compute[260935]: 2025-10-11 09:25:11.995 2 INFO os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f')#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.097 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.098 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deleting local config drive /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config because it was imported into RBD.#033[00m
Oct 11 05:25:12 np0005481065 kernel: tap119ff9b5-e3: entered promiscuous mode
Oct 11 05:25:12 np0005481065 NetworkManager[44960]: <info>  [1760174712.1930] manager: (tap119ff9b5-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/535)
Oct 11 05:25:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:12Z|01364|binding|INFO|Claiming lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc for this chassis.
Oct 11 05:25:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:12Z|01365|binding|INFO|119ff9b5-e3af-4688-97a4-92b5f6a240dc: Claiming fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 systemd-udevd[397977]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:25:12 np0005481065 NetworkManager[44960]: <info>  [1760174712.2254] device (tap119ff9b5-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:25:12 np0005481065 NetworkManager[44960]: <info>  [1760174712.2271] device (tap119ff9b5-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:25:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:12Z|01366|binding|INFO|Setting lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc ovn-installed in OVS
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 systemd-machined[215705]: New machine qemu-150-instance-0000007e.
Oct 11 05:25:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:12Z|01367|binding|INFO|Setting lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc up in Southbound
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.251 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f4:1e 10.100.0.9'], port_security=['fa:16:3e:54:f4:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4670193-9ea3-45bc-9dbd-14d2e62f32f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1e4fc68-c314-4fe3-a3ef-a4986a78eba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b157abac-1a65-40f2-b3c7-9c7765cf8246, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=119ff9b5-e3af-4688-97a4-92b5f6a240dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.252 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 119ff9b5-e3af-4688-97a4-92b5f6a240dc in datapath 342daca7-3c5d-4bb3-bcdc-6abce7b4d414 bound to our chassis#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.255 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 342daca7-3c5d-4bb3-bcdc-6abce7b4d414#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.257 2 DEBUG nova.network.neutron [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updated VIF entry in instance network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.258 2 DEBUG nova.network.neutron [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:12 np0005481065 systemd[1]: Started Virtual Machine qemu-150-instance-0000007e.
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba98029-9520-4486-bd1e-add87c2f4e5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.269 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap342daca7-31 in ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.271 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap342daca7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.271 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[663c302c-31ae-4b3a-9072-8e988fa5698e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.272 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3719c1-e895-4d17-8fb1-707ca8fe42ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.289 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[41a26af8-e7b4-4778-964b-9cd7dfb10ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d75560-0021-4b2d-9cf5-ddfcf06c5675]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.323 2 DEBUG oslo_concurrency.lockutils [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.352 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f7762623-21f0-488d-929a-4071496b752c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 NetworkManager[44960]: <info>  [1760174712.3608] manager: (tap342daca7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/536)
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[15f013ec-6700-413e-b6ae-6cf678965b9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.412 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a4b9f1-4958-4777-a707-aca3e5c12bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.417 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[87fe7c6f-eba3-478f-8f46-0d081dc02c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 NetworkManager[44960]: <info>  [1760174712.4566] device (tap342daca7-30): carrier: link connected
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.465 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab1e2b3-d341-4b47-8bff-05b3abe83eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.502 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ad7dbd-44c2-4dee-94e9-b3a46fe93d21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap342daca7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:bf:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655715, 'reachable_time': 36185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398133, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.526 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[412ae439-6e21-46e7-8acb-5af170d97efb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:bf8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655715, 'tstamp': 655715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398134, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.541 2 INFO nova.virt.libvirt.driver [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deleting instance files /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647_del#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.542 2 INFO nova.virt.libvirt.driver [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deletion of /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647_del complete#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.552 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a51ea093-30eb-4bff-a439-68ed392b5e2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap342daca7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:bf:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655715, 'reachable_time': 36185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398135, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.594 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7f5c7f-cf60-45ec-8836-40476ae9055a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.667 2 INFO nova.compute.manager [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 1.49 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.667 2 DEBUG oslo.service.loopingcall [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.668 2 DEBUG nova.compute.manager [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.671 2 DEBUG nova.network.neutron [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.680 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cfef72fa-3f5b-4fd9-b396-02280c480ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.683 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap342daca7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.684 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.684 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap342daca7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 NetworkManager[44960]: <info>  [1760174712.6875] manager: (tap342daca7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Oct 11 05:25:12 np0005481065 kernel: tap342daca7-30: entered promiscuous mode
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.693 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap342daca7-30, col_values=(('external_ids', {'iface-id': 'ac8c2194-1d46-4375-b091-3b31589411f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:12Z|01368|binding|INFO|Releasing lport ac8c2194-1d46-4375-b091-3b31589411f5 from this chassis (sb_readonly=0)
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.726 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b216997d-8479-4d30-9e71-2a549ae274a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.728 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-342daca7-3c5d-4bb3-bcdc-6abce7b4d414
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.pid.haproxy
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 342daca7-3c5d-4bb3-bcdc-6abce7b4d414
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:25:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.729 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'env', 'PROCESS_TAG=haproxy-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.798 2 DEBUG nova.compute.manager [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.799 2 DEBUG oslo_concurrency.lockutils [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.800 2 DEBUG oslo_concurrency.lockutils [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.800 2 DEBUG oslo_concurrency.lockutils [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:12 np0005481065 nova_compute[260935]: 2025-10-11 09:25:12.801 2 DEBUG nova.compute.manager [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Processing event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:25:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 11 05:25:13 np0005481065 podman[398210]: 2025-10-11 09:25:13.213727325 +0000 UTC m=+0.067399203 container create 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:25:13 np0005481065 nova_compute[260935]: 2025-10-11 09:25:13.263 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174713.26251, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:25:13 np0005481065 nova_compute[260935]: 2025-10-11 09:25:13.263 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Started (Lifecycle Event)#033[00m
Oct 11 05:25:13 np0005481065 nova_compute[260935]: 2025-10-11 09:25:13.267 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:25:13 np0005481065 nova_compute[260935]: 2025-10-11 09:25:13.276 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:25:13 np0005481065 podman[398210]: 2025-10-11 09:25:13.183864082 +0000 UTC m=+0.037536030 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:25:13 np0005481065 nova_compute[260935]: 2025-10-11 09:25:13.281 2 INFO nova.virt.libvirt.driver [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance spawned successfully.#033[00m
Oct 11 05:25:13 np0005481065 nova_compute[260935]: 2025-10-11 09:25:13.281 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:25:13 np0005481065 systemd[1]: Started libpod-conmon-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3.scope.
Oct 11 05:25:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72cac3b459deeeef18dd1fe465a7ead282b0060817d48bf95435e7d2a56eb042/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:13 np0005481065 podman[398210]: 2025-10-11 09:25:13.358335616 +0000 UTC m=+0.212007574 container init 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:25:13 np0005481065 podman[398223]: 2025-10-11 09:25:13.358437288 +0000 UTC m=+0.091245736 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:25:13 np0005481065 podman[398210]: 2025-10-11 09:25:13.369674906 +0000 UTC m=+0.223346804 container start 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 05:25:13 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : New worker (398249) forked
Oct 11 05:25:13 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : Loading success.
Oct 11 05:25:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 628 KiB/s wr, 38 op/s
Oct 11 05:25:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:15.223 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.377 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.377 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.378 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.379 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.379 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-unplugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.380 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.381 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.381 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.382 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.382 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.383 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.383 2 WARNING nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.384 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.384 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.385 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.385 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.386 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-unplugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.386 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.387 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.387 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.388 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.388 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.389 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.390 2 WARNING nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:25:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 628 KiB/s wr, 38 op/s
Oct 11 05:25:16 np0005481065 nova_compute[260935]: 2025-10-11 09:25:16.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.016 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.022 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.023 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.023 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.024 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.024 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.025 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.030 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.745 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.745 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174713.2627838, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:25:17 np0005481065 nova_compute[260935]: 2025-10-11 09:25:17.746 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:25:17 np0005481065 podman[398258]: 2025-10-11 09:25:17.838989693 +0000 UTC m=+0.094545710 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 05:25:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 633 KiB/s wr, 116 op/s
Oct 11 05:25:18 np0005481065 nova_compute[260935]: 2025-10-11 09:25:18.875 2 INFO nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 16.97 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:25:18 np0005481065 nova_compute[260935]: 2025-10-11 09:25:18.875 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:18 np0005481065 nova_compute[260935]: 2025-10-11 09:25:18.933 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:18 np0005481065 nova_compute[260935]: 2025-10-11 09:25:18.936 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174713.2746034, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:25:18 np0005481065 nova_compute[260935]: 2025-10-11 09:25:18.937 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:25:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.581 2 DEBUG nova.compute.manager [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.582 2 DEBUG oslo_concurrency.lockutils [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.582 2 DEBUG oslo_concurrency.lockutils [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.582 2 DEBUG oslo_concurrency.lockutils [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.583 2 DEBUG nova.compute.manager [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] No waiting events found dispatching network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.583 2 WARNING nova.compute.manager [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received unexpected event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.595 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.600 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.674 2 DEBUG nova.network.neutron [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updated VIF entry in instance network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.675 2 DEBUG nova.network.neutron [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:19 np0005481065 nova_compute[260935]: 2025-10-11 09:25:19.757 2 INFO nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 19.06 seconds to build instance.#033[00m
Oct 11 05:25:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Oct 11 05:25:21 np0005481065 nova_compute[260935]: 2025-10-11 09:25:21.196 2 DEBUG oslo_concurrency.lockutils [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:25:21 np0005481065 nova_compute[260935]: 2025-10-11 09:25:21.308 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:21 np0005481065 nova_compute[260935]: 2025-10-11 09:25:21.735 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:21 np0005481065 nova_compute[260935]: 2025-10-11 09:25:21.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:22 np0005481065 nova_compute[260935]: 2025-10-11 09:25:22.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:22 np0005481065 podman[398281]: 2025-10-11 09:25:22.822886001 +0000 UTC m=+0.119537144 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:25:22 np0005481065 podman[398282]: 2025-10-11 09:25:22.866102421 +0000 UTC m=+0.157742613 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:25:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Oct 11 05:25:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:25:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 77 op/s
Oct 11 05:25:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:24Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 05:25:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:24Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 05:25:25 np0005481065 nova_compute[260935]: 2025-10-11 09:25:25.211 2 DEBUG nova.compute.manager [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-deleted-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:25 np0005481065 nova_compute[260935]: 2025-10-11 09:25:25.212 2 INFO nova.compute.manager [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Neutron deleted interface 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:25:25 np0005481065 nova_compute[260935]: 2025-10-11 09:25:25.212 2 DEBUG nova.network.neutron [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:26 np0005481065 nova_compute[260935]: 2025-10-11 09:25:26.641 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174711.639506, ece1112a-294b-461f-8b8f-c6a0bb212647 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:25:26 np0005481065 nova_compute[260935]: 2025-10-11 09:25:26.642 2 INFO nova.compute.manager [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1053940313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1053940313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:25:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9280dd1a-20cb-4f83-8eb7-12a6dfcdf04d does not exist
Oct 11 05:25:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 07acd9e6-9892-4bb9-95e8-e4e844d2fda6 does not exist
Oct 11 05:25:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6a04d1a7-badb-43d2-9afc-66e4194a4d82 does not exist
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 77 op/s
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:25:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:25:26 np0005481065 nova_compute[260935]: 2025-10-11 09:25:26.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:27 np0005481065 nova_compute[260935]: 2025-10-11 09:25:27.318 2 DEBUG nova.compute.manager [None req-de49268c-f16f-47ab-ad69-d5aeee2e74a0 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:27 np0005481065 nova_compute[260935]: 2025-10-11 09:25:27.353 2 DEBUG nova.compute.manager [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Detach interface failed, port_id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0, reason: Instance ece1112a-294b-461f-8b8f-c6a0bb212647 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.666464889 +0000 UTC m=+0.076973633 container create d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.632506111 +0000 UTC m=+0.043014905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:25:27 np0005481065 nova_compute[260935]: 2025-10-11 09:25:27.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:27 np0005481065 systemd[1]: Started libpod-conmon-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope.
Oct 11 05:25:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.794795671 +0000 UTC m=+0.205304475 container init d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.806318276 +0000 UTC m=+0.216827020 container start d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:25:27 np0005481065 frosty_williams[398616]: 167 167
Oct 11 05:25:27 np0005481065 systemd[1]: libpod-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope: Deactivated successfully.
Oct 11 05:25:27 np0005481065 conmon[398616]: conmon d2f65b93e70c31c2a1d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope/container/memory.events
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.816776231 +0000 UTC m=+0.227284975 container attach d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.818034677 +0000 UTC m=+0.228543481 container died d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:25:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d8df2d914a614085e1a2d2f84d81c34d29fc98a5eb02a2d2e0b3a51c5945d3c6-merged.mount: Deactivated successfully.
Oct 11 05:25:27 np0005481065 podman[398599]: 2025-10-11 09:25:27.875975502 +0000 UTC m=+0.286484236 container remove d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:25:27 np0005481065 systemd[1]: libpod-conmon-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope: Deactivated successfully.
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.004 2 DEBUG nova.network.neutron [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.093 2 INFO nova.compute.manager [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 15.42 seconds to deallocate network for instance.#033[00m
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.102 2 DEBUG nova.compute.manager [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-deleted-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.103 2 INFO nova.compute.manager [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Neutron deleted interface c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.103 2 DEBUG nova.network.neutron [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:28 np0005481065 podman[398641]: 2025-10-11 09:25:28.122296303 +0000 UTC m=+0.054547510 container create a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:25:28 np0005481065 systemd[1]: Started libpod-conmon-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope.
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.184 2 DEBUG nova.compute.manager [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Detach interface failed, port_id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9, reason: Instance ece1112a-294b-461f-8b8f-c6a0bb212647 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:25:28 np0005481065 podman[398641]: 2025-10-11 09:25:28.102901196 +0000 UTC m=+0.035152423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:25:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.222 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:28 np0005481065 podman[398641]: 2025-10-11 09:25:28.223331374 +0000 UTC m=+0.155582621 container init a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.223 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:28 np0005481065 podman[398641]: 2025-10-11 09:25:28.238300467 +0000 UTC m=+0.170551674 container start a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:25:28 np0005481065 podman[398641]: 2025-10-11 09:25:28.243684759 +0000 UTC m=+0.175936016 container attach a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.380 2 DEBUG oslo_concurrency.processutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:25:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931950980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.857 2 DEBUG oslo_concurrency.processutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.869 2 DEBUG nova.compute.provider_tree [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:25:28 np0005481065 nova_compute[260935]: 2025-10-11 09:25:28.950 2 DEBUG nova.scheduler.client.report [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:25:29 np0005481065 nova_compute[260935]: 2025-10-11 09:25:29.017 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:29 np0005481065 nova_compute[260935]: 2025-10-11 09:25:29.060 2 INFO nova.scheduler.client.report [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance ece1112a-294b-461f-8b8f-c6a0bb212647#033[00m
Oct 11 05:25:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:29 np0005481065 nova_compute[260935]: 2025-10-11 09:25:29.213 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:29 np0005481065 trusting_elbakyan[398657]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:25:29 np0005481065 trusting_elbakyan[398657]: --> relative data size: 1.0
Oct 11 05:25:29 np0005481065 trusting_elbakyan[398657]: --> All data devices are unavailable
Oct 11 05:25:29 np0005481065 systemd[1]: libpod-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope: Deactivated successfully.
Oct 11 05:25:29 np0005481065 podman[398641]: 2025-10-11 09:25:29.312199353 +0000 UTC m=+1.244450560 container died a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:25:29 np0005481065 systemd[1]: libpod-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope: Consumed 1.015s CPU time.
Oct 11 05:25:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2-merged.mount: Deactivated successfully.
Oct 11 05:25:29 np0005481065 podman[398641]: 2025-10-11 09:25:29.402588134 +0000 UTC m=+1.334839351 container remove a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:25:29 np0005481065 systemd[1]: libpod-conmon-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope: Deactivated successfully.
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.278590226 +0000 UTC m=+0.067584749 container create 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.24934181 +0000 UTC m=+0.038336383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:25:30 np0005481065 systemd[1]: Started libpod-conmon-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope.
Oct 11 05:25:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.409392817 +0000 UTC m=+0.198387330 container init 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.416184719 +0000 UTC m=+0.205179192 container start 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.419391819 +0000 UTC m=+0.208386342 container attach 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:25:30 np0005481065 condescending_elgamal[398880]: 167 167
Oct 11 05:25:30 np0005481065 systemd[1]: libpod-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope: Deactivated successfully.
Oct 11 05:25:30 np0005481065 conmon[398880]: conmon 642f6ebc500a6e4a78c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope/container/memory.events
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.422320042 +0000 UTC m=+0.211314525 container died 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:25:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fe4c176979dbd9cda4390a1a204ab4f884754df3e73659725d607fc53ab627ce-merged.mount: Deactivated successfully.
Oct 11 05:25:30 np0005481065 podman[398864]: 2025-10-11 09:25:30.463316369 +0000 UTC m=+0.252310882 container remove 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:25:30 np0005481065 systemd[1]: libpod-conmon-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope: Deactivated successfully.
Oct 11 05:25:30 np0005481065 podman[398902]: 2025-10-11 09:25:30.700175272 +0000 UTC m=+0.045939557 container create 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:25:30 np0005481065 systemd[1]: Started libpod-conmon-17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d.scope.
Oct 11 05:25:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:30 np0005481065 podman[398902]: 2025-10-11 09:25:30.684210802 +0000 UTC m=+0.029975087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:25:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:30 np0005481065 podman[398902]: 2025-10-11 09:25:30.794429552 +0000 UTC m=+0.140193847 container init 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:25:30 np0005481065 podman[398902]: 2025-10-11 09:25:30.80676606 +0000 UTC m=+0.152530345 container start 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:25:30 np0005481065 podman[398902]: 2025-10-11 09:25:30.811216186 +0000 UTC m=+0.156980491 container attach 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:25:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.129 2 DEBUG nova.compute.manager [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.130 2 DEBUG nova.compute.manager [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.131 2 DEBUG oslo_concurrency.lockutils [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.131 2 DEBUG oslo_concurrency.lockutils [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.132 2 DEBUG nova.network.neutron [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.345 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.346 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.346 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.346 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.347 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.348 2 INFO nova.compute.manager [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Terminating instance#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.349 2 DEBUG nova.compute.manager [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:25:31 np0005481065 kernel: tap85875c6f-33 (unregistering): left promiscuous mode
Oct 11 05:25:31 np0005481065 NetworkManager[44960]: <info>  [1760174731.4202] device (tap85875c6f-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:25:31 np0005481065 kernel: tapf45a21da-34 (unregistering): left promiscuous mode
Oct 11 05:25:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:31Z|01369|binding|INFO|Releasing lport 85875c6f-3380-42bc-9e4b-0b66df391e0d from this chassis (sb_readonly=0)
Oct 11 05:25:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:31Z|01370|binding|INFO|Setting lport 85875c6f-3380-42bc-9e4b-0b66df391e0d down in Southbound
Oct 11 05:25:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:31Z|01371|binding|INFO|Removing iface tap85875c6f-33 ovn-installed in OVS
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 NetworkManager[44960]: <info>  [1760174731.4862] device (tapf45a21da-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.490 2 DEBUG nova.compute.manager [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.491 2 DEBUG nova.compute.manager [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing instance network info cache due to event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.491 2 DEBUG oslo_concurrency.lockutils [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.492 2 DEBUG oslo_concurrency.lockutils [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.492 2 DEBUG nova.network.neutron [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:f1:c1 10.100.0.14'], port_security=['fa:16:3e:81:f1:c1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=85875c6f-3380-42bc-9e4b-0b66df391e0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.505 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 85875c6f-3380-42bc-9e4b-0b66df391e0d in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 unbound from our chassis#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.508 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.515 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f176940-8fd4-4e4f-b8a4-b1572640c0ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.518 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 namespace which is not needed anymore#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:31Z|01372|binding|INFO|Releasing lport f45a21da-34ce-448b-92cc-2639a191c755 from this chassis (sb_readonly=0)
Oct 11 05:25:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:31Z|01373|binding|INFO|Setting lport f45a21da-34ce-448b-92cc-2639a191c755 down in Southbound
Oct 11 05:25:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:31Z|01374|binding|INFO|Removing iface tapf45a21da-34 ovn-installed in OVS
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 funny_poincare[398918]: {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:    "0": [
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:        {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "devices": [
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "/dev/loop3"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            ],
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_name": "ceph_lv0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_size": "21470642176",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "name": "ceph_lv0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "tags": {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cluster_name": "ceph",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.crush_device_class": "",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.encrypted": "0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osd_id": "0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.type": "block",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.vdo": "0"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            },
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "type": "block",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "vg_name": "ceph_vg0"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:        }
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:    ],
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:    "1": [
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:        {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "devices": [
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "/dev/loop4"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            ],
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_name": "ceph_lv1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_size": "21470642176",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "name": "ceph_lv1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "tags": {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cluster_name": "ceph",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.crush_device_class": "",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.encrypted": "0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osd_id": "1",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.type": "block",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.vdo": "0"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            },
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "type": "block",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "vg_name": "ceph_vg1"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:        }
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:    ],
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:    "2": [
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:        {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "devices": [
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "/dev/loop5"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            ],
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_name": "ceph_lv2",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_size": "21470642176",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "name": "ceph_lv2",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "tags": {
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.cluster_name": "ceph",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.crush_device_class": "",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.encrypted": "0",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osd_id": "2",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.type": "block",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:                "ceph.vdo": "0"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            },
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "type": "block",
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:            "vg_name": "ceph_vg2"
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:        }
Oct 11 05:25:31 np0005481065 funny_poincare[398918]:    ]
Oct 11 05:25:31 np0005481065 funny_poincare[398918]: }
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 11 05:25:31 np0005481065 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d0000007a.scope: Consumed 17.934s CPU time.
Oct 11 05:25:31 np0005481065 systemd-machined[215705]: Machine qemu-147-instance-0000007a terminated.
Oct 11 05:25:31 np0005481065 systemd[1]: libpod-17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d.scope: Deactivated successfully.
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.578 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], port_security=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef6:a91e/64 2001:db8::f816:3eff:fef6:a91e/64', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f45a21da-34ce-448b-92cc-2639a191c755) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:25:31 np0005481065 podman[398946]: 2025-10-11 09:25:31.620974848 +0000 UTC m=+0.033285800 container died 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:25:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19-merged.mount: Deactivated successfully.
Oct 11 05:25:31 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : haproxy version is 2.8.14-c23fe91
Oct 11 05:25:31 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : path to executable is /usr/sbin/haproxy
Oct 11 05:25:31 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [WARNING]  (395121) : Exiting Master process...
Oct 11 05:25:31 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [WARNING]  (395121) : Exiting Master process...
Oct 11 05:25:31 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [ALERT]    (395121) : Current worker (395128) exited with code 143 (Terminated)
Oct 11 05:25:31 np0005481065 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [WARNING]  (395121) : All workers exited. Exiting... (0)
Oct 11 05:25:31 np0005481065 systemd[1]: libpod-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c.scope: Deactivated successfully.
Oct 11 05:25:31 np0005481065 podman[398965]: 2025-10-11 09:25:31.680488678 +0000 UTC m=+0.050799155 container died ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:25:31 np0005481065 podman[398946]: 2025-10-11 09:25:31.706480821 +0000 UTC m=+0.118791713 container remove 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:25:31 np0005481065 systemd[1]: libpod-conmon-17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d.scope: Deactivated successfully.
Oct 11 05:25:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c-userdata-shm.mount: Deactivated successfully.
Oct 11 05:25:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-66d9ee5906b3e37b7221b27948dcaeafcd376d35672ef3e4c15b762b77c22192-merged.mount: Deactivated successfully.
Oct 11 05:25:31 np0005481065 podman[398965]: 2025-10-11 09:25:31.744858074 +0000 UTC m=+0.115168531 container cleanup ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:25:31 np0005481065 systemd[1]: libpod-conmon-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c.scope: Deactivated successfully.
Oct 11 05:25:31 np0005481065 NetworkManager[44960]: <info>  [1760174731.7862] manager: (tapf45a21da-34): new Tun device (/org/freedesktop/NetworkManager/Devices/538)
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.803 2 INFO nova.virt.libvirt.driver [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance destroyed successfully.#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.804 2 DEBUG nova.objects.instance [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 85cf93a0-2068-4567-a399-b8d52e672913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.853 2 DEBUG nova.virt.libvirt.vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:54Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.856 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.858 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:25:31 np0005481065 podman[398992]: 2025-10-11 09:25:31.859659234 +0000 UTC m=+0.076336455 container remove ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.859 2 DEBUG os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85875c6f-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.872 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[91ce86f3-b1bb-45fe-a9f3-44b05bba6359]: (4, ('Sat Oct 11 09:25:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 (ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c)\nab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c\nSat Oct 11 09:25:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 (ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c)\nab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.877 2 INFO os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33')#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.878 2 DEBUG nova.virt.libvirt.vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:54Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.878 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f91037-12ca-4259-9263-372f72103a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.878 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.879 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:31 np0005481065 kernel: tapcdd3c547-e0: left promiscuous mode
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e896effe-6e89-4bac-b301-7f466ad10006]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[49193f5b-89a7-48c8-9e16-0a20a70ab181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.951 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfa1f16-be8c-4b47-bd22-9b1678dd947a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.973 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d98073c-1111-4f2c-a1ad-f0beb6f32156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647682, 'reachable_time': 27246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399085, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 systemd[1]: run-netns-ovnmeta\x2dcdd3c547\x2de0c3\x2d4649\x2d8427\x2d08ce8e1c52d4.mount: Deactivated successfully.
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.980 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.980 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[75a56882-6cc3-43b4-b3c2-e41d6f42a517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.981 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f45a21da-34ce-448b-92cc-2639a191c755 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.983 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0bc2c62-89ab-4ce1-9157-2273788b9018, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[280bd44b-4263-4781-afcc-0e4d3e63f6b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.984 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 namespace which is not needed anymore#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:31.879 2 DEBUG os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf45a21da-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.024 2 INFO os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34')#033[00m
Oct 11 05:25:32 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : haproxy version is 2.8.14-c23fe91
Oct 11 05:25:32 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : path to executable is /usr/sbin/haproxy
Oct 11 05:25:32 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [WARNING]  (395247) : Exiting Master process...
Oct 11 05:25:32 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [WARNING]  (395247) : Exiting Master process...
Oct 11 05:25:32 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [ALERT]    (395247) : Current worker (395249) exited with code 143 (Terminated)
Oct 11 05:25:32 np0005481065 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [WARNING]  (395247) : All workers exited. Exiting... (0)
Oct 11 05:25:32 np0005481065 systemd[1]: libpod-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4.scope: Deactivated successfully.
Oct 11 05:25:32 np0005481065 podman[399167]: 2025-10-11 09:25:32.141952161 +0000 UTC m=+0.053692657 container died 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:25:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4-userdata-shm.mount: Deactivated successfully.
Oct 11 05:25:32 np0005481065 podman[399167]: 2025-10-11 09:25:32.209188278 +0000 UTC m=+0.120928734 container cleanup 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:25:32 np0005481065 systemd[1]: libpod-conmon-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4.scope: Deactivated successfully.
Oct 11 05:25:32 np0005481065 podman[399201]: 2025-10-11 09:25:32.304090156 +0000 UTC m=+0.063282667 container remove 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb9754f-df0c-4718-9af7-bfef9a9aaf9e]: (4, ('Sat Oct 11 09:25:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 (049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4)\n049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4\nSat Oct 11 09:25:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 (049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4)\n049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.312 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43b5667c-3aa4-4ebf-a9d6-cb8fc24595af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.314 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 kernel: tapf0bc2c62-80: left promiscuous mode
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc67f4ad-4661-41f1-9a62-5f6148fcbf99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.355 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a4f9a2-925f-49ab-b80c-5def52696e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa1d1f5-a841-4431-9989-033f3ffbf57d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.373 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[008c6154-b11a-4494-b89b-4da4f4fbb4cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647786, 'reachable_time': 30996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399246, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.375 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:25:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.375 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0810bc7a-8074-4595-a9ca-4b0255881d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.404 2 INFO nova.virt.libvirt.driver [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deleting instance files /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913_del#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.405 2 INFO nova.virt.libvirt.driver [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deletion of /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913_del complete#033[00m
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.490048424 +0000 UTC m=+0.056418483 container create 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:25:32 np0005481065 systemd[1]: Started libpod-conmon-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope.
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.463471394 +0000 UTC m=+0.029841683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.571 2 INFO nova.compute.manager [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:25:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.573 2 DEBUG oslo.service.loopingcall [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.574 2 DEBUG nova.compute.manager [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.574 2 DEBUG nova.network.neutron [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.592075023 +0000 UTC m=+0.158445152 container init 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.605364668 +0000 UTC m=+0.171734737 container start 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.609457064 +0000 UTC m=+0.175827193 container attach 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:25:32 np0005481065 kind_sutherland[399272]: 167 167
Oct 11 05:25:32 np0005481065 systemd[1]: libpod-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope: Deactivated successfully.
Oct 11 05:25:32 np0005481065 conmon[399272]: conmon 88782e1fcaf5f920e655 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope/container/memory.events
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.615121164 +0000 UTC m=+0.181491223 container died 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:25:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e1a18029a52b2d99369687829c9fae311726c99a73db60edcfbb05990f0b5022-merged.mount: Deactivated successfully.
Oct 11 05:25:32 np0005481065 systemd[1]: run-netns-ovnmeta\x2df0bc2c62\x2d89ab\x2d4ce1\x2d9157\x2d2273788b9018.mount: Deactivated successfully.
Oct 11 05:25:32 np0005481065 podman[399255]: 2025-10-11 09:25:32.662758768 +0000 UTC m=+0.229128837 container remove 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:25:32 np0005481065 systemd[1]: libpod-conmon-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope: Deactivated successfully.
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.907 2 INFO nova.compute.manager [None req-532fdc66-7134-4826-b2f2-f9db9ee56e78 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Get console output#033[00m
Oct 11 05:25:32 np0005481065 nova_compute[260935]: 2025-10-11 09:25:32.918 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:25:32 np0005481065 podman[399295]: 2025-10-11 09:25:32.958225897 +0000 UTC m=+0.074085502 container create 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:25:33 np0005481065 systemd[1]: Started libpod-conmon-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope.
Oct 11 05:25:33 np0005481065 podman[399295]: 2025-10-11 09:25:32.930179765 +0000 UTC m=+0.046039450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:25:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:25:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:25:33 np0005481065 podman[399295]: 2025-10-11 09:25:33.084711836 +0000 UTC m=+0.200571471 container init 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:25:33 np0005481065 podman[399295]: 2025-10-11 09:25:33.099469623 +0000 UTC m=+0.215329268 container start 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 05:25:33 np0005481065 podman[399295]: 2025-10-11 09:25:33.103933189 +0000 UTC m=+0.219792824 container attach 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.594 2 DEBUG nova.compute.manager [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-unplugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.595 2 DEBUG oslo_concurrency.lockutils [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.595 2 DEBUG oslo_concurrency.lockutils [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.595 2 DEBUG oslo_concurrency.lockutils [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.596 2 DEBUG nova.compute.manager [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No waiting events found dispatching network-vif-unplugged-85875c6f-3380-42bc-9e4b-0b66df391e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.596 2 DEBUG nova.compute.manager [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-unplugged-85875c6f-3380-42bc-9e4b-0b66df391e0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:25:33 np0005481065 nova_compute[260935]: 2025-10-11 09:25:33.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]: {
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "osd_id": 2,
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "type": "bluestore"
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:    },
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "osd_id": 0,
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "type": "bluestore"
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:    },
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "osd_id": 1,
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:        "type": "bluestore"
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]:    }
Oct 11 05:25:34 np0005481065 infallible_darwin[399311]: }
Oct 11 05:25:34 np0005481065 systemd[1]: libpod-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope: Deactivated successfully.
Oct 11 05:25:34 np0005481065 systemd[1]: libpod-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope: Consumed 1.131s CPU time.
Oct 11 05:25:34 np0005481065 podman[399295]: 2025-10-11 09:25:34.225338885 +0000 UTC m=+1.341198560 container died 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 05:25:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660-merged.mount: Deactivated successfully.
Oct 11 05:25:34 np0005481065 podman[399295]: 2025-10-11 09:25:34.303522131 +0000 UTC m=+1.419381726 container remove 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:25:34 np0005481065 systemd[1]: libpod-conmon-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope: Deactivated successfully.
Oct 11 05:25:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:25:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:25:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:25:34 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:25:34 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9dfca160-2797-4030-bd8a-842fa083a2a2 does not exist
Oct 11 05:25:34 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 13a2331d-7b21-4c57-ad8b-67b533840d4c does not exist
Oct 11 05:25:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:34Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 05:25:34 np0005481065 nova_compute[260935]: 2025-10-11 09:25:34.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 05:25:34 np0005481065 nova_compute[260935]: 2025-10-11 09:25:34.992 2 DEBUG nova.compute.manager [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-deleted-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:34 np0005481065 nova_compute[260935]: 2025-10-11 09:25:34.993 2 INFO nova.compute.manager [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Neutron deleted interface f45a21da-34ce-448b-92cc-2639a191c755; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:25:34 np0005481065 nova_compute[260935]: 2025-10-11 09:25:34.993 2 DEBUG nova.network.neutron [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.030 2 DEBUG nova.network.neutron [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updated VIF entry in instance network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.031 2 DEBUG nova.network.neutron [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.101 2 DEBUG nova.compute.manager [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Detach interface failed, port_id=f45a21da-34ce-448b-92cc-2639a191c755, reason: Instance 85cf93a0-2068-4567-a399-b8d52e672913 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.138 2 DEBUG oslo_concurrency.lockutils [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:25:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:25:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.519 2 DEBUG nova.network.neutron [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.541 2 DEBUG nova.network.neutron [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updated VIF entry in instance network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.542 2 DEBUG nova.network.neutron [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.645 2 INFO nova.compute.manager [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 3.07 seconds to deallocate network for instance.#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.672 2 DEBUG oslo_concurrency.lockutils [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.699 2 DEBUG nova.compute.manager [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.700 2 DEBUG oslo_concurrency.lockutils [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.700 2 DEBUG oslo_concurrency.lockutils [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.701 2 DEBUG oslo_concurrency.lockutils [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.701 2 DEBUG nova.compute.manager [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No waiting events found dispatching network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.701 2 WARNING nova.compute.manager [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received unexpected event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.727 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.728 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:35 np0005481065 nova_compute[260935]: 2025-10-11 09:25:35.856 2 DEBUG oslo_concurrency.processutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:36Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 05:25:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:25:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142244451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:25:36 np0005481065 nova_compute[260935]: 2025-10-11 09:25:36.373 2 DEBUG oslo_concurrency.processutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:36 np0005481065 nova_compute[260935]: 2025-10-11 09:25:36.379 2 DEBUG nova.compute.provider_tree [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:25:36 np0005481065 nova_compute[260935]: 2025-10-11 09:25:36.419 2 DEBUG nova.scheduler.client.report [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:25:36 np0005481065 nova_compute[260935]: 2025-10-11 09:25:36.482 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:36 np0005481065 nova_compute[260935]: 2025-10-11 09:25:36.522 2 INFO nova.scheduler.client.report [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 85cf93a0-2068-4567-a399-b8d52e672913#033[00m
Oct 11 05:25:36 np0005481065 nova_compute[260935]: 2025-10-11 09:25:36.669 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 05:25:37 np0005481065 nova_compute[260935]: 2025-10-11 09:25:37.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:37 np0005481065 nova_compute[260935]: 2025-10-11 09:25:37.116 2 DEBUG nova.compute.manager [req-99392b72-825b-4faa-890d-c717a9d3c7c4 req-2ec63e0d-015b-40bf-8055-4e25f9640bf2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-deleted-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:37 np0005481065 nova_compute[260935]: 2025-10-11 09:25:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:37 np0005481065 nova_compute[260935]: 2025-10-11 09:25:37.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:38 np0005481065 nova_compute[260935]: 2025-10-11 09:25:38.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.005 2 DEBUG nova.compute.manager [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.005 2 DEBUG nova.compute.manager [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing instance network info cache due to event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.007 2 DEBUG oslo_concurrency.lockutils [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.007 2 DEBUG oslo_concurrency.lockutils [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.008 2 DEBUG nova.network.neutron [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.058 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.059 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.059 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.060 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.060 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.062 2 INFO nova.compute.manager [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Terminating instance#033[00m
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.065 2 DEBUG nova.compute.manager [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:25:39 np0005481065 kernel: tap119ff9b5-e3 (unregistering): left promiscuous mode
Oct 11 05:25:39 np0005481065 NetworkManager[44960]: <info>  [1760174739.1286] device (tap119ff9b5-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:25:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:39Z|01375|binding|INFO|Releasing lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc from this chassis (sb_readonly=0)
Oct 11 05:25:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:39Z|01376|binding|INFO|Setting lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc down in Southbound
Oct 11 05:25:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:39Z|01377|binding|INFO|Removing iface tap119ff9b5-e3 ovn-installed in OVS
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.153 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f4:1e 10.100.0.9'], port_security=['fa:16:3e:54:f4:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4670193-9ea3-45bc-9dbd-14d2e62f32f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1e4fc68-c314-4fe3-a3ef-a4986a78eba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b157abac-1a65-40f2-b3c7-9c7765cf8246, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=119ff9b5-e3af-4688-97a4-92b5f6a240dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.155 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 119ff9b5-e3af-4688-97a4-92b5f6a240dc in datapath 342daca7-3c5d-4bb3-bcdc-6abce7b4d414 unbound from our chassis
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.156 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 342daca7-3c5d-4bb3-bcdc-6abce7b4d414, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.158 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7ec4b0-2beb-4ad7-85a7-bde9fdb45957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.158 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 namespace which is not needed anymore
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:25:39 np0005481065 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 11 05:25:39 np0005481065 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007e.scope: Consumed 13.873s CPU time.
Oct 11 05:25:39 np0005481065 systemd-machined[215705]: Machine qemu-150-instance-0000007e terminated.
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.311 2 INFO nova.virt.libvirt.driver [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance destroyed successfully.
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.312 2 DEBUG nova.objects.instance [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid e4670193-9ea3-45bc-9dbd-14d2e62f32f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.329 2 DEBUG nova.virt.libvirt.vif [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1522106396',display_name='tempest-TestNetworkBasicOps-server-1522106396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1522106396',id=126,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJuvLde49Grl0xrW9XqlZWm2iapzj/ptvpeM0vUXkL9WjkO1Vts1ZqPNK0xa0RFKzwonFoVCMufaWV362ApeqPtUvwTGOGY43qYDgAKGh07RNbr2NQMmIQQ5Njnnyha+Q==',key_name='tempest-TestNetworkBasicOps-2127179421',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:25:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4ic01z0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:25:19Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=e4670193-9ea3-45bc-9dbd-14d2e62f32f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.329 2 DEBUG nova.network.os_vif_util [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.330 2 DEBUG nova.network.os_vif_util [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.330 2 DEBUG os_vif [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap119ff9b5-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.337 2 INFO os_vif [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3')
Oct 11 05:25:39 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : haproxy version is 2.8.14-c23fe91
Oct 11 05:25:39 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : path to executable is /usr/sbin/haproxy
Oct 11 05:25:39 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [WARNING]  (398247) : Exiting Master process...
Oct 11 05:25:39 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [ALERT]    (398247) : Current worker (398249) exited with code 143 (Terminated)
Oct 11 05:25:39 np0005481065 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [WARNING]  (398247) : All workers exited. Exiting... (0)
Oct 11 05:25:39 np0005481065 systemd[1]: libpod-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3.scope: Deactivated successfully.
Oct 11 05:25:39 np0005481065 podman[399453]: 2025-10-11 09:25:39.368473299 +0000 UTC m=+0.072835977 container died 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:25:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3-userdata-shm.mount: Deactivated successfully.
Oct 11 05:25:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-72cac3b459deeeef18dd1fe465a7ead282b0060817d48bf95435e7d2a56eb042-merged.mount: Deactivated successfully.
Oct 11 05:25:39 np0005481065 podman[399453]: 2025-10-11 09:25:39.421397902 +0000 UTC m=+0.125760580 container cleanup 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:25:39 np0005481065 systemd[1]: libpod-conmon-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3.scope: Deactivated successfully.
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.440 2 DEBUG nova.compute.manager [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-unplugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.441 2 DEBUG oslo_concurrency.lockutils [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.441 2 DEBUG oslo_concurrency.lockutils [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.441 2 DEBUG oslo_concurrency.lockutils [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.442 2 DEBUG nova.compute.manager [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] No waiting events found dispatching network-vif-unplugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.442 2 DEBUG nova.compute.manager [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-unplugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 05:25:39 np0005481065 podman[399512]: 2025-10-11 09:25:39.510208529 +0000 UTC m=+0.055735204 container remove 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.525 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d96b293-2fc7-4300-9937-b3849b5c0739]: (4, ('Sat Oct 11 09:25:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 (7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3)\n7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3\nSat Oct 11 09:25:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 (7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3)\n7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.529 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7b321d63-1c60-4e87-ac42-a93c7cdf4a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.530 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap342daca7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:25:39 np0005481065 kernel: tap342daca7-30: left promiscuous mode
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.568 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec137465-36f9-4838-977d-b00a71927eba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[425a82a7-1ebf-43e3-bb33-0ad4b9529b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.597 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2df1893e-8f0c-4106-8e28-6a4cc3a35b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a999327-3b6c-48a3-8808-6bb601a92d55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655704, 'reachable_time': 44664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399528, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 systemd[1]: run-netns-ovnmeta\x2d342daca7\x2d3c5d\x2d4bb3\x2dbcdc\x2d6abce7b4d414.mount: Deactivated successfully.
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.623 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 05:25:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.624 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2f6c32-d775-417f-8857-0f5386be8f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.808 2 INFO nova.virt.libvirt.driver [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deleting instance files /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_del
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.809 2 INFO nova.virt.libvirt.driver [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deletion of /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_del complete
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.858 2 INFO nova.compute.manager [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.859 2 DEBUG oslo.service.loopingcall [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.860 2 DEBUG nova.compute.manager [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 05:25:39 np0005481065 nova_compute[260935]: 2025-10-11 09:25:39.860 2 DEBUG nova.network.neutron [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 05:25:40 np0005481065 nova_compute[260935]: 2025-10-11 09:25:40.654 2 DEBUG nova.network.neutron [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updated VIF entry in instance network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:25:40 np0005481065 nova_compute[260935]: 2025-10-11 09:25:40.655 2 DEBUG nova.network.neutron [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:25:40 np0005481065 nova_compute[260935]: 2025-10-11 09:25:40.685 2 DEBUG oslo_concurrency.lockutils [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:25:40 np0005481065 nova_compute[260935]: 2025-10-11 09:25:40.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:25:40 np0005481065 nova_compute[260935]: 2025-10-11 09:25:40.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:25:40 np0005481065 nova_compute[260935]: 2025-10-11 09:25:40.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 05:25:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.266 2 DEBUG nova.network.neutron [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.297 2 INFO nova.compute.manager [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 1.44 seconds to deallocate network for instance.
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.334 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.335 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.448 2 DEBUG oslo_concurrency.processutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.537 2 DEBUG nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.538 2 DEBUG oslo_concurrency.lockutils [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.539 2 DEBUG oslo_concurrency.lockutils [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.540 2 DEBUG oslo_concurrency.lockutils [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.540 2 DEBUG nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] No waiting events found dispatching network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.541 2 WARNING nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received unexpected event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.541 2 DEBUG nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-deleted-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.726 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:25:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657812144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.943 2 DEBUG oslo_concurrency.processutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.952 2 DEBUG nova.compute.provider_tree [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.972 2 DEBUG nova.scheduler.client.report [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:25:41 np0005481065 nova_compute[260935]: 2025-10-11 09:25:41.997 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.002 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.003 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.003 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.004 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.097 2 INFO nova.scheduler.client.report [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance e4670193-9ea3-45bc-9dbd-14d2e62f32f1#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.165 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:25:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510749789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.469 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.628 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.628 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 20 KiB/s wr, 57 op/s
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.898 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.899 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2860MB free_disk=59.7851448059082GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.900 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:25:42 np0005481065 nova_compute[260935]: 2025-10-11 09:25:42.900 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.013 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.014 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.014 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.015 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.015 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.122 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:25:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:25:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715715982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.589 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.598 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.622 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.650 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:25:43 np0005481065 nova_compute[260935]: 2025-10-11 09:25:43.651 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:25:43 np0005481065 podman[399596]: 2025-10-11 09:25:43.788963178 +0000 UTC m=+0.088573320 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:25:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.627 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.628 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:25:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:44Z|01378|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:25:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:44Z|01379|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:44Z|01380|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:25:44 np0005481065 ovn_controller[152945]: 2025-10-11T09:25:44Z|01381|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:25:44 np0005481065 nova_compute[260935]: 2025-10-11 09:25:44.935 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:25:46 np0005481065 nova_compute[260935]: 2025-10-11 09:25:46.802 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174731.7999434, 85cf93a0-2068-4567-a399-b8d52e672913 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:25:46 np0005481065 nova_compute[260935]: 2025-10-11 09:25:46.802 2 INFO nova.compute.manager [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:25:46 np0005481065 nova_compute[260935]: 2025-10-11 09:25:46.821 2 DEBUG nova.compute.manager [None req-870de4f7-38e0-4e94-9920-cd1c49edd099 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 05:25:46 np0005481065 nova_compute[260935]: 2025-10-11 09:25:46.943 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:25:46 np0005481065 nova_compute[260935]: 2025-10-11 09:25:46.963 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:25:46 np0005481065 nova_compute[260935]: 2025-10-11 09:25:46.963 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:25:47 np0005481065 nova_compute[260935]: 2025-10-11 09:25:47.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:48 np0005481065 podman[399616]: 2025-10-11 09:25:48.782251361 +0000 UTC m=+0.084898007 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:25:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 05:25:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:49 np0005481065 nova_compute[260935]: 2025-10-11 09:25:49.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 11 05:25:52 np0005481065 nova_compute[260935]: 2025-10-11 09:25:52.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 11 05:25:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:53.392 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:25:53 np0005481065 nova_compute[260935]: 2025-10-11 09:25:53.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:53.394 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:25:53 np0005481065 podman[399638]: 2025-10-11 09:25:53.776411169 +0000 UTC m=+0.078703612 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 11 05:25:53 np0005481065 podman[399639]: 2025-10-11 09:25:53.858477935 +0000 UTC m=+0.153782411 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:25:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:54 np0005481065 nova_compute[260935]: 2025-10-11 09:25:54.310 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174739.3090696, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:25:54 np0005481065 nova_compute[260935]: 2025-10-11 09:25:54.310 2 INFO nova.compute.manager [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:25:54 np0005481065 nova_compute[260935]: 2025-10-11 09:25:54.339 2 DEBUG nova.compute.manager [None req-fa20a344-9e78-4177-b5fc-d244242a00f6 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:25:54 np0005481065 nova_compute[260935]: 2025-10-11 09:25:54.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:25:54
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups']
Oct 11 05:25:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:25:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:25:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:25:56.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:25:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:25:57 np0005481065 nova_compute[260935]: 2025-10-11 09:25:57.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:25:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:25:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:25:59 np0005481065 nova_compute[260935]: 2025-10-11 09:25:59.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:26:02 np0005481065 nova_compute[260935]: 2025-10-11 09:26:02.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:26:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:04 np0005481065 nova_compute[260935]: 2025-10-11 09:26:04.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:26:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:26:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:26:07 np0005481065 nova_compute[260935]: 2025-10-11 09:26:07.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:26:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:09 np0005481065 nova_compute[260935]: 2025-10-11 09:26:09.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.391 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.391 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.411 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.495 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.496 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.507 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.507 2 INFO nova.compute.claims [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:26:10 np0005481065 nova_compute[260935]: 2025-10-11 09:26:10.663 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:26:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:26:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336150054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.200 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.208 2 DEBUG nova.compute.provider_tree [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.232 2 DEBUG nova.scheduler.client.report [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.260 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.261 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.331 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.331 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.366 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.397 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.512 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.515 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.516 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Creating image(s)#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.553 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.598 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.629 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.633 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.699 2 DEBUG nova.policy [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.747 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.748 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.748 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.749 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.774 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:26:11 np0005481065 nova_compute[260935]: 2025-10-11 09:26:11.779 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 36d0acda-9f37-4308-aa46-973f11c57b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.170 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 36d0acda-9f37-4308-aa46-973f11c57b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.250 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.340 2 DEBUG nova.objects.instance [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 36d0acda-9f37-4308-aa46-973f11c57b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.355 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.355 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Ensure instance console log exists: /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.355 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.356 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.356 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.718 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Successfully created port: 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:26:12 np0005481065 nova_compute[260935]: 2025-10-11 09:26:12.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.475 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Successfully updated port: 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.510 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.511 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.511 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.601 2 DEBUG nova.compute.manager [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.602 2 DEBUG nova.compute.manager [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.603 2 DEBUG oslo_concurrency.lockutils [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.696 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.851 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.851 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:13 np0005481065 nova_compute[260935]: 2025-10-11 09:26:13.957 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.159 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.160 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.170 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.170 2 INFO nova.compute.claims [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.542 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.656 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.751 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.752 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance network_info: |[{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.752 2 DEBUG oslo_concurrency.lockutils [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.752 2 DEBUG nova.network.neutron [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.756 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start _get_guest_xml network_info=[{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.761 2 WARNING nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.766 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.767 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.770 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.770 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.771 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.771 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.771 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:26:14 np0005481065 podman[399874]: 2025-10-11 09:26:14.776567793 +0000 UTC m=+0.067981280 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 05:26:14 np0005481065 nova_compute[260935]: 2025-10-11 09:26:14.778 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:26:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 05:26:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:26:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604605797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.029 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.037 2 DEBUG nova.compute.provider_tree [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.093 2 DEBUG nova.scheduler.client.report [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:26:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:15.223 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.313 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:26:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828184593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.314 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.331 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.371 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.377 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.477 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.478 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.586 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.645 2 DEBUG nova.policy [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.696 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.868 2 DEBUG nova.network.neutron [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.868 2 DEBUG nova.network.neutron [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:26:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695730209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.897 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.899 2 DEBUG nova.virt.libvirt.vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1410731753',display_name='tempest-TestNetworkBasicOps-server-1410731753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1410731753',id=127,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfFDuZ2VUr+6EowKBtrZDd7zud1Oa+cp6ZA/ixez4vTqy3B2Qz2dWoCxMYTkI6OHKrvBP/PVMobSzlTBVFJQHby9DrXveKkH7hKU36MweJuxInYFJMR8tgPbvonZEpvAg==',key_name='tempest-TestNetworkBasicOps-714435839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4f9xvxhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:11Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=36d0acda-9f37-4308-aa46-973f11c57b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.899 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.900 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.901 2 DEBUG nova.objects.instance [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36d0acda-9f37-4308-aa46-973f11c57b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.976 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <uuid>36d0acda-9f37-4308-aa46-973f11c57b0e</uuid>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <name>instance-0000007f</name>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-1410731753</nova:name>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:26:14</nova:creationTime>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <nova:port uuid="36e1391f-eaf8-490f-8434-c3fb25eed0a4">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <entry name="serial">36d0acda-9f37-4308-aa46-973f11c57b0e</entry>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <entry name="uuid">36d0acda-9f37-4308-aa46-973f11c57b0e</entry>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/36d0acda-9f37-4308-aa46-973f11c57b0e_disk">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:da:66:34"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <target dev="tap36e1391f-ea"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/console.log" append="off"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:26:15 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:26:15 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:26:15 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:26:15 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.977 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Preparing to wait for external event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.977 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.978 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.978 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.979 2 DEBUG nova.virt.libvirt.vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1410731753',display_name='tempest-TestNetworkBasicOps-server-1410731753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1410731753',id=127,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfFDuZ2VUr+6EowKBtrZDd7zud1Oa+cp6ZA/ixez4vTqy3B2Qz2dWoCxMYTkI6OHKrvBP/PVMobSzlTBVFJQHby9DrXveKkH7hKU36MweJuxInYFJMR8tgPbvonZEpvAg==',key_name='tempest-TestNetworkBasicOps-714435839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4f9xvxhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:11Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=36d0acda-9f37-4308-aa46-973f11c57b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.979 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.980 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.980 2 DEBUG os_vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.984 2 DEBUG oslo_concurrency.lockutils [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36e1391f-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:15 np0005481065 nova_compute[260935]: 2025-10-11 09:26:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36e1391f-ea, col_values=(('external_ids', {'iface-id': '36e1391f-eaf8-490f-8434-c3fb25eed0a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:66:34', 'vm-uuid': '36d0acda-9f37-4308-aa46-973f11c57b0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.004 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.005 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.005 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Creating image(s)#033[00m
Oct 11 05:26:16 np0005481065 NetworkManager[44960]: <info>  [1760174776.0229] manager: (tap36e1391f-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.038 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.093 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.130 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.142 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.204 2 INFO os_vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea')#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.251 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.251 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.252 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.253 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.282 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.287 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.600 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.601 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.602 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:da:66:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.604 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Using config drive#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.648 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:16 np0005481065 nova_compute[260935]: 2025-10-11 09:26:16.848 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully created port: 0f8506e7-c03f-48bd-938d-f3b4cea2675b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:26:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.073 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.170 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.287 2 DEBUG nova.objects.instance [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid b39b8161-8a46-46fe-8a2a-0fc6b4eab850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.419 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.419 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Ensure instance console log exists: /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.420 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:17 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.420 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:17 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.421 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.598 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Creating config drive at /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.603 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddx6wl9p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.657 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully created port: a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.770 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddx6wl9p" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.813 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.818 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:17 np0005481065 nova_compute[260935]: 2025-10-11 09:26:17.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.115 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.116 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deleting local config drive /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config because it was imported into RBD.#033[00m
Oct 11 05:26:18 np0005481065 kernel: tap36e1391f-ea: entered promiscuous mode
Oct 11 05:26:18 np0005481065 NetworkManager[44960]: <info>  [1760174778.2079] manager: (tap36e1391f-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01382|binding|INFO|Claiming lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 for this chassis.
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01383|binding|INFO|36e1391f-eaf8-490f-8434-c3fb25eed0a4: Claiming fa:16:3e:da:66:34 10.100.0.11
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.225 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:66:34 10.100.0.11'], port_security=['fa:16:3e:da:66:34 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '36d0acda-9f37-4308-aa46-973f11c57b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a0735faf-4c5a-437a-8d73-2ecca218ad1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=36e1391f-eaf8-490f-8434-c3fb25eed0a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.228 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 bound to our chassis#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.232 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd244b6-d1ee-488a-a00f-9db0baa639e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.254 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9f9ae84-91 in ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.256 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9f9ae84-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3da0435-ec86-418a-b4b1-a16648d0a02d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 systemd-udevd[400219]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.258 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac21390-b012-4953-8441-aa1dbe6dffee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 systemd-machined[215705]: New machine qemu-151-instance-0000007f.
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.281 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac529fa-e420-4fc0-952b-d18f3b19a99b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 NetworkManager[44960]: <info>  [1760174778.2862] device (tap36e1391f-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:26:18 np0005481065 NetworkManager[44960]: <info>  [1760174778.2874] device (tap36e1391f-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:26:18 np0005481065 systemd[1]: Started Virtual Machine qemu-151-instance-0000007f.
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01384|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01385|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.314 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cf23d405-adac-41fd-8789-fd0e359c7499]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01386|binding|INFO|Setting lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 ovn-installed in OVS
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01387|binding|INFO|Setting lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 up in Southbound
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.359 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c57a4cea-7914-418d-a203-8c4073fefff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4780d4d-ab6a-4ecf-b44b-b336473a5974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 NetworkManager[44960]: <info>  [1760174778.3691] manager: (tapb9f9ae84-90): new Veth device (/org/freedesktop/NetworkManager/Devices/541)
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.423 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[711a920b-cc48-43ed-9a2b-8926beb39783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.428 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e78169a9-9abe-4242-a2f5-4e7e75b081d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 NetworkManager[44960]: <info>  [1760174778.4650] device (tapb9f9ae84-90): carrier: link connected
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.474 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf6db54-028b-4356-99a6-36595b92a947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.505 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2583178-27a8-4425-9db9-475f0bfe743e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400252, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.521 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully updated port: 0f8506e7-c03f-48bd-938d-f3b4cea2675b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.532 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30b15c68-8067-4849-93f1-46f31e638349]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:ad53'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662316, 'tstamp': 662316}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400253, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21321d27-60c9-4c33-963c-5e4273d13c88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400254, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.618 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11cdbdba-b3e3-48f3-b04e-cbdd917f89b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.675 2 DEBUG nova.compute.manager [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.675 2 DEBUG nova.compute.manager [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.675 2 DEBUG oslo_concurrency.lockutils [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.676 2 DEBUG oslo_concurrency.lockutils [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.676 2 DEBUG nova.network.neutron [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.713 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c0aa0-ddfc-4e4d-8c75-b966b5748bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.714 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.715 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.715 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9ae84-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:18 np0005481065 kernel: tapb9f9ae84-90: entered promiscuous mode
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 NetworkManager[44960]: <info>  [1760174778.7200] manager: (tapb9f9ae84-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.725 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9f9ae84-90, col_values=(('external_ids', {'iface-id': '829ba2ca-e21f-4927-8525-5f43e59d37f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:18Z|01388|binding|INFO|Releasing lport 829ba2ca-e21f-4927-8525-5f43e59d37f8 from this chassis (sb_readonly=0)
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.758 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.759 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b489582-7c93-4dda-8f27-776d406e1c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.761 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.pid.haproxy
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:26:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.761 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'env', 'PROCESS_TAG=haproxy-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.857 2 DEBUG nova.compute.manager [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.857 2 DEBUG oslo_concurrency.lockutils [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.858 2 DEBUG oslo_concurrency.lockutils [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.858 2 DEBUG oslo_concurrency.lockutils [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.858 2 DEBUG nova.compute.manager [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Processing event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:26:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 05:26:18 np0005481065 nova_compute[260935]: 2025-10-11 09:26:18.904 2 DEBUG nova.network.neutron [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:26:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.247 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174779.2463243, 36d0acda-9f37-4308-aa46-973f11c57b0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.249 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Started (Lifecycle Event)#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.253 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:26:19 np0005481065 podman[400329]: 2025-10-11 09:26:19.257863118 +0000 UTC m=+0.065276513 container create 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.258 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.274 2 INFO nova.virt.libvirt.driver [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance spawned successfully.#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.275 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.289 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.295 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:26:19 np0005481065 systemd[1]: Started libpod-conmon-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5.scope.
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.313 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.314 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.316 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:19 np0005481065 podman[400329]: 2025-10-11 09:26:19.220750841 +0000 UTC m=+0.028164226 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.317 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.318 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.319 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.328 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174779.248165, 36d0acda-9f37-4308-aa46-973f11c57b0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.328 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:26:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b93609cfe788969adf943db69f3753cdeb411820f1a1f8aaf68d3557d2f4c37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.364 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.370 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174779.2563791, 36d0acda-9f37-4308-aa46-973f11c57b0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.370 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:26:19 np0005481065 podman[400329]: 2025-10-11 09:26:19.378698658 +0000 UTC m=+0.186112073 container init 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:26:19 np0005481065 podman[400329]: 2025-10-11 09:26:19.386608042 +0000 UTC m=+0.194021437 container start 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.388 2 INFO nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 7.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.389 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.393 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:19 np0005481065 podman[400342]: 2025-10-11 09:26:19.394377071 +0000 UTC m=+0.091306168 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.408 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:26:19 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : New worker (400370) forked
Oct 11 05:26:19 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : Loading success.
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.443 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.466 2 INFO nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 9.00 seconds to build instance.#033[00m
Oct 11 05:26:19 np0005481065 nova_compute[260935]: 2025-10-11 09:26:19.482 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.261 2 DEBUG nova.network.neutron [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.335 2 DEBUG oslo_concurrency.lockutils [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.504 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully updated port: a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.592 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.593 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.593 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.756 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:26:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.914 2 DEBUG nova.compute.manager [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.914 2 DEBUG nova.compute.manager [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:26:20 np0005481065 nova_compute[260935]: 2025-10-11 09:26:20.915 2 DEBUG oslo_concurrency.lockutils [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.004 2 DEBUG nova.compute.manager [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.004 2 DEBUG oslo_concurrency.lockutils [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.005 2 DEBUG oslo_concurrency.lockutils [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.005 2 DEBUG oslo_concurrency.lockutils [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.006 2 DEBUG nova.compute.manager [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.006 2 WARNING nova.compute.manager [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:21 np0005481065 nova_compute[260935]: 2025-10-11 09:26:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:22 np0005481065 nova_compute[260935]: 2025-10-11 09:26:22.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 11 05:26:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:24 np0005481065 podman[400379]: 2025-10-11 09:26:24.783565777 +0000 UTC m=+0.081143041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:26:24 np0005481065 podman[400380]: 2025-10-11 09:26:24.845748252 +0000 UTC m=+0.140581159 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:26:24 np0005481065 nova_compute[260935]: 2025-10-11 09:26:24.891 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.245 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.246 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance network_info: |[{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.246 2 DEBUG oslo_concurrency.lockutils [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.247 2 DEBUG nova.network.neutron [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.251 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start _get_guest_xml network_info=[{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.255 2 WARNING nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.261 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.262 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.266 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.267 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.267 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.268 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.269 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.269 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.270 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.270 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.271 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.271 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.271 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.272 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.272 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.272 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.276 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:26:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388407615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.733 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.762 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:25 np0005481065 nova_compute[260935]: 2025-10-11 09:26:25.766 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:26:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168525353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.208 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.210 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.211 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.212 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.214 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.214 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.215 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.217 2 DEBUG nova.objects.instance [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid b39b8161-8a46-46fe-8a2a-0fc6b4eab850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.259 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <uuid>b39b8161-8a46-46fe-8a2a-0fc6b4eab850</uuid>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <name>instance-00000080</name>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1892429863</nova:name>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:26:25</nova:creationTime>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:port uuid="0f8506e7-c03f-48bd-938d-f3b4cea2675b">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <nova:port uuid="a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4d:e936" ipVersion="6"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <entry name="serial">b39b8161-8a46-46fe-8a2a-0fc6b4eab850</entry>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <entry name="uuid">b39b8161-8a46-46fe-8a2a-0fc6b4eab850</entry>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:f4:67:d9"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <target dev="tap0f8506e7-c0"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:4d:e9:36"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <target dev="tapa0aadf8d-3a"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/console.log" append="off"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:26:26 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:26:26 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:26:26 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:26:26 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.259 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Preparing to wait for external event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.260 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.260 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.261 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.261 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Preparing to wait for external event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.261 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.262 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.262 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.263 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.263 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.264 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.265 2 DEBUG os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f8506e7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f8506e7-c0, col_values=(('external_ids', {'iface-id': '0f8506e7-c03f-48bd-938d-f3b4cea2675b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:67:d9', 'vm-uuid': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:26 np0005481065 NetworkManager[44960]: <info>  [1760174786.2757] manager: (tap0f8506e7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.287 2 INFO os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0')#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.288 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.289 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.290 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.290 2 DEBUG os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0aadf8d-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0aadf8d-3a, col_values=(('external_ids', {'iface-id': 'a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:e9:36', 'vm-uuid': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:26 np0005481065 NetworkManager[44960]: <info>  [1760174786.2973] manager: (tapa0aadf8d-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.310 2 INFO os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a')#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.655 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.655 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.656 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:f4:67:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.656 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:4d:e9:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.657 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Using config drive#033[00m
Oct 11 05:26:26 np0005481065 nova_compute[260935]: 2025-10-11 09:26:26.688 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:26:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309278539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:26:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:26:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309278539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:26:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.246 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Creating config drive at /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.250 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv087eoef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:27 np0005481065 NetworkManager[44960]: <info>  [1760174787.3228] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Oct 11 05:26:27 np0005481065 NetworkManager[44960]: <info>  [1760174787.3241] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.397 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv087eoef" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.440 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.452 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:27Z|01389|binding|INFO|Releasing lport 829ba2ca-e21f-4927-8525-5f43e59d37f8 from this chassis (sb_readonly=0)
Oct 11 05:26:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:27Z|01390|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:26:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:27Z|01391|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.797 2 DEBUG nova.network.neutron [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updated VIF entry in instance network info cache for port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.798 2 DEBUG nova.network.neutron [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:27 np0005481065 nova_compute[260935]: 2025-10-11 09:26:27.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:28 np0005481065 nova_compute[260935]: 2025-10-11 09:26:28.017 2 DEBUG oslo_concurrency.lockutils [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:28 np0005481065 nova_compute[260935]: 2025-10-11 09:26:28.233 2 DEBUG nova.compute.manager [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:28 np0005481065 nova_compute[260935]: 2025-10-11 09:26:28.233 2 DEBUG nova.compute.manager [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:26:28 np0005481065 nova_compute[260935]: 2025-10-11 09:26:28.234 2 DEBUG oslo_concurrency.lockutils [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:28 np0005481065 nova_compute[260935]: 2025-10-11 09:26:28.234 2 DEBUG oslo_concurrency.lockutils [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:28 np0005481065 nova_compute[260935]: 2025-10-11 09:26:28.234 2 DEBUG nova.network.neutron [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:26:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.104 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.105 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deleting local config drive /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config because it was imported into RBD.#033[00m
Oct 11 05:26:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:29 np0005481065 kernel: tap0f8506e7-c0: entered promiscuous mode
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.1856] manager: (tap0f8506e7-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01392|binding|INFO|Claiming lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b for this chassis.
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01393|binding|INFO|0f8506e7-c03f-48bd-938d-f3b4cea2675b: Claiming fa:16:3e:f4:67:d9 10.100.0.4
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.2114] manager: (tapa0aadf8d-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Oct 11 05:26:29 np0005481065 kernel: tapa0aadf8d-3a: entered promiscuous mode
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01394|binding|INFO|Setting lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b ovn-installed in OVS
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01395|if_status|INFO|Not updating pb chassis for a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 now as sb is readonly
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.226 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:67:d9 10.100.0.4'], port_security=['fa:16:3e:f4:67:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f8506e7-c03f-48bd-938d-f3b4cea2675b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01396|binding|INFO|Claiming lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 for this chassis.
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01397|binding|INFO|a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088: Claiming fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01398|binding|INFO|Setting lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b up in Southbound
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.227 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f8506e7-c03f-48bd-938d-f3b4cea2675b in datapath 024b1f88-7312-4f05-a55e-4c82e878906e bound to our chassis#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.229 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 024b1f88-7312-4f05-a55e-4c82e878906e#033[00m
Oct 11 05:26:29 np0005481065 systemd-udevd[400570]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:26:29 np0005481065 systemd-udevd[400571]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01399|binding|INFO|Setting lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 ovn-installed in OVS
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f442b89e-16c9-4de9-99d5-e3f56a6987e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.2516] device (tap0f8506e7-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01400|binding|INFO|Setting lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 up in Southbound
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.2525] device (tap0f8506e7-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.251 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], port_security=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4d:e936/64', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.252 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap024b1f88-71 in ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.2553] device (tapa0aadf8d-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.255 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap024b1f88-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef96c4f-0208-4f18-b8cc-a2b8ef634899]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.2561] device (tapa0aadf8d-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92bad109-a86a-4bf4-ac44-13ec4505501d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 systemd-machined[215705]: New machine qemu-152-instance-00000080.
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.275 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6919be81-57c9-4e13-b98e-6cc80a964b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 systemd[1]: Started Virtual Machine qemu-152-instance-00000080.
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.294 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c9d063-7b9b-4e67-99a3-cf07cb8af700]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.330 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d62ac21a-0ecc-4882-8f36-409bbc7871f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2557dc-9452-423d-86dc-827155e86531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.3404] manager: (tap024b1f88-70): new Veth device (/org/freedesktop/NetworkManager/Devices/549)
Oct 11 05:26:29 np0005481065 systemd-udevd[400577]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.382 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee992c0-d133-41d2-8627-890e59968eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.389 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b181a66-343b-4a1a-8d9a-3f3f99bcc9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.4231] device (tap024b1f88-70): carrier: link connected
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.430 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0a309948-b1ef-49ec-bb63-f93b62c89153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.462 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed84848-1ee8-4b4d-a5d7-6c5018878656]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400607, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.479 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cacb4dc6-f60b-4654-8081-2bb70836b447]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:473b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663412, 'tstamp': 663412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400608, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[373d8c5f-f8d7-4354-9f06-013545662e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400609, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f5b7d3-ea0e-415f-9f92-c8407af28a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.673 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[703f6e5a-abd3-4572-bfa4-7ae85452d9a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.674 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.675 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.675 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024b1f88-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 kernel: tap024b1f88-70: entered promiscuous mode
Oct 11 05:26:29 np0005481065 NetworkManager[44960]: <info>  [1760174789.6792] manager: (tap024b1f88-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.683 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap024b1f88-70, col_values=(('external_ids', {'iface-id': '219b454c-6c32-4a38-b10c-afcdb1be1980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:29Z|01401|binding|INFO|Releasing lport 219b454c-6c32-4a38-b10c-afcdb1be1980 from this chassis (sb_readonly=0)
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.717 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/024b1f88-7312-4f05-a55e-4c82e878906e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/024b1f88-7312-4f05-a55e-4c82e878906e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.720 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28117820-ed7d-44f1-bc4e-456ac999cdb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.721 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/024b1f88-7312-4f05-a55e-4c82e878906e.pid.haproxy
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:26:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.722 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'env', 'PROCESS_TAG=haproxy-024b1f88-7312-4f05-a55e-4c82e878906e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/024b1f88-7312-4f05-a55e-4c82e878906e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.771 2 DEBUG nova.compute.manager [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.771 2 DEBUG oslo_concurrency.lockutils [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.772 2 DEBUG oslo_concurrency.lockutils [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.772 2 DEBUG oslo_concurrency.lockutils [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:29 np0005481065 nova_compute[260935]: 2025-10-11 09:26:29.773 2 DEBUG nova.compute.manager [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Processing event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:26:30 np0005481065 podman[400660]: 2025-10-11 09:26:30.205016922 +0000 UTC m=+0.054266312 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.358 2 DEBUG nova.network.neutron [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.359 2 DEBUG nova.network.neutron [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.389 2 DEBUG nova.compute.manager [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.389 2 DEBUG oslo_concurrency.lockutils [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.390 2 DEBUG oslo_concurrency.lockutils [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.390 2 DEBUG oslo_concurrency.lockutils [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.391 2 DEBUG nova.compute.manager [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Processing event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:26:30 np0005481065 nova_compute[260935]: 2025-10-11 09:26:30.511 2 DEBUG oslo_concurrency.lockutils [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:30 np0005481065 podman[400660]: 2025-10-11 09:26:30.85589084 +0000 UTC m=+0.705140150 container create ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:26:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:26:30 np0005481065 systemd[1]: Started libpod-conmon-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5.scope.
Oct 11 05:26:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:30 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7364967b58c22da1966d5d1241e71eee425dfe401e15845ca723c5e08b53f77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:31 np0005481065 podman[400660]: 2025-10-11 09:26:31.119772037 +0000 UTC m=+0.969021447 container init ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:26:31 np0005481065 podman[400660]: 2025-10-11 09:26:31.131142208 +0000 UTC m=+0.980391558 container start ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:26:31 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : New worker (400704) forked
Oct 11 05:26:31 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : Loading success.
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.459 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.464 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894decad-3bed-4c55-b643-5fbe5479bf3f#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.465 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.467 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174791.4650497, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.470 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Started (Lifecycle Event)#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.474 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.480 2 INFO nova.virt.libvirt.driver [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance spawned successfully.#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.481 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.486 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71d5c0c1-b1c7-4fbe-a519-68f94482dbcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.488 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap894decad-31 in ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.491 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap894decad-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.491 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fd2655-f498-4ffc-9d79-13082b063a29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[550ce776-fb43-4ba0-83c2-02ffd6cf6a01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.514 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6e212e56-e682-40c1-a729-ccf0c1a27095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.516 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.548 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce289530-7407-4607-9dd4-96344740f059]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.599 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.600 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.601 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.602 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.603 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.604 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.603 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[29ce435b-75ac-45b0-b529-eb5efd127dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c9296b0b-0031-4eda-9ec3-0ad35e267fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 NetworkManager[44960]: <info>  [1760174791.6190] manager: (tap894decad-30): new Veth device (/org/freedesktop/NetworkManager/Devices/551)
Oct 11 05:26:31 np0005481065 systemd-udevd[400603]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.673 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[74d719e8-7a5b-44f3-abd4-b62149bf1906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.677 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf33924-b961-45c4-b9ba-be9e0a13536e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 NetworkManager[44960]: <info>  [1760174791.7211] device (tap894decad-30): carrier: link connected
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.735 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f527dff-6640-4382-9c71-ff789291953a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.760 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174791.4651244, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.762 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.761 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[109275fb-7840-4111-bfc8-fb62fd09c2fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400725, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.781 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9b3464-de9b-43c2-bb87-9c5b04dd6aab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:b096'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663642, 'tstamp': 663642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400726, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.807 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afadd8fd-8d5d-440c-856a-b80d39b4eab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400727, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.848 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a744aaa6-d5b6-4a5c-829e-2e244d3ce681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[552120d5-1522-45ce-b411-7b44b8e7a2e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.893 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.896 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894decad-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.899 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174791.4705718, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:31 np0005481065 kernel: tap894decad-30: entered promiscuous mode
Oct 11 05:26:31 np0005481065 NetworkManager[44960]: <info>  [1760174791.9006] manager: (tap894decad-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/552)
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.901 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.907 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894decad-30, col_values=(('external_ids', {'iface-id': '48cfaf23-d7a3-467e-89cb-3eb3687ea15e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:31Z|01402|binding|INFO|Releasing lport 48cfaf23-d7a3-467e-89cb-3eb3687ea15e from this chassis (sb_readonly=0)
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.914 2 DEBUG nova.compute.manager [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.914 2 DEBUG oslo_concurrency.lockutils [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.915 2 DEBUG oslo_concurrency.lockutils [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.916 2 DEBUG oslo_concurrency.lockutils [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.917 2 DEBUG nova.compute.manager [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.918 2 WARNING nova.compute.manager [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.928 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/894decad-3bed-4c55-b643-5fbe5479bf3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/894decad-3bed-4c55-b643-5fbe5479bf3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f25bfda7-5866-4c06-9a4f-d7517c6d760d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.930 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/894decad-3bed-4c55-b643-5fbe5479bf3f.pid.haproxy
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:26:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.931 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'env', 'PROCESS_TAG=haproxy-894decad-3bed-4c55-b643-5fbe5479bf3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/894decad-3bed-4c55-b643-5fbe5479bf3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.940 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.945 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.958 2 INFO nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 15.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:26:31 np0005481065 nova_compute[260935]: 2025-10-11 09:26:31.959 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.149 2 INFO nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 18.01 seconds to build instance.#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.269 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:32 np0005481065 podman[400757]: 2025-10-11 09:26:32.459329259 +0000 UTC m=+0.074892785 container create 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:26:32 np0005481065 podman[400757]: 2025-10-11 09:26:32.427664185 +0000 UTC m=+0.043227721 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:26:32 np0005481065 systemd[1]: Started libpod-conmon-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope.
Oct 11 05:26:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc0d2ca647c901f7f4d6b1ca7ee3a7b693ee57f6d1e527af9b3cd693721335a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:32 np0005481065 podman[400757]: 2025-10-11 09:26:32.562167141 +0000 UTC m=+0.177730657 container init 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:26:32 np0005481065 podman[400757]: 2025-10-11 09:26:32.56920005 +0000 UTC m=+0.184763546 container start 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:26:32 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : New worker (400779) forked
Oct 11 05:26:32 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : Loading success.
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.615 2 DEBUG nova.compute.manager [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.616 2 DEBUG oslo_concurrency.lockutils [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.616 2 DEBUG oslo_concurrency.lockutils [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.617 2 DEBUG oslo_concurrency.lockutils [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.617 2 DEBUG nova.compute.manager [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.618 2 WARNING nova.compute.manager [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:26:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Oct 11 05:26:32 np0005481065 nova_compute[260935]: 2025-10-11 09:26:32.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:33Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:66:34 10.100.0.11
Oct 11 05:26:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:33Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:66:34 10.100.0.11
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.177978) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794178274, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1203, "num_deletes": 250, "total_data_size": 1770298, "memory_usage": 1798424, "flush_reason": "Manual Compaction"}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794187867, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1043477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52353, "largest_seqno": 53555, "table_properties": {"data_size": 1039058, "index_size": 1879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11721, "raw_average_key_size": 20, "raw_value_size": 1029524, "raw_average_value_size": 1822, "num_data_blocks": 85, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174675, "oldest_key_time": 1760174675, "file_creation_time": 1760174794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 9931 microseconds, and 4818 cpu microseconds.
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.187912) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1043477 bytes OK
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.187932) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.191172) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.191189) EVENT_LOG_v1 {"time_micros": 1760174794191183, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.191210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1764819, prev total WAL file size 1764819, number of live WAL files 2.
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.192227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303034' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1019KB)], [122(10219KB)]
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794192279, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11508148, "oldest_snapshot_seqno": -1}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7289 keys, 8916464 bytes, temperature: kUnknown
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794265572, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8916464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8870098, "index_size": 27054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 189934, "raw_average_key_size": 26, "raw_value_size": 8741941, "raw_average_value_size": 1199, "num_data_blocks": 1056, "num_entries": 7289, "num_filter_entries": 7289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.265858) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8916464 bytes
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.267256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.8 rd, 121.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(19.6) write-amplify(8.5) OK, records in: 7747, records dropped: 458 output_compression: NoCompression
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.267276) EVENT_LOG_v1 {"time_micros": 1760174794267267, "job": 74, "event": "compaction_finished", "compaction_time_micros": 73379, "compaction_time_cpu_micros": 41178, "output_level": 6, "num_output_files": 1, "total_output_size": 8916464, "num_input_records": 7747, "num_output_records": 7289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794267804, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794270370, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.192122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:26:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:26:34 np0005481065 nova_compute[260935]: 2025-10-11 09:26:34.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 11 05:26:36 np0005481065 podman[400959]: 2025-10-11 09:26:36.000828902 +0000 UTC m=+0.534422403 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:26:36 np0005481065 nova_compute[260935]: 2025-10-11 09:26:36.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:36 np0005481065 podman[400959]: 2025-10-11 09:26:36.334303003 +0000 UTC m=+0.867896474 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:26:36 np0005481065 nova_compute[260935]: 2025-10-11 09:26:36.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 11 05:26:37 np0005481065 nova_compute[260935]: 2025-10-11 09:26:37.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:26:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:26:37 np0005481065 nova_compute[260935]: 2025-10-11 09:26:37.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:38 np0005481065 nova_compute[260935]: 2025-10-11 09:26:38.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:38 np0005481065 nova_compute[260935]: 2025-10-11 09:26:38.892 2 DEBUG nova.compute.manager [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:38 np0005481065 nova_compute[260935]: 2025-10-11 09:26:38.893 2 DEBUG nova.compute.manager [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:26:38 np0005481065 nova_compute[260935]: 2025-10-11 09:26:38.894 2 DEBUG oslo_concurrency.lockutils [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:38 np0005481065 nova_compute[260935]: 2025-10-11 09:26:38.895 2 DEBUG oslo_concurrency.lockutils [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:38 np0005481065 nova_compute[260935]: 2025-10-11 09:26:38.895 2 DEBUG nova.network.neutron [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:26:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.015 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.017 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.119 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 234dd9b1-913f-4e87-90d5-5e71cebc31e0 does not exist
Oct 11 05:26:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 97e808bd-4736-4c8f-abbc-9e1e1a5d031e does not exist
Oct 11 05:26:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 47f0b9c4-a202-4094-b1b1-3f7f5fe21e3a does not exist
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.385 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.386 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.402 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.403 2 INFO nova.compute.claims [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:26:39 np0005481065 nova_compute[260935]: 2025-10-11 09:26:39.761 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:26:40 np0005481065 podman[401401]: 2025-10-11 09:26:39.95884503 +0000 UTC m=+0.036137481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:26:40 np0005481065 podman[401401]: 2025-10-11 09:26:40.150037695 +0000 UTC m=+0.227330126 container create fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:26:40 np0005481065 systemd[1]: Started libpod-conmon-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope.
Oct 11 05:26:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:40 np0005481065 podman[401401]: 2025-10-11 09:26:40.473214116 +0000 UTC m=+0.550506577 container init fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:26:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:26:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425161141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:26:40 np0005481065 podman[401401]: 2025-10-11 09:26:40.487769996 +0000 UTC m=+0.565062457 container start fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:26:40 np0005481065 thirsty_yonath[401426]: 167 167
Oct 11 05:26:40 np0005481065 systemd[1]: libpod-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope: Deactivated successfully.
Oct 11 05:26:40 np0005481065 conmon[401426]: conmon fc94fe6945d6264270ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope/container/memory.events
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.513 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.526 2 DEBUG nova.compute.provider_tree [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.548 2 DEBUG nova.scheduler.client.report [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.587 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.588 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:26:40 np0005481065 podman[401401]: 2025-10-11 09:26:40.61333627 +0000 UTC m=+0.690628691 container attach fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 05:26:40 np0005481065 podman[401401]: 2025-10-11 09:26:40.615508061 +0000 UTC m=+0.692800522 container died fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.664 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.666 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.701 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.732 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:26:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7625e06170edad45e849fee0f32493380d20861a3e2f663d66f2437e02ea3b4c-merged.mount: Deactivated successfully.
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.851 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.852 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.853 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Creating image(s)#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.879 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.927 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.971 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:40 np0005481065 nova_compute[260935]: 2025-10-11 09:26:40.976 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.050 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.051 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.052 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.052 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:41 np0005481065 podman[401401]: 2025-10-11 09:26:41.053961335 +0000 UTC m=+1.131253806 container remove fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.093 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.103 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:41 np0005481065 systemd[1]: libpod-conmon-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope: Deactivated successfully.
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:41 np0005481065 podman[401543]: 2025-10-11 09:26:41.329875261 +0000 UTC m=+0.060891659 container create e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.360 2 DEBUG nova.policy [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:26:41 np0005481065 systemd[1]: Started libpod-conmon-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope.
Oct 11 05:26:41 np0005481065 podman[401543]: 2025-10-11 09:26:41.297192359 +0000 UTC m=+0.028208767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:26:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:41 np0005481065 podman[401543]: 2025-10-11 09:26:41.531349737 +0000 UTC m=+0.262366165 container init e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:26:41 np0005481065 podman[401543]: 2025-10-11 09:26:41.540062013 +0000 UTC m=+0.271078411 container start e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:26:41 np0005481065 podman[401543]: 2025-10-11 09:26:41.599949073 +0000 UTC m=+0.330965471 container attach e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.700 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.767 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.806 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.806 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.807 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.807 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:26:41 np0005481065 nova_compute[260935]: 2025-10-11 09:26:41.808 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.131 2 DEBUG nova.objects.instance [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid ebf4e4f9-b225-4ba9-927e-0619aeea8d89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.158 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.158 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Ensure instance console log exists: /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.159 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.159 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.159 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:26:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712588189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.359 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.627 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.634 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.642 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.643 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.649 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.649 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.656 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.657 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:26:42 np0005481065 boring_perlman[401563]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:26:42 np0005481065 boring_perlman[401563]: --> relative data size: 1.0
Oct 11 05:26:42 np0005481065 boring_perlman[401563]: --> All data devices are unavailable
Oct 11 05:26:42 np0005481065 systemd[1]: libpod-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope: Deactivated successfully.
Oct 11 05:26:42 np0005481065 systemd[1]: libpod-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope: Consumed 1.156s CPU time.
Oct 11 05:26:42 np0005481065 podman[401543]: 2025-10-11 09:26:42.937135359 +0000 UTC m=+1.668151807 container died e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:26:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 186 op/s
Oct 11 05:26:42 np0005481065 nova_compute[260935]: 2025-10-11 09:26:42.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.045 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2360MB free_disk=59.76438522338867GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.047 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.048 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.147 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.147 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 36d0acda-9f37-4308-aa46-973f11c57b0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b39b8161-8a46-46fe-8a2a-0fc6b4eab850 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ebf4e4f9-b225-4ba9-927e-0619aeea8d89 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.279 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173-merged.mount: Deactivated successfully.
Oct 11 05:26:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:26:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247873486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:26:43 np0005481065 podman[401543]: 2025-10-11 09:26:43.75041645 +0000 UTC m=+2.481432878 container remove e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.775 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.784 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.813 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:26:43 np0005481065 systemd[1]: libpod-conmon-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope: Deactivated successfully.
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.843 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:26:43 np0005481065 nova_compute[260935]: 2025-10-11 09:26:43.844 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.365 2 DEBUG nova.network.neutron [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updated VIF entry in instance network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.366 2 DEBUG nova.network.neutron [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:44 np0005481065 podman[401864]: 2025-10-11 09:26:44.683859453 +0000 UTC m=+0.079267288 container create 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.711 2 DEBUG oslo_concurrency.lockutils [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.713 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Successfully created port: 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:26:44 np0005481065 podman[401864]: 2025-10-11 09:26:44.642495646 +0000 UTC m=+0.037903531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:26:44 np0005481065 systemd[1]: Started libpod-conmon-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope.
Oct 11 05:26:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:44 np0005481065 podman[401864]: 2025-10-11 09:26:44.813904293 +0000 UTC m=+0.209312218 container init 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.823 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.825 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:26:44 np0005481065 podman[401864]: 2025-10-11 09:26:44.8315296 +0000 UTC m=+0.226937435 container start 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:26:44 np0005481065 sad_noether[401880]: 167 167
Oct 11 05:26:44 np0005481065 systemd[1]: libpod-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope: Deactivated successfully.
Oct 11 05:26:44 np0005481065 conmon[401880]: conmon 32fb6cf391b33dd692c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope/container/memory.events
Oct 11 05:26:44 np0005481065 podman[401864]: 2025-10-11 09:26:44.845408612 +0000 UTC m=+0.240816487 container attach 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:26:44 np0005481065 podman[401864]: 2025-10-11 09:26:44.846100032 +0000 UTC m=+0.241507907 container died 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.861 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.862 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.862 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:26:44 np0005481065 nova_compute[260935]: 2025-10-11 09:26:44.862 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:26:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d492595a93584019fa6fbaa7aa575831263e006b1189e662500f3cc855d43477-merged.mount: Deactivated successfully.
Oct 11 05:26:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 11 05:26:45 np0005481065 podman[401864]: 2025-10-11 09:26:45.063462416 +0000 UTC m=+0.458870261 container remove 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:26:45 np0005481065 systemd[1]: libpod-conmon-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope: Deactivated successfully.
Oct 11 05:26:45 np0005481065 podman[401883]: 2025-10-11 09:26:45.155295577 +0000 UTC m=+0.339851752 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 05:26:45 np0005481065 podman[401925]: 2025-10-11 09:26:45.31660602 +0000 UTC m=+0.024132072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:26:45 np0005481065 podman[401925]: 2025-10-11 09:26:45.428876748 +0000 UTC m=+0.136402800 container create cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:26:45 np0005481065 systemd[1]: Started libpod-conmon-cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0.scope.
Oct 11 05:26:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:45 np0005481065 podman[401925]: 2025-10-11 09:26:45.995887128 +0000 UTC m=+0.703413260 container init cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:26:46 np0005481065 podman[401925]: 2025-10-11 09:26:46.012314262 +0000 UTC m=+0.719840324 container start cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:26:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:46Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:67:d9 10.100.0.4
Oct 11 05:26:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:46Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:67:d9 10.100.0.4
Oct 11 05:26:46 np0005481065 podman[401925]: 2025-10-11 09:26:46.213447328 +0000 UTC m=+0.920973450 container attach cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.249 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Successfully updated port: 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.398 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.399 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.399 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.450 2 DEBUG nova.compute.manager [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.450 2 DEBUG nova.compute.manager [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing instance network info cache due to event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.451 2 DEBUG oslo_concurrency.lockutils [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:26:46 np0005481065 nova_compute[260935]: 2025-10-11 09:26:46.682 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]: {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:    "0": [
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:        {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "devices": [
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "/dev/loop3"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            ],
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_name": "ceph_lv0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_size": "21470642176",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "name": "ceph_lv0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "tags": {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cluster_name": "ceph",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.crush_device_class": "",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.encrypted": "0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osd_id": "0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.type": "block",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.vdo": "0"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            },
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "type": "block",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "vg_name": "ceph_vg0"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:        }
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:    ],
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:    "1": [
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:        {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "devices": [
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "/dev/loop4"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            ],
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_name": "ceph_lv1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_size": "21470642176",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "name": "ceph_lv1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "tags": {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cluster_name": "ceph",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.crush_device_class": "",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.encrypted": "0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osd_id": "1",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.type": "block",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.vdo": "0"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            },
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "type": "block",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "vg_name": "ceph_vg1"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:        }
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:    ],
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:    "2": [
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:        {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "devices": [
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "/dev/loop5"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            ],
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_name": "ceph_lv2",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_size": "21470642176",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "name": "ceph_lv2",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "tags": {
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.cluster_name": "ceph",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.crush_device_class": "",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.encrypted": "0",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osd_id": "2",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.type": "block",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:                "ceph.vdo": "0"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            },
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "type": "block",
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:            "vg_name": "ceph_vg2"
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:        }
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]:    ]
Oct 11 05:26:46 np0005481065 optimistic_saha[401942]: }
Oct 11 05:26:46 np0005481065 systemd[1]: libpod-cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0.scope: Deactivated successfully.
Oct 11 05:26:46 np0005481065 podman[401951]: 2025-10-11 09:26:46.891999218 +0000 UTC m=+0.036401759 container died cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:26:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.907 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.934 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.934 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance network_info: |[{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.935 2 DEBUG oslo_concurrency.lockutils [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.935 2 DEBUG nova.network.neutron [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.940 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start _get_guest_xml network_info=[{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.951 2 WARNING nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.970 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.971 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.975 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.976 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.976 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.977 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.978 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.978 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.978 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.979 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.979 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.979 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.980 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.980 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.980 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.981 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:26:47 np0005481065 nova_compute[260935]: 2025-10-11 09:26:47.986 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052-merged.mount: Deactivated successfully.
Oct 11 05:26:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:26:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346623215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:26:48 np0005481065 nova_compute[260935]: 2025-10-11 09:26:48.574 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:48 np0005481065 nova_compute[260935]: 2025-10-11 09:26:48.621 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:48 np0005481065 nova_compute[260935]: 2025-10-11 09:26:48.626 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:48 np0005481065 podman[401951]: 2025-10-11 09:26:48.875263247 +0000 UTC m=+2.019665798 container remove cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:26:48 np0005481065 systemd[1]: libpod-conmon-cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0.scope: Deactivated successfully.
Oct 11 05:26:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 525 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 114 op/s
Oct 11 05:26:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:26:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012108614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.240 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.244 2 DEBUG nova.virt.libvirt.vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-642071395',display_name='tempest-TestNetworkBasicOps-server-642071395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-642071395',id=129,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/a4lv10vOA8+Y0o9A9HssJRB57eWElESeZBiHZYTC4NgZskMwnl+IoMYFCrg3V9vZquWFF14c5AYcCtavq1tvAMKZzjjtkqimBk00BtDUv/kjU7cBauBpTA49aY1aFOA==',key_name='tempest-TestNetworkBasicOps-566812727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-z5r1nfln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=ebf4e4f9-b225-4ba9-927e-0619aeea8d89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.244 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.246 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.248 2 DEBUG nova.objects.instance [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebf4e4f9-b225-4ba9-927e-0619aeea8d89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.265 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <uuid>ebf4e4f9-b225-4ba9-927e-0619aeea8d89</uuid>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <name>instance-00000081</name>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-642071395</nova:name>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:26:47</nova:creationTime>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <nova:port uuid="733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <entry name="serial">ebf4e4f9-b225-4ba9-927e-0619aeea8d89</entry>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <entry name="uuid">ebf4e4f9-b225-4ba9-927e-0619aeea8d89</entry>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:aa:8a:b1"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <target dev="tap733f68a9-9a"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/console.log" append="off"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:26:49 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:26:49 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:26:49 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:26:49 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.265 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Preparing to wait for external event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.266 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.266 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.267 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.268 2 DEBUG nova.virt.libvirt.vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-642071395',display_name='tempest-TestNetworkBasicOps-server-642071395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-642071395',id=129,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/a4lv10vOA8+Y0o9A9HssJRB57eWElESeZBiHZYTC4NgZskMwnl+IoMYFCrg3V9vZquWFF14c5AYcCtavq1tvAMKZzjjtkqimBk00BtDUv/kjU7cBauBpTA49aY1aFOA==',key_name='tempest-TestNetworkBasicOps-566812727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-z5r1nfln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=ebf4e4f9-b225-4ba9-927e-0619aeea8d89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.269 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.270 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.270 2 DEBUG os_vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap733f68a9-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap733f68a9-9a, col_values=(('external_ids', {'iface-id': '733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:8a:b1', 'vm-uuid': 'ebf4e4f9-b225-4ba9-927e-0619aeea8d89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:49 np0005481065 NetworkManager[44960]: <info>  [1760174809.2842] manager: (tap733f68a9-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.295 2 INFO os_vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a')#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.354 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.355 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.356 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:aa:8a:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.357 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Using config drive#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.387 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.461 2 DEBUG nova.network.neutron [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updated VIF entry in instance network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.462 2 DEBUG nova.network.neutron [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.479 2 DEBUG oslo_concurrency.lockutils [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.708289374 +0000 UTC m=+0.054670673 container create a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.738 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Creating config drive at /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.747 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyx8qw76b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:49 np0005481065 systemd[1]: Started libpod-conmon-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope.
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.692107148 +0000 UTC m=+0.038488497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:26:49 np0005481065 podman[402194]: 2025-10-11 09:26:49.802944486 +0000 UTC m=+0.101169426 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:26:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.82399334 +0000 UTC m=+0.170374639 container init a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.835723601 +0000 UTC m=+0.182104910 container start a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:26:49 np0005481065 festive_wozniak[402217]: 167 167
Oct 11 05:26:49 np0005481065 systemd[1]: libpod-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope: Deactivated successfully.
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.840751913 +0000 UTC m=+0.187133232 container attach a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:26:49 np0005481065 conmon[402217]: conmon a94ca6eaf3280ead49da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope/container/memory.events
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.84350642 +0000 UTC m=+0.189887769 container died a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:26:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3533003b4a273565175e3c84cb51443beabed5ec1845e951001c8a5dcd9da4cd-merged.mount: Deactivated successfully.
Oct 11 05:26:49 np0005481065 podman[402185]: 2025-10-11 09:26:49.88352753 +0000 UTC m=+0.229908829 container remove a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:26:49 np0005481065 systemd[1]: libpod-conmon-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope: Deactivated successfully.
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.914 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyx8qw76b" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.942 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:26:49 np0005481065 nova_compute[260935]: 2025-10-11 09:26:49.946 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:26:50 np0005481065 podman[402279]: 2025-10-11 09:26:50.103352794 +0000 UTC m=+0.053353007 container create 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:26:50 np0005481065 systemd[1]: Started libpod-conmon-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope.
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.143 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.146 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deleting local config drive /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config because it was imported into RBD.#033[00m
Oct 11 05:26:50 np0005481065 podman[402279]: 2025-10-11 09:26:50.073745948 +0000 UTC m=+0.023746231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:26:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:26:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:26:50 np0005481065 podman[402279]: 2025-10-11 09:26:50.200271179 +0000 UTC m=+0.150271392 container init 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:26:50 np0005481065 podman[402279]: 2025-10-11 09:26:50.209309424 +0000 UTC m=+0.159309657 container start 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:26:50 np0005481065 NetworkManager[44960]: <info>  [1760174810.2119] manager: (tap733f68a9-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Oct 11 05:26:50 np0005481065 kernel: tap733f68a9-9a: entered promiscuous mode
Oct 11 05:26:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:50Z|01403|binding|INFO|Claiming lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for this chassis.
Oct 11 05:26:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:50Z|01404|binding|INFO|733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6: Claiming fa:16:3e:aa:8a:b1 10.100.0.8
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.223 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:8a:b1 10.100.0.8'], port_security=['fa:16:3e:aa:8a:b1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ebf4e4f9-b225-4ba9-927e-0619aeea8d89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed8dd5c0-e5cc-4f41-beee-a727185c1ca5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.228 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 bound to our chassis#033[00m
Oct 11 05:26:50 np0005481065 podman[402279]: 2025-10-11 09:26:50.228846325 +0000 UTC m=+0.178846538 container attach 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.231 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8#033[00m
Oct 11 05:26:50 np0005481065 systemd-udevd[402314]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17ad2c5a-b4eb-4caf-ade7-775d4c34202c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:50Z|01405|binding|INFO|Setting lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 ovn-installed in OVS
Oct 11 05:26:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:26:50Z|01406|binding|INFO|Setting lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 up in Southbound
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:50 np0005481065 NetworkManager[44960]: <info>  [1760174810.3045] device (tap733f68a9-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:26:50 np0005481065 NetworkManager[44960]: <info>  [1760174810.3053] device (tap733f68a9-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:26:50 np0005481065 systemd-machined[215705]: New machine qemu-153-instance-00000081.
Oct 11 05:26:50 np0005481065 systemd[1]: Started Virtual Machine qemu-153-instance-00000081.
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.334 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd84647-dd47-4158-9d98-a82f776c4a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.337 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8af352-9576-49ff-9d09-518a7b1ffa85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.375 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[60ac2038-d2f5-40b6-8f31-e17f7231c965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.396 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7182574f-e61a-43f5-a65a-ca71c2bf6536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402328, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.423 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7100493a-7904-4504-b405-ef84d38e01c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662335, 'tstamp': 662335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402330, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662340, 'tstamp': 662340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402330, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.425 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.429 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9ae84-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.430 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.431 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9f9ae84-90, col_values=(('external_ids', {'iface-id': '829ba2ca-e21f-4927-8525-5f43e59d37f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:26:50 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.432 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.499 2 DEBUG nova.compute.manager [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.500 2 DEBUG oslo_concurrency.lockutils [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.500 2 DEBUG oslo_concurrency.lockutils [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.501 2 DEBUG oslo_concurrency.lockutils [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:26:50 np0005481065 nova_compute[260935]: 2025-10-11 09:26:50.501 2 DEBUG nova.compute.manager [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Processing event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:26:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 525 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.8 MiB/s wr, 74 op/s
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]: {
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "osd_id": 2,
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "type": "bluestore"
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:    },
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "osd_id": 0,
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "type": "bluestore"
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:    },
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "osd_id": 1,
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:        "type": "bluestore"
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]:    }
Oct 11 05:26:51 np0005481065 tender_mahavira[402299]: }
Oct 11 05:26:51 np0005481065 systemd[1]: libpod-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope: Deactivated successfully.
Oct 11 05:26:51 np0005481065 conmon[402299]: conmon 0ed4ba66d0ff46c19a94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope/container/memory.events
Oct 11 05:26:51 np0005481065 podman[402279]: 2025-10-11 09:26:51.186335366 +0000 UTC m=+1.136335559 container died 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:26:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112-merged.mount: Deactivated successfully.
Oct 11 05:26:51 np0005481065 podman[402279]: 2025-10-11 09:26:51.260758997 +0000 UTC m=+1.210759200 container remove 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:26:51 np0005481065 systemd[1]: libpod-conmon-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope: Deactivated successfully.
Oct 11 05:26:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:26:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:26:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0519fb0a-678f-4add-a355-d50dbc26c946 does not exist
Oct 11 05:26:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 43de6eab-7015-427e-b501-1d2efec697db does not exist
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.620 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174811.6197715, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.621 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Started (Lifecycle Event)#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.623 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.627 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.631 2 INFO nova.virt.libvirt.driver [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance spawned successfully.#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.632 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.646 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.654 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.663 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.663 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.664 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.664 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.665 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.666 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.694 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.694 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174811.6201277, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.695 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Paused (Lifecycle Event)
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.719 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.722 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174811.626755, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.722 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Resumed (Lifecycle Event)
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.734 2 INFO nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 10.88 seconds to spawn the instance on the hypervisor.
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.735 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.749 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.754 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.776 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.805 2 INFO nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 12.46 seconds to build instance.
Oct 11 05:26:51 np0005481065 nova_compute[260935]: 2025-10-11 09:26:51.821 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.635 2 DEBUG nova.compute.manager [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.636 2 DEBUG oslo_concurrency.lockutils [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.637 2 DEBUG oslo_concurrency.lockutils [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.638 2 DEBUG oslo_concurrency.lockutils [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.638 2 DEBUG nova.compute.manager [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] No waiting events found dispatching network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.639 2 WARNING nova.compute.manager [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received unexpected event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for instance with vm_state active and task_state None.
Oct 11 05:26:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Oct 11 05:26:52 np0005481065 nova_compute[260935]: 2025-10-11 09:26:52.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:53.673 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 05:26:53 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:53.674 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 05:26:53 np0005481065 nova_compute[260935]: 2025-10-11 09:26:53.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:54 np0005481065 nova_compute[260935]: 2025-10-11 09:26:54.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:54 np0005481065 nova_compute[260935]: 2025-10-11 09:26:54.681 2 DEBUG nova.compute.manager [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:26:54 np0005481065 nova_compute[260935]: 2025-10-11 09:26:54.681 2 DEBUG nova.compute.manager [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing instance network info cache due to event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:26:54 np0005481065 nova_compute[260935]: 2025-10-11 09:26:54.681 2 DEBUG oslo_concurrency.lockutils [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:26:54 np0005481065 nova_compute[260935]: 2025-10-11 09:26:54.682 2 DEBUG oslo_concurrency.lockutils [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:26:54 np0005481065 nova_compute[260935]: 2025-10-11 09:26:54.682 2 DEBUG nova.network.neutron [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:26:54
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 11 05:26:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:26:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:26:55 np0005481065 podman[402466]: 2025-10-11 09:26:55.784195351 +0000 UTC m=+0.084430214 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 05:26:55 np0005481065 podman[402467]: 2025-10-11 09:26:55.817578663 +0000 UTC m=+0.110250792 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:26:56 np0005481065 nova_compute[260935]: 2025-10-11 09:26:56.083 2 DEBUG nova.network.neutron [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updated VIF entry in instance network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:26:56 np0005481065 nova_compute[260935]: 2025-10-11 09:26:56.084 2 DEBUG nova.network.neutron [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:26:56 np0005481065 nova_compute[260935]: 2025-10-11 09:26:56.130 2 DEBUG oslo_concurrency.lockutils [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:26:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 11 05:26:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:26:57.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:26:57 np0005481065 nova_compute[260935]: 2025-10-11 09:26:57.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.696 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.696 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.727 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.843 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.844 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.853 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:26:58 np0005481065 nova_compute[260935]: 2025-10-11 09:26:58.853 2 INFO nova.compute.claims [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:26:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 120 op/s
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.109 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:26:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:26:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:26:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143380819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.568 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.575 2 DEBUG nova.compute.provider_tree [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.597 2 DEBUG nova.scheduler.client.report [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.632 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.633 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.704 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.704 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.723 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.743 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.849 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.850 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.850 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Creating image(s)
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.875 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.899 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.927 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:26:59 np0005481065 nova_compute[260935]: 2025-10-11 09:26:59.931 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.009 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.010 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.010 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.011 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.034 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.038 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ac163a38-d5cb-4d00-af1b-f3361849dd68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.335 2 DEBUG nova.policy [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.395 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ac163a38-d5cb-4d00-af1b-f3361849dd68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.495 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.602 2 DEBUG nova.objects.instance [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid ac163a38-d5cb-4d00-af1b-f3361849dd68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.667 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.668 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Ensure instance console log exists: /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.669 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.669 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:00 np0005481065 nova_compute[260935]: 2025-10-11 09:27:00.670 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 87 op/s
Oct 11 05:27:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Oct 11 05:27:02 np0005481065 nova_compute[260935]: 2025-10-11 09:27:02.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:03 np0005481065 nova_compute[260935]: 2025-10-11 09:27:03.386 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully created port: d227ca0f-e837-4a46-be0f-e66b72db8028 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:27:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:03Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:8a:b1 10.100.0.8
Oct 11 05:27:03 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:03Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:8a:b1 10.100.0.8
Oct 11 05:27:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:04 np0005481065 nova_compute[260935]: 2025-10-11 09:27:04.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:04 np0005481065 nova_compute[260935]: 2025-10-11 09:27:04.818 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully created port: 731dd7de-8de3-4b67-a34b-8fef195606b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:27:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 82 op/s
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005020804335970568 of space, bias 1.0, pg target 1.5062413007911704 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:27:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:27:05 np0005481065 nova_compute[260935]: 2025-10-11 09:27:05.393 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully updated port: d227ca0f-e837-4a46-be0f-e66b72db8028 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:27:05 np0005481065 nova_compute[260935]: 2025-10-11 09:27:05.547 2 DEBUG nova.compute.manager [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:05 np0005481065 nova_compute[260935]: 2025-10-11 09:27:05.548 2 DEBUG nova.compute.manager [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:05 np0005481065 nova_compute[260935]: 2025-10-11 09:27:05.548 2 DEBUG oslo_concurrency.lockutils [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:05 np0005481065 nova_compute[260935]: 2025-10-11 09:27:05.549 2 DEBUG oslo_concurrency.lockutils [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:05 np0005481065 nova_compute[260935]: 2025-10-11 09:27:05.549 2 DEBUG nova.network.neutron [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:06 np0005481065 nova_compute[260935]: 2025-10-11 09:27:06.157 2 DEBUG nova.network.neutron [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:27:06 np0005481065 nova_compute[260935]: 2025-10-11 09:27:06.559 2 DEBUG nova.network.neutron [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:06 np0005481065 nova_compute[260935]: 2025-10-11 09:27:06.587 2 DEBUG oslo_concurrency.lockutils [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 82 op/s
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.317 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully updated port: 731dd7de-8de3-4b67-a34b-8fef195606b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.361 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.362 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.363 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.543 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.667 2 DEBUG nova.compute.manager [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.669 2 DEBUG nova.compute.manager [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-731dd7de-8de3-4b67-a34b-8fef195606b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.669 2 DEBUG oslo_concurrency.lockutils [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:07 np0005481065 nova_compute[260935]: 2025-10-11 09:27:07.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Oct 11 05:27:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.645 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.666 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.667 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance network_info: |[{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.667 2 DEBUG oslo_concurrency.lockutils [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.667 2 DEBUG nova.network.neutron [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port 731dd7de-8de3-4b67-a34b-8fef195606b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.670 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start _get_guest_xml network_info=[{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.675 2 WARNING nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.681 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.681 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.686 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.686 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.686 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:27:09 np0005481065 nova_compute[260935]: 2025-10-11 09:27:09.691 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:27:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388837810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.222 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.242 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.246 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.531 2 INFO nova.compute.manager [None req-91844f93-46e5-4b48-81de-2ce889f60cc5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Get console output#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.538 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:27:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:27:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481952330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.726 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.728 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.729 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.730 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.732 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.732 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.735 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.737 2 DEBUG nova.objects.instance [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid ac163a38-d5cb-4d00-af1b-f3361849dd68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.785 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <uuid>ac163a38-d5cb-4d00-af1b-f3361849dd68</uuid>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <name>instance-00000082</name>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-8061287</nova:name>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:27:09</nova:creationTime>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:port uuid="d227ca0f-e837-4a46-be0f-e66b72db8028">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <nova:port uuid="731dd7de-8de3-4b67-a34b-8fef195606b3">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3b:4003" ipVersion="6"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <entry name="serial">ac163a38-d5cb-4d00-af1b-f3361849dd68</entry>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <entry name="uuid">ac163a38-d5cb-4d00-af1b-f3361849dd68</entry>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ac163a38-d5cb-4d00-af1b-f3361849dd68_disk">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:01:a2:ce"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <target dev="tapd227ca0f-e8"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:3b:40:03"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <target dev="tap731dd7de-8d"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/console.log" append="off"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:27:10 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:27:10 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:27:10 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:27:10 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
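The `_get_guest_xml` record above closes the generated libvirt domain: a pty serial console logging to `console.log`, VNC graphics, virtio video and RNG backed by `/dev/urandom`, a `pcie-root` plus 24 `pcie-root-port` controllers (pre-allocated hotplug slots for the q35 machine type), and a virtio memballoon. A standalone sanity check of that device tail is sketched below; the fragment is hand-reconstructed from the log, not Nova's generator output.

```python
# Hypothetical check of the <devices> tail logged above, stdlib only.
import xml.etree.ElementTree as ET

DEVICES_XML = """<devices>
  <serial type="pty">
    <log file="/var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/console.log" append="off"/>
  </serial>
  <graphics type="vnc" autoport="yes" listen="::0"/>
  <video><model type="virtio"/></video>
  <input type="tablet" bus="usb"/>
  <rng model="virtio"><backend model="random">/dev/urandom</backend></rng>
  <controller type="pci" model="pcie-root"/>
""" + '  <controller type="pci" model="pcie-root-port"/>\n' * 24 + """  <controller type="usb" index="0"/>
  <memballoon model="virtio"><stats period="10"/></memballoon>
</devices>"""

devices = ET.fromstring(DEVICES_XML)
# Count the pre-allocated PCIe hotplug ports and read the RNG backend.
root_ports = devices.findall('.//controller[@model="pcie-root-port"]')
rng_backend = devices.findtext('.//rng/backend')
print(len(root_ports), rng_backend)  # 24 /dev/urandom
```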
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.786 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Preparing to wait for external event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.787 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.787 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.788 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.788 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Preparing to wait for external event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.789 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.789 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.790 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
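The lockutils lines above show the external-event handshake: for each port, `prepare_for_instance_event` registers a waiter under the `"<uuid>-events"` lock before plugging, so the later `network-vif-plugged` callback from Neutron has somewhere to land. A minimal stdlib sketch of that prepare/deliver pattern (the class and method names below are simplified stand-ins, not Nova's actual `InstanceEvents` API):

```python
# Sketch of Nova's per-instance event map: register a waiter under a
# lock, then a separate thread delivers the named event.
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}

    def prepare(self, name):
        # "Acquiring lock ...-events" / "released": create-or-get the waiter.
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def deliver(self, name):
        with self._lock:
            ev = self._events.pop(name, None)
        if ev:
            ev.set()

events = InstanceEvents()
waiter = events.prepare("network-vif-plugged-d227ca0f")
# Simulate the Neutron callback arriving shortly after the plug.
threading.Timer(0.05, events.deliver, args=["network-vif-plugged-d227ca0f"]).start()
print(waiter.wait(timeout=2))  # True once the event is delivered
```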
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.791 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.791 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.792 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.793 2 DEBUG os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd227ca0f-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd227ca0f-e8, col_values=(('external_ids', {'iface-id': 'd227ca0f-e837-4a46-be0f-e66b72db8028', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:a2:ce', 'vm-uuid': 'ac163a38-d5cb-4d00-af1b-f3361849dd68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:10 np0005481065 NetworkManager[44960]: <info>  [1760174830.8055] manager: (tapd227ca0f-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.818 2 INFO os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8')#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.820 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.820 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.821 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.822 2 DEBUG os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap731dd7de-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap731dd7de-8d, col_values=(('external_ids', {'iface-id': '731dd7de-8de3-4b67-a34b-8fef195606b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:40:03', 'vm-uuid': 'ac163a38-d5cb-4d00-af1b-f3361849dd68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 NetworkManager[44960]: <info>  [1760174830.8303] manager: (tap731dd7de-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.843 2 INFO os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d')#033[00m
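Each plug above is two OVSDB transactions: an idempotent `AddBridgeCommand` for `br-int` (`may_exist=True`, hence "Transaction caused no change"), then `AddPortCommand` plus a `DbSetCommand` stamping `external_ids` on the Interface row so ovn-controller can match `iface-id` to the logical switch port. A rough `ovs-vsctl` equivalent assembled from the first VIF's logged values (the helper function is a hypothetical sketch, not os-vif code):

```python
# Build the ovs-vsctl command line equivalent to the logged
# AddPortCommand + DbSetCommand transaction (values copied from the log).
def plug_port_cmd(bridge, port, external_ids):
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    return cmd

cmd = plug_port_cmd("br-int", "tapd227ca0f-e8", {
    "iface-id": "d227ca0f-e837-4a46-be0f-e66b72db8028",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:01:a2:ce",
    "vm-uuid": "ac163a38-d5cb-4d00-af1b-f3361849dd68",
})
print(" ".join(cmd))
```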
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.918 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.918 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.919 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:01:a2:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.919 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:3b:40:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.920 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Using config drive#033[00m
Oct 11 05:27:10 np0005481065 nova_compute[260935]: 2025-10-11 09:27:10.955 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.575 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Creating config drive at /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.585 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqcgkiid execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.741 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqcgkiid" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.780 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.785 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.835 2 DEBUG nova.network.neutron [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updated VIF entry in instance network info cache for port 731dd7de-8de3-4b67-a34b-8fef195606b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.836 2 DEBUG nova.network.neutron [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.846 2 DEBUG nova.compute.manager [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG nova.compute.manager [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG oslo_concurrency.lockutils [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG oslo_concurrency.lockutils [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG nova.network.neutron [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.875 2 DEBUG oslo_concurrency.lockutils [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.991 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:11 np0005481065 nova_compute[260935]: 2025-10-11 09:27:11.991 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deleting local config drive /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config because it was imported into RBD.#033[00m
Oct 11 05:27:12 np0005481065 kernel: tapd227ca0f-e8: entered promiscuous mode
Oct 11 05:27:12 np0005481065 NetworkManager[44960]: <info>  [1760174832.0638] manager: (tapd227ca0f-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01407|binding|INFO|Claiming lport d227ca0f-e837-4a46-be0f-e66b72db8028 for this chassis.
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01408|binding|INFO|d227ca0f-e837-4a46-be0f-e66b72db8028: Claiming fa:16:3e:01:a2:ce 10.100.0.6
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.080 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:a2:ce 10.100.0.6'], port_security=['fa:16:3e:01:a2:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d227ca0f-e837-4a46-be0f-e66b72db8028) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.082 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d227ca0f-e837-4a46-be0f-e66b72db8028 in datapath 024b1f88-7312-4f05-a55e-4c82e878906e bound to our chassis#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.084 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 024b1f88-7312-4f05-a55e-4c82e878906e#033[00m
Oct 11 05:27:12 np0005481065 NetworkManager[44960]: <info>  [1760174832.0993] manager: (tap731dd7de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c3dd67b0-09b2-4b33-b734-1d84e0abbb10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 kernel: tap731dd7de-8d: entered promiscuous mode
Oct 11 05:27:12 np0005481065 systemd-udevd[402839]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 systemd-udevd[402838]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01409|binding|INFO|Setting lport d227ca0f-e837-4a46-be0f-e66b72db8028 ovn-installed in OVS
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01410|binding|INFO|Setting lport d227ca0f-e837-4a46-be0f-e66b72db8028 up in Southbound
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01411|if_status|INFO|Dropped 1 log messages in last 43 seconds (most recently, 43 seconds ago) due to excessive rate
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01412|if_status|INFO|Not updating pb chassis for 731dd7de-8de3-4b67-a34b-8fef195606b3 now as sb is readonly
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01413|binding|INFO|Claiming lport 731dd7de-8de3-4b67-a34b-8fef195606b3 for this chassis.
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01414|binding|INFO|731dd7de-8de3-4b67-a34b-8fef195606b3: Claiming fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003
Oct 11 05:27:12 np0005481065 NetworkManager[44960]: <info>  [1760174832.1315] device (tap731dd7de-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:27:12 np0005481065 NetworkManager[44960]: <info>  [1760174832.1326] device (tapd227ca0f-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:27:12 np0005481065 NetworkManager[44960]: <info>  [1760174832.1337] device (tap731dd7de-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:27:12 np0005481065 NetworkManager[44960]: <info>  [1760174832.1343] device (tapd227ca0f-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.135 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], port_security=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3b:4003/64', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=731dd7de-8de3-4b67-a34b-8fef195606b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01415|binding|INFO|Setting lport 731dd7de-8de3-4b67-a34b-8fef195606b3 ovn-installed in OVS
Oct 11 05:27:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:12Z|01416|binding|INFO|Setting lport 731dd7de-8de3-4b67-a34b-8fef195606b3 up in Southbound
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 systemd-machined[215705]: New machine qemu-154-instance-00000082.
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.163 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[35a32cbf-799d-45a4-88bc-0cbec413bb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 systemd[1]: Started Virtual Machine qemu-154-instance-00000082.
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.169 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[838f30c7-07a5-4399-b02c-fa5827fc43c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.220 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[494245af-f2f9-49d9-9c51-6623036064f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62c8adae-94d2-43a9-95c3-8b64afb6777f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402854, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.280 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efd5070b-7418-44b0-ac05-1cf73b7c1ccf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663430, 'tstamp': 663430}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402855, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663436, 'tstamp': 663436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402855, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.284 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.288 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024b1f88-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.289 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.290 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap024b1f88-70, col_values=(('external_ids', {'iface-id': '219b454c-6c32-4a38-b10c-afcdb1be1980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.290 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.293 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 731dd7de-8de3-4b67-a34b-8fef195606b3 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.296 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894decad-3bed-4c55-b643-5fbe5479bf3f#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.321 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd82d46-40c7-46de-b7ab-aec26369bf36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.363 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b662dc-8091-4a25-8535-94af356b3215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.371 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0e103fab-8777-44ce-8b8e-a89a6fa7acc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.415 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cae18d-e52f-4070-b525-ba76ced7de4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb523604-bab9-4d7c-9487-190427531255]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402862, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.465 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2ac89d-c545-490b-8395-a3f45c688a1b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap894decad-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663658, 'tstamp': 663658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402863, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.467 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894decad-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894decad-30, col_values=(('external_ids', {'iface-id': '48cfaf23-d7a3-467e-89cb-3eb3687ea15e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.472 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.542 2 DEBUG nova.compute.manager [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.543 2 DEBUG oslo_concurrency.lockutils [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.543 2 DEBUG oslo_concurrency.lockutils [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.543 2 DEBUG oslo_concurrency.lockutils [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.544 2 DEBUG nova.compute.manager [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Processing event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.789 2 INFO nova.compute.manager [None req-2d19f315-b95a-4eaf-b007-57cfe1500179 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Get console output#033[00m
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.799 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:27:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 11 05:27:12 np0005481065 nova_compute[260935]: 2025-10-11 09:27:12.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.428 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174833.4274435, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.428 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Started (Lifecycle Event)#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.456 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.463 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174833.4277086, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.463 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.483 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.487 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.508 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.511 2 DEBUG nova.network.neutron [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.512 2 DEBUG nova.network.neutron [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.528 2 DEBUG oslo_concurrency.lockutils [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.859 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.860 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.860 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.861 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.861 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.862 2 WARNING nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.863 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.863 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.864 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.864 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.865 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.865 2 WARNING nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.865 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.866 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.866 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.867 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.867 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Processing event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.868 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.868 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.869 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.869 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.870 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.870 2 WARNING nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received unexpected event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.871 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.877 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174833.8764253, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.878 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.883 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.891 2 INFO nova.virt.libvirt.driver [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance spawned successfully.#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.892 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.902 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.915 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.932 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.933 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.934 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.935 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.936 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.937 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:13 np0005481065 nova_compute[260935]: 2025-10-11 09:27:13.944 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.011 2 INFO nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 14.16 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.011 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.124 2 INFO nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 15.32 seconds to build instance.#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.144 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.629 2 DEBUG nova.compute.manager [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG oslo_concurrency.lockutils [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG oslo_concurrency.lockutils [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG oslo_concurrency.lockutils [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG nova.compute.manager [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:14 np0005481065 nova_compute[260935]: 2025-10-11 09:27:14.631 2 WARNING nova.compute.manager [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received unexpected event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:27:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Oct 11 05:27:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:15.223 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.243 2 INFO nova.compute.manager [None req-e763e95b-d22f-463f-9863-71d739ff3653 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Get console output#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.254 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:15 np0005481065 podman[402907]: 2025-10-11 09:27:15.833713424 +0000 UTC m=+0.121839040 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.981 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.982 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.982 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.982 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:15 np0005481065 nova_compute[260935]: 2025-10-11 09:27:15.983 2 DEBUG nova.network.neutron [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Oct 11 05:27:17 np0005481065 nova_compute[260935]: 2025-10-11 09:27:17.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.108 2 DEBUG nova.compute.manager [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.108 2 DEBUG nova.compute.manager [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing instance network info cache due to event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.109 2 DEBUG oslo_concurrency.lockutils [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.109 2 DEBUG oslo_concurrency.lockutils [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.109 2 DEBUG nova.network.neutron [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.183 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.184 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.184 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.184 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.185 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.187 2 INFO nova.compute.manager [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Terminating instance#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.189 2 DEBUG nova.compute.manager [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:27:18 np0005481065 kernel: tap733f68a9-9a (unregistering): left promiscuous mode
Oct 11 05:27:18 np0005481065 NetworkManager[44960]: <info>  [1760174838.2619] device (tap733f68a9-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:18Z|01417|binding|INFO|Releasing lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 from this chassis (sb_readonly=0)
Oct 11 05:27:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:18Z|01418|binding|INFO|Setting lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 down in Southbound
Oct 11 05:27:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:18Z|01419|binding|INFO|Removing iface tap733f68a9-9a ovn-installed in OVS
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.290 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:8a:b1 10.100.0.8'], port_security=['fa:16:3e:aa:8a:b1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ebf4e4f9-b225-4ba9-927e-0619aeea8d89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed8dd5c0-e5cc-4f41-beee-a727185c1ca5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.293 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 unbound from our chassis#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.296 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.322 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85e29eec-020c-4dcb-9c3c-91e0a0c26489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:18 np0005481065 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 11 05:27:18 np0005481065 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000081.scope: Consumed 13.829s CPU time.
Oct 11 05:27:18 np0005481065 systemd-machined[215705]: Machine qemu-153-instance-00000081 terminated.
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.355 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a18dc9ab-fa2b-4809-8b31-d73374cd2026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.360 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[369bf491-c91e-42ce-a64a-a538161f58ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.390 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e2a4cc-4b9c-484f-8711-ac945328e706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.445 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7554ac2d-5453-4dcb-89b5-5253200c47f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402938, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.462 2 INFO nova.virt.libvirt.driver [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance destroyed successfully.#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.462 2 DEBUG nova.objects.instance [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid ebf4e4f9-b225-4ba9-927e-0619aeea8d89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd82c6f-f666-484d-911d-f9b7b332a6ca]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662335, 'tstamp': 662335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402949, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662340, 'tstamp': 662340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402949, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.477 2 DEBUG nova.virt.libvirt.vif [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-642071395',display_name='tempest-TestNetworkBasicOps-server-642071395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-642071395',id=129,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/a4lv10vOA8+Y0o9A9HssJRB57eWElESeZBiHZYTC4NgZskMwnl+IoMYFCrg3V9vZquWFF14c5AYcCtavq1tvAMKZzjjtkqimBk00BtDUv/kjU7cBauBpTA49aY1aFOA==',key_name='tempest-TestNetworkBasicOps-566812727',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-z5r1nfln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:51Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=ebf4e4f9-b225-4ba9-927e-0619aeea8d89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.477 2 DEBUG nova.network.os_vif_util [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.478 2 DEBUG nova.network.os_vif_util [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.478 2 DEBUG os_vif [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap733f68a9-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.480 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9ae84-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.480 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.481 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9f9ae84-90, col_values=(('external_ids', {'iface-id': '829ba2ca-e21f-4927-8525-5f43e59d37f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.481 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.485 2 INFO os_vif [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a')#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.524 2 DEBUG nova.network.neutron [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.525 2 DEBUG nova.network.neutron [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.535 2 DEBUG nova.compute.manager [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-unplugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.535 2 DEBUG oslo_concurrency.lockutils [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG oslo_concurrency.lockutils [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG oslo_concurrency.lockutils [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG nova.compute.manager [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] No waiting events found dispatching network-vif-unplugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG nova.compute.manager [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-unplugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.539 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.539 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.541 2 WARNING nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.541 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.541 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.541 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.542 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.542 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.542 2 WARNING nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.904 2 INFO nova.virt.libvirt.driver [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deleting instance files /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_del#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.905 2 INFO nova.virt.libvirt.driver [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deletion of /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_del complete#033[00m
Oct 11 05:27:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 128 op/s
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.973 2 INFO nova.compute.manager [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.974 2 DEBUG oslo.service.loopingcall [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.974 2 DEBUG nova.compute.manager [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:27:18 np0005481065 nova_compute[260935]: 2025-10-11 09:27:18.975 2 DEBUG nova.network.neutron [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:27:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.223 2 DEBUG nova.compute.manager [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.224 2 DEBUG nova.compute.manager [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.224 2 DEBUG oslo_concurrency.lockutils [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.225 2 DEBUG oslo_concurrency.lockutils [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.225 2 DEBUG nova.network.neutron [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.323 2 DEBUG nova.network.neutron [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updated VIF entry in instance network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.324 2 DEBUG nova.network.neutron [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.355 2 DEBUG oslo_concurrency.lockutils [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.719 2 DEBUG nova.compute.manager [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.720 2 DEBUG oslo_concurrency.lockutils [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.721 2 DEBUG oslo_concurrency.lockutils [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.721 2 DEBUG oslo_concurrency.lockutils [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.722 2 DEBUG nova.compute.manager [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] No waiting events found dispatching network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:20 np0005481065 nova_compute[260935]: 2025-10-11 09:27:20.722 2 WARNING nova.compute.manager [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received unexpected event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:27:20 np0005481065 podman[402971]: 2025-10-11 09:27:20.799855262 +0000 UTC m=+0.099819898 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:27:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 75 op/s
Oct 11 05:27:21 np0005481065 nova_compute[260935]: 2025-10-11 09:27:21.454 2 DEBUG nova.network.neutron [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:21 np0005481065 nova_compute[260935]: 2025-10-11 09:27:21.474 2 INFO nova.compute.manager [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 2.50 seconds to deallocate network for instance.#033[00m
Oct 11 05:27:21 np0005481065 nova_compute[260935]: 2025-10-11 09:27:21.519 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:21 np0005481065 nova_compute[260935]: 2025-10-11 09:27:21.520 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:21 np0005481065 nova_compute[260935]: 2025-10-11 09:27:21.764 2 DEBUG oslo_concurrency.processutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125242929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.266 2 DEBUG oslo_concurrency.processutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.275 2 DEBUG nova.compute.provider_tree [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.315 2 DEBUG nova.scheduler.client.report [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.352 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.402 2 INFO nova.scheduler.client.report [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance ebf4e4f9-b225-4ba9-927e-0619aeea8d89#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.524 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.846 2 DEBUG nova.compute.manager [req-1353d67b-ee4f-4ed4-ac64-f3f355fdab48 req-1cf7cc06-b795-44aa-90c5-b8ad89dddf52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-deleted-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 103 op/s
Oct 11 05:27:22 np0005481065 nova_compute[260935]: 2025-10-11 09:27:22.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:23 np0005481065 nova_compute[260935]: 2025-10-11 09:27:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:23 np0005481065 nova_compute[260935]: 2025-10-11 09:27:23.747 2 DEBUG nova.network.neutron [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updated VIF entry in instance network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:23 np0005481065 nova_compute[260935]: 2025-10-11 09:27:23.748 2 DEBUG nova.network.neutron [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:23 np0005481065 nova_compute[260935]: 2025-10-11 09:27:23.773 2 DEBUG oslo_concurrency.lockutils [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.952 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.953 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.954 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.954 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.955 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.957 2 INFO nova.compute.manager [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Terminating instance#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.959 2 DEBUG nova.compute.manager [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:27:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 97 op/s
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.985 2 DEBUG nova.compute.manager [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.985 2 DEBUG nova.compute.manager [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.986 2 DEBUG oslo_concurrency.lockutils [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.986 2 DEBUG oslo_concurrency.lockutils [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:24 np0005481065 nova_compute[260935]: 2025-10-11 09:27:24.987 2 DEBUG nova.network.neutron [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:25 np0005481065 kernel: tap36e1391f-ea (unregistering): left promiscuous mode
Oct 11 05:27:25 np0005481065 NetworkManager[44960]: <info>  [1760174845.0305] device (tap36e1391f-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:25Z|01420|binding|INFO|Releasing lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 from this chassis (sb_readonly=0)
Oct 11 05:27:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:25Z|01421|binding|INFO|Setting lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 down in Southbound
Oct 11 05:27:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:25Z|01422|binding|INFO|Removing iface tap36e1391f-ea ovn-installed in OVS
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.100 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:66:34 10.100.0.11'], port_security=['fa:16:3e:da:66:34 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '36d0acda-9f37-4308-aa46-973f11c57b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a0735faf-4c5a-437a-8d73-2ecca218ad1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=36e1391f-eaf8-490f-8434-c3fb25eed0a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.102 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 unbound from our chassis#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.104 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7291d1a7-dc10-4128-bb52-e5f6435b2b38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.106 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 namespace which is not needed anymore#033[00m
Oct 11 05:27:25 np0005481065 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 11 05:27:25 np0005481065 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007f.scope: Consumed 15.271s CPU time.
Oct 11 05:27:25 np0005481065 systemd-machined[215705]: Machine qemu-151-instance-0000007f terminated.
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.214 2 INFO nova.virt.libvirt.driver [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance destroyed successfully.#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.214 2 DEBUG nova.objects.instance [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 36d0acda-9f37-4308-aa46-973f11c57b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.230 2 DEBUG nova.virt.libvirt.vif [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1410731753',display_name='tempest-TestNetworkBasicOps-server-1410731753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1410731753',id=127,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfFDuZ2VUr+6EowKBtrZDd7zud1Oa+cp6ZA/ixez4vTqy3B2Qz2dWoCxMYTkI6OHKrvBP/PVMobSzlTBVFJQHby9DrXveKkH7hKU36MweJuxInYFJMR8tgPbvonZEpvAg==',key_name='tempest-TestNetworkBasicOps-714435839',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4f9xvxhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:19Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=36d0acda-9f37-4308-aa46-973f11c57b0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.230 2 DEBUG nova.network.os_vif_util [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.231 2 DEBUG nova.network.os_vif_util [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.232 2 DEBUG os_vif [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36e1391f-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.243 2 INFO os_vif [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea')#033[00m
Oct 11 05:27:25 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : haproxy version is 2.8.14-c23fe91
Oct 11 05:27:25 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : path to executable is /usr/sbin/haproxy
Oct 11 05:27:25 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [WARNING]  (400368) : Exiting Master process...
Oct 11 05:27:25 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [WARNING]  (400368) : Exiting Master process...
Oct 11 05:27:25 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [ALERT]    (400368) : Current worker (400370) exited with code 143 (Terminated)
Oct 11 05:27:25 np0005481065 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [WARNING]  (400368) : All workers exited. Exiting... (0)
Oct 11 05:27:25 np0005481065 systemd[1]: libpod-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5.scope: Deactivated successfully.
Oct 11 05:27:25 np0005481065 podman[403046]: 2025-10-11 09:27:25.314910139 +0000 UTC m=+0.064065499 container died 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:27:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5-userdata-shm.mount: Deactivated successfully.
Oct 11 05:27:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7b93609cfe788969adf943db69f3753cdeb411820f1a1f8aaf68d3557d2f4c37-merged.mount: Deactivated successfully.
Oct 11 05:27:25 np0005481065 podman[403046]: 2025-10-11 09:27:25.387275471 +0000 UTC m=+0.136430791 container cleanup 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:27:25 np0005481065 systemd[1]: libpod-conmon-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5.scope: Deactivated successfully.
Oct 11 05:27:25 np0005481065 podman[403093]: 2025-10-11 09:27:25.492076079 +0000 UTC m=+0.077132918 container remove 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.498 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05458a23-d688-42a6-afc1-8b075bfa751e]: (4, ('Sat Oct 11 09:27:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 (98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5)\n98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5\nSat Oct 11 09:27:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 (98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5)\n98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.501 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d86ca1d-b08b-4d65-a816-01afa4e6731d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.502 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:25 np0005481065 kernel: tapb9f9ae84-90: left promiscuous mode
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.506 2 DEBUG nova.compute.manager [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.507 2 DEBUG oslo_concurrency.lockutils [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.507 2 DEBUG oslo_concurrency.lockutils [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.507 2 DEBUG oslo_concurrency.lockutils [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.508 2 DEBUG nova.compute.manager [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.508 2 DEBUG nova.compute.manager [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.510 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9fbb67-dc17-4fb1-8967-69df9d772a01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.534 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9911584f-e591-4f7a-bd90-1edca80cc636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9682d68-234e-4b86-b0f8-e17c341720e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f91175-16b4-4bb1-8fd1-c0f30d136e60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662305, 'reachable_time': 23417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403108, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 systemd[1]: run-netns-ovnmeta\x2db9f9ae84\x2d9b18\x2d48f7\x2dbff2\x2d94e8835de5c8.mount: Deactivated successfully.
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.560 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:27:25 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.560 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[111d9086-a92d-46af-a472-d3ee4f75503b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.877 2 INFO nova.virt.libvirt.driver [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deleting instance files /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e_del#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.878 2 INFO nova.virt.libvirt.driver [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deletion of /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e_del complete#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.939 2 INFO nova.compute.manager [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.940 2 DEBUG oslo.service.loopingcall [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.940 2 DEBUG nova.compute.manager [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:27:25 np0005481065 nova_compute[260935]: 2025-10-11 09:27:25.940 2 DEBUG nova.network.neutron [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.595 2 DEBUG nova.network.neutron [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.596 2 DEBUG nova.network.neutron [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.621 2 DEBUG oslo_concurrency.lockutils [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:27:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958491461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:27:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:27:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958491461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.750 2 DEBUG nova.network.neutron [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.772 2 INFO nova.compute.manager [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct 11 05:27:26 np0005481065 podman[403110]: 2025-10-11 09:27:26.813919692 +0000 UTC m=+0.112744183 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 11 05:27:26 np0005481065 podman[403111]: 2025-10-11 09:27:26.823507963 +0000 UTC m=+0.113549306 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.835 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:26 np0005481065 nova_compute[260935]: 2025-10-11 09:27:26.835 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:26Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:a2:ce 10.100.0.6
Oct 11 05:27:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:26Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:a2:ce 10.100.0.6
Oct 11 05:27:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 97 op/s
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.018 2 DEBUG oslo_concurrency.processutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689211471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.466 2 DEBUG oslo_concurrency.processutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.473 2 DEBUG nova.compute.provider_tree [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.594 2 DEBUG nova.compute.manager [req-7f248af4-2386-4b05-86ab-2e389a0c36c2 req-da9fa276-27c1-4156-b2c4-668607f81835 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-deleted-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.714 2 DEBUG nova.scheduler.client.report [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.747 2 DEBUG nova.compute.manager [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.748 2 DEBUG oslo_concurrency.lockutils [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.748 2 DEBUG oslo_concurrency.lockutils [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.749 2 DEBUG oslo_concurrency.lockutils [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.749 2 DEBUG nova.compute.manager [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.750 2 WARNING nova.compute.manager [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.871 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.911 2 INFO nova.scheduler.client.report [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 36d0acda-9f37-4308-aa46-973f11c57b0e#033[00m
Oct 11 05:27:27 np0005481065 nova_compute[260935]: 2025-10-11 09:27:27.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:28 np0005481065 nova_compute[260935]: 2025-10-11 09:27:28.002 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 11 05:27:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:30 np0005481065 nova_compute[260935]: 2025-10-11 09:27:30.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 11 05:27:32 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 11 05:27:32 np0005481065 nova_compute[260935]: 2025-10-11 09:27:32.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:33 np0005481065 nova_compute[260935]: 2025-10-11 09:27:33.461 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174838.46024, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:33 np0005481065 nova_compute[260935]: 2025-10-11 09:27:33.462 2 INFO nova.compute.manager [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:27:33 np0005481065 nova_compute[260935]: 2025-10-11 09:27:33.495 2 DEBUG nova.compute.manager [None req-4c334cad-f73d-4fce-a940-c6f277deb3c4 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:33Z|01423|binding|INFO|Releasing lport 219b454c-6c32-4a38-b10c-afcdb1be1980 from this chassis (sb_readonly=0)
Oct 11 05:27:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:33Z|01424|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:27:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:33Z|01425|binding|INFO|Releasing lport 48cfaf23-d7a3-467e-89cb-3eb3687ea15e from this chassis (sb_readonly=0)
Oct 11 05:27:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:33Z|01426|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:27:33 np0005481065 nova_compute[260935]: 2025-10-11 09:27:33.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:34 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.057034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855057126, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 790, "num_deletes": 251, "total_data_size": 996698, "memory_usage": 1012728, "flush_reason": "Manual Compaction"}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855071495, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 975848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53556, "largest_seqno": 54345, "table_properties": {"data_size": 971815, "index_size": 1749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9081, "raw_average_key_size": 19, "raw_value_size": 963756, "raw_average_value_size": 2072, "num_data_blocks": 78, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174795, "oldest_key_time": 1760174795, "file_creation_time": 1760174855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 14579 microseconds, and 6411 cpu microseconds.
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.071615) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 975848 bytes OK
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.071652) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.073778) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.073807) EVENT_LOG_v1 {"time_micros": 1760174855073796, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.073865) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 992717, prev total WAL file size 992717, number of live WAL files 2.
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.074845) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(952KB)], [125(8707KB)]
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855074898, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9892312, "oldest_snapshot_seqno": -1}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7240 keys, 8205270 bytes, temperature: kUnknown
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855150155, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8205270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8159932, "index_size": 26153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 189588, "raw_average_key_size": 26, "raw_value_size": 8033375, "raw_average_value_size": 1109, "num_data_blocks": 1011, "num_entries": 7240, "num_filter_entries": 7240, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.150457) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8205270 bytes
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.151934) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.3 rd, 108.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.5 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(18.5) write-amplify(8.4) OK, records in: 7754, records dropped: 514 output_compression: NoCompression
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.151956) EVENT_LOG_v1 {"time_micros": 1760174855151946, "job": 76, "event": "compaction_finished", "compaction_time_micros": 75356, "compaction_time_cpu_micros": 37507, "output_level": 6, "num_output_files": 1, "total_output_size": 8205270, "num_input_records": 7754, "num_output_records": 7240, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855152457, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855154394, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.074707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:27:35 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:27:35 np0005481065 nova_compute[260935]: 2025-10-11 09:27:35.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:36 np0005481065 nova_compute[260935]: 2025-10-11 09:27:36.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:36 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.536 2 DEBUG nova.compute.manager [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.536 2 DEBUG nova.compute.manager [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.537 2 DEBUG oslo_concurrency.lockutils [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.537 2 DEBUG oslo_concurrency.lockutils [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.537 2 DEBUG nova.network.neutron [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.629 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.631 2 INFO nova.compute.manager [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Terminating instance#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.632 2 DEBUG nova.compute.manager [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:27:37 np0005481065 kernel: tapd227ca0f-e8 (unregistering): left promiscuous mode
Oct 11 05:27:37 np0005481065 NetworkManager[44960]: <info>  [1760174857.6984] device (tapd227ca0f-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:37Z|01427|binding|INFO|Releasing lport d227ca0f-e837-4a46-be0f-e66b72db8028 from this chassis (sb_readonly=0)
Oct 11 05:27:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:37Z|01428|binding|INFO|Setting lport d227ca0f-e837-4a46-be0f-e66b72db8028 down in Southbound
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:37Z|01429|binding|INFO|Removing iface tapd227ca0f-e8 ovn-installed in OVS
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.721 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:a2:ce 10.100.0.6'], port_security=['fa:16:3e:01:a2:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d227ca0f-e837-4a46-be0f-e66b72db8028) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.722 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d227ca0f-e837-4a46-be0f-e66b72db8028 in datapath 024b1f88-7312-4f05-a55e-4c82e878906e unbound from our chassis#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.723 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 024b1f88-7312-4f05-a55e-4c82e878906e#033[00m
Oct 11 05:27:37 np0005481065 kernel: tap731dd7de-8d (unregistering): left promiscuous mode
Oct 11 05:27:37 np0005481065 NetworkManager[44960]: <info>  [1760174857.7301] device (tap731dd7de-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:37Z|01430|binding|INFO|Releasing lport 731dd7de-8de3-4b67-a34b-8fef195606b3 from this chassis (sb_readonly=0)
Oct 11 05:27:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:37Z|01431|binding|INFO|Setting lport 731dd7de-8de3-4b67-a34b-8fef195606b3 down in Southbound
Oct 11 05:27:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:37Z|01432|binding|INFO|Removing iface tap731dd7de-8d ovn-installed in OVS
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.751 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], port_security=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3b:4003/64', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=731dd7de-8de3-4b67-a34b-8fef195606b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.750 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96226645-1c4f-46c1-a0d7-ed7e5f771afa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.784 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3139bd-8c1a-4ef3-a518-2632115e5cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.788 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[85f50159-a9f5-4337-91f6-969c12a7d990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 11 05:27:37 np0005481065 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000082.scope: Consumed 14.247s CPU time.
Oct 11 05:27:37 np0005481065 systemd-machined[215705]: Machine qemu-154-instance-00000082 terminated.
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.818 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2d34e4-5165-44c5-8eba-d8c57e24183a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.834 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccb42ad-9891-4317-9d28-b1773327fda3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403195, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.854 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[515a738f-c5f9-4e6b-8906-9b2af3f42d03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663430, 'tstamp': 663430}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403196, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663436, 'tstamp': 663436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403196, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.855 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:37 np0005481065 NetworkManager[44960]: <info>  [1760174857.8565] manager: (tap731dd7de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/559)
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.884 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024b1f88-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.884 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.885 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap024b1f88-70, col_values=(('external_ids', {'iface-id': '219b454c-6c32-4a38-b10c-afcdb1be1980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.885 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.887 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 731dd7de-8de3-4b67-a34b-8fef195606b3 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.888 2 INFO nova.virt.libvirt.driver [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance destroyed successfully.#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.888 2 DEBUG nova.objects.instance [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid ac163a38-d5cb-4d00-af1b-f3361849dd68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894decad-3bed-4c55-b643-5fbe5479bf3f#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.905 2 DEBUG nova.virt.libvirt.vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:27:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:27:14Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.906 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.907 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.907 2 DEBUG os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd227ca0f-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.911 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50f3eeed-b567-42fb-9478-ca25c50580b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.919 2 INFO os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8')#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.920 2 DEBUG nova.virt.libvirt.vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:27:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:27:14Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.920 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.921 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.921 2 DEBUG os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap731dd7de-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.928 2 INFO os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d')#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.955 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e73e3e2-f270-4600-90ff-8b2be932c1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.959 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b7a2ff-5486-4188-adda-830eb9701b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.978 2 DEBUG nova.compute.manager [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-unplugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.979 2 DEBUG oslo_concurrency.lockutils [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.979 2 DEBUG oslo_concurrency.lockutils [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.980 2 DEBUG oslo_concurrency.lockutils [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.981 2 DEBUG nova.compute.manager [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-unplugged-d227ca0f-e837-4a46-be0f-e66b72db8028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.981 2 DEBUG nova.compute.manager [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-unplugged-d227ca0f-e837-4a46-be0f-e66b72db8028 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:27:37 np0005481065 nova_compute[260935]: 2025-10-11 09:27:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.999 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[35ec33e9-7cbd-4337-80fa-586941e7462f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d641779c-57ee-43fb-abe0-8ef72297602e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403243, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc9b6dd-cabb-4c2e-8a75-93f0bc1f5228]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap894decad-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663658, 'tstamp': 663658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403244, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.066 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.070 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894decad-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.071 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894decad-30, col_values=(('external_ids', {'iface-id': '48cfaf23-d7a3-467e-89cb-3eb3687ea15e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:38 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.412 2 INFO nova.virt.libvirt.driver [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deleting instance files /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68_del#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.413 2 INFO nova.virt.libvirt.driver [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deletion of /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68_del complete#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.464 2 INFO nova.compute.manager [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.464 2 DEBUG oslo.service.loopingcall [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.464 2 DEBUG nova.compute.manager [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.465 2 DEBUG nova.network.neutron [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.901 2 DEBUG nova.network.neutron [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updated VIF entry in instance network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.901 2 DEBUG nova.network.neutron [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:38 np0005481065 nova_compute[260935]: 2025-10-11 09:27:38.927 2 DEBUG oslo_concurrency.lockutils [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:38 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 11 05:27:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:39 np0005481065 nova_compute[260935]: 2025-10-11 09:27:39.455 2 DEBUG nova.network.neutron [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:39 np0005481065 nova_compute[260935]: 2025-10-11 09:27:39.478 2 INFO nova.compute.manager [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 1.01 seconds to deallocate network for instance.#033[00m
Oct 11 05:27:39 np0005481065 nova_compute[260935]: 2025-10-11 09:27:39.530 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:39 np0005481065 nova_compute[260935]: 2025-10-11 09:27:39.530 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:39 np0005481065 nova_compute[260935]: 2025-10-11 09:27:39.685 2 DEBUG oslo_concurrency.processutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239567093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.169 2 DEBUG oslo_concurrency.processutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.178 2 DEBUG nova.compute.provider_tree [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.189 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.190 2 DEBUG oslo_concurrency.lockutils [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.191 2 DEBUG oslo_concurrency.lockutils [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.191 2 DEBUG oslo_concurrency.lockutils [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.192 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.192 2 WARNING nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received unexpected event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.193 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-deleted-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.193 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-deleted-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.209 2 DEBUG nova.scheduler.client.report [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.218 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174845.2107964, 36d0acda-9f37-4308-aa46-973f11c57b0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.218 2 INFO nova.compute.manager [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.241 2 DEBUG nova.compute.manager [None req-3db0e2e4-6540-478d-a5a3-ee953752fd0b - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.243 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.269 2 INFO nova.scheduler.client.report [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance ac163a38-d5cb-4d00-af1b-f3361849dd68#033[00m
Oct 11 05:27:40 np0005481065 nova_compute[260935]: 2025-10-11 09:27:40.338 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:40 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 3 op/s
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.291 2 DEBUG nova.compute.manager [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.292 2 DEBUG nova.compute.manager [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.293 2 DEBUG oslo_concurrency.lockutils [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.294 2 DEBUG oslo_concurrency.lockutils [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.294 2 DEBUG nova.network.neutron [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.362 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.363 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.363 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.363 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.364 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.365 2 INFO nova.compute.manager [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Terminating instance#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.367 2 DEBUG nova.compute.manager [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:27:41 np0005481065 kernel: tap0f8506e7-c0 (unregistering): left promiscuous mode
Oct 11 05:27:41 np0005481065 NetworkManager[44960]: <info>  [1760174861.4397] device (tap0f8506e7-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:27:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:41Z|01433|binding|INFO|Releasing lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b from this chassis (sb_readonly=0)
Oct 11 05:27:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:41Z|01434|binding|INFO|Setting lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b down in Southbound
Oct 11 05:27:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:41Z|01435|binding|INFO|Removing iface tap0f8506e7-c0 ovn-installed in OVS
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.466 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:67:d9 10.100.0.4'], port_security=['fa:16:3e:f4:67:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f8506e7-c03f-48bd-938d-f3b4cea2675b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.467 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f8506e7-c03f-48bd-938d-f3b4cea2675b in datapath 024b1f88-7312-4f05-a55e-4c82e878906e unbound from our chassis#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.469 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 024b1f88-7312-4f05-a55e-4c82e878906e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a18c3a07-5209-4e50-8e23-97d11c912dfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.472 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e namespace which is not needed anymore#033[00m
Oct 11 05:27:41 np0005481065 kernel: tapa0aadf8d-3a (unregistering): left promiscuous mode
Oct 11 05:27:41 np0005481065 NetworkManager[44960]: <info>  [1760174861.4930] device (tapa0aadf8d-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:41Z|01436|binding|INFO|Releasing lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 from this chassis (sb_readonly=0)
Oct 11 05:27:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:41Z|01437|binding|INFO|Setting lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 down in Southbound
Oct 11 05:27:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:41Z|01438|binding|INFO|Removing iface tapa0aadf8d-3a ovn-installed in OVS
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.511 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], port_security=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4d:e936/64', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct 11 05:27:41 np0005481065 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000080.scope: Consumed 16.309s CPU time.
Oct 11 05:27:41 np0005481065 systemd-machined[215705]: Machine qemu-152-instance-00000080 terminated.
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.640 2 INFO nova.virt.libvirt.driver [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance destroyed successfully.#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.641 2 DEBUG nova.objects.instance [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid b39b8161-8a46-46fe-8a2a-0fc6b4eab850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.655 2 DEBUG nova.virt.libvirt.vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:32Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.657 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.659 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.660 2 DEBUG os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f8506e7-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.675 2 INFO os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0')#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.676 2 DEBUG nova.virt.libvirt.vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:32Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.676 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.677 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.678 2 DEBUG os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0aadf8d-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.688 2 INFO os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a')#033[00m
Oct 11 05:27:41 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : haproxy version is 2.8.14-c23fe91
Oct 11 05:27:41 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : path to executable is /usr/sbin/haproxy
Oct 11 05:27:41 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [WARNING]  (400702) : Exiting Master process...
Oct 11 05:27:41 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [WARNING]  (400702) : Exiting Master process...
Oct 11 05:27:41 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [ALERT]    (400702) : Current worker (400704) exited with code 143 (Terminated)
Oct 11 05:27:41 np0005481065 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [WARNING]  (400702) : All workers exited. Exiting... (0)
Oct 11 05:27:41 np0005481065 systemd[1]: libpod-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5.scope: Deactivated successfully.
Oct 11 05:27:41 np0005481065 podman[403304]: 2025-10-11 09:27:41.711060167 +0000 UTC m=+0.082507879 container died ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:27:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5-userdata-shm.mount: Deactivated successfully.
Oct 11 05:27:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c7364967b58c22da1966d5d1241e71eee425dfe401e15845ca723c5e08b53f77-merged.mount: Deactivated successfully.
Oct 11 05:27:41 np0005481065 podman[403304]: 2025-10-11 09:27:41.772224093 +0000 UTC m=+0.143671795 container cleanup ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:27:41 np0005481065 systemd[1]: libpod-conmon-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5.scope: Deactivated successfully.
Oct 11 05:27:41 np0005481065 podman[403362]: 2025-10-11 09:27:41.879639505 +0000 UTC m=+0.073235168 container remove ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[561c59b0-2f0f-497e-bc56-25e761adbfa4]: (4, ('Sat Oct 11 09:27:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e (ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5)\nebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5\nSat Oct 11 09:27:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e (ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5)\nebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.893 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9f16f4a7-1f8f-4cc8-b3a8-201e8ca4aba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.895 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 kernel: tap024b1f88-70: left promiscuous mode
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 nova_compute[260935]: 2025-10-11 09:27:41.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.921 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5478b5-3cf7-4f8d-9d6e-fa8b8fc5bf05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ed1743-fd74-4267-a1f8-749815a96ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c52f2e5-d69a-46ce-b601-f984e638c64b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.976 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10b35cf3-c5e5-4917-b126-958c60a2b425]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663402, 'reachable_time': 32325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403379, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 systemd[1]: run-netns-ovnmeta\x2d024b1f88\x2d7312\x2d4f05\x2da55e\x2d4c82e878906e.mount: Deactivated successfully.
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.983 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.984 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2243a2c6-8a9b-45b6-86ca-fd4aefd4af5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.988 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.989 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 894decad-3bed-4c55-b643-5fbe5479bf3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.990 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[326512c9-b2fc-4733-be83-969a9e622159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:41 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.991 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f namespace which is not needed anymore#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.182 2 INFO nova.virt.libvirt.driver [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deleting instance files /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_del#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.183 2 INFO nova.virt.libvirt.driver [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deletion of /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_del complete#033[00m
Oct 11 05:27:42 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : haproxy version is 2.8.14-c23fe91
Oct 11 05:27:42 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : path to executable is /usr/sbin/haproxy
Oct 11 05:27:42 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [WARNING]  (400777) : Exiting Master process...
Oct 11 05:27:42 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [WARNING]  (400777) : Exiting Master process...
Oct 11 05:27:42 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [ALERT]    (400777) : Current worker (400779) exited with code 143 (Terminated)
Oct 11 05:27:42 np0005481065 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [WARNING]  (400777) : All workers exited. Exiting... (0)
Oct 11 05:27:42 np0005481065 systemd[1]: libpod-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope: Deactivated successfully.
Oct 11 05:27:42 np0005481065 conmon[400773]: conmon 7d8390a7292f4de4796a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope/container/memory.events
Oct 11 05:27:42 np0005481065 podman[403397]: 2025-10-11 09:27:42.236533996 +0000 UTC m=+0.075136771 container died 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.260 2 INFO nova.compute.manager [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.261 2 DEBUG oslo.service.loopingcall [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.262 2 DEBUG nova.compute.manager [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.262 2 DEBUG nova.network.neutron [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:27:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dfc0d2ca647c901f7f4d6b1ca7ee3a7b693ee57f6d1e527af9b3cd693721335a-merged.mount: Deactivated successfully.
Oct 11 05:27:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79-userdata-shm.mount: Deactivated successfully.
Oct 11 05:27:42 np0005481065 podman[403397]: 2025-10-11 09:27:42.279450048 +0000 UTC m=+0.118052813 container cleanup 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:27:42 np0005481065 systemd[1]: libpod-conmon-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope: Deactivated successfully.
Oct 11 05:27:42 np0005481065 podman[403426]: 2025-10-11 09:27:42.375457807 +0000 UTC m=+0.058951815 container remove 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.384 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16c320e8-2a88-470f-8f75-8b7dc20b9e4e]: (4, ('Sat Oct 11 09:27:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f (7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79)\n7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79\nSat Oct 11 09:27:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f (7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79)\n7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f92563-257d-4b4e-9914-77cd69bae63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.387 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:42 np0005481065 kernel: tap894decad-30: left promiscuous mode
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.465 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f99b020-3827-4040-83f4-e704988bb204]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1af684bd-9f58-4c3a-afbb-4140c2467d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.495 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[def582c5-5468-47f8-bd51-2865bfab4dda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.519 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4be1f2-e445-41d8-90a4-747b88cf0841]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663629, 'reachable_time': 38945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403444, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.522 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:27:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.523 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a0870410-00d0-4e6d-ae3f-463795c6c428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.654 2 DEBUG nova.network.neutron [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updated VIF entry in instance network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.655 2 DEBUG nova.network.neutron [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.683 2 DEBUG oslo_concurrency.lockutils [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.721 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.722 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.722 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.745 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 11 05:27:42 np0005481065 systemd[1]: run-netns-ovnmeta\x2d894decad\x2d3bed\x2d4c55\x2db643\x2d5fbe5479bf3f.mount: Deactivated successfully.
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.878 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.879 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.879 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:27:42 np0005481065 nova_compute[260935]: 2025-10-11 09:27:42.880 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:42 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 25 KiB/s wr, 59 op/s
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.208 2 DEBUG nova.network.neutron [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.228 2 INFO nova.compute.manager [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 0.97 seconds to deallocate network for instance.#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.288 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.289 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.392 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-unplugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.392 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.393 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.393 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.394 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-unplugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.394 2 WARNING nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-unplugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.395 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.395 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.396 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.396 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.397 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.397 2 WARNING nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.398 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-deleted-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.398 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-deleted-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.416 2 DEBUG oslo_concurrency.processutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208272319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.911 2 DEBUG oslo_concurrency.processutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.919 2 DEBUG nova.compute.provider_tree [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.945 2 DEBUG nova.scheduler.client.report [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.954 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:43 np0005481065 nova_compute[260935]: 2025-10-11 09:27:43.995 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.000 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.001 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.002 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.003 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.003 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.004 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.042 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.044 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.045 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.045 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.046 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.104 2 INFO nova.scheduler.client.report [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance b39b8161-8a46-46fe-8a2a-0fc6b4eab850#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.194 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294313095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.521 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.651 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.651 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.652 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.656 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.656 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.660 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.894 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2831MB free_disk=59.830665588378906GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.896 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.896 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:44 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 58 op/s
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.988 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.989 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.989 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.990 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:27:44 np0005481065 nova_compute[260935]: 2025-10-11 09:27:44.990 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.069 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419163320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.547 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.553 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.572 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.603 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.604 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.616 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.617 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.638 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.732 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.733 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.741 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.742 2 INFO nova.compute.claims [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:27:45 np0005481065 nova_compute[260935]: 2025-10-11 09:27:45.890 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:27:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877685704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.401 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.411 2 DEBUG nova.compute.provider_tree [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.430 2 DEBUG nova.scheduler.client.report [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.455 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.457 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.513 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.514 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.537 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.559 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.670 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.672 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.672 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Creating image(s)#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.716 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.765 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.805 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.810 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:46 np0005481065 podman[403534]: 2025-10-11 09:27:46.822535855 +0000 UTC m=+0.108350418 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.892 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.893 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.894 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.894 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.918 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:46 np0005481065 nova_compute[260935]: 2025-10-11 09:27:46.921 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:46 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 58 op/s
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.232 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.287 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.383 2 DEBUG nova.policy [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.388 2 DEBUG nova.objects.instance [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid f4e61943-e59b-40a1-ab1e-07ba5f131bb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.410 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.410 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Ensure instance console log exists: /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.410 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.411 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:47 np0005481065 nova_compute[260935]: 2025-10-11 09:27:47.411 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:48 np0005481065 nova_compute[260935]: 2025-10-11 09:27:48.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:48 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 368 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Oct 11 05:27:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:49 np0005481065 nova_compute[260935]: 2025-10-11 09:27:49.282 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Successfully created port: c550ea63-9098-4f0c-95b9-107f8a0a8a5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:27:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:49Z|01439|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:27:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:49Z|01440|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:27:49 np0005481065 nova_compute[260935]: 2025-10-11 09:27:49.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:49Z|01441|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:27:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:49Z|01442|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:27:49 np0005481065 nova_compute[260935]: 2025-10-11 09:27:49.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.811 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Successfully updated port: c550ea63-9098-4f0c-95b9-107f8a0a8a5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.831 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.831 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.832 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.917 2 DEBUG nova.compute.manager [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.917 2 DEBUG nova.compute.manager [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing instance network info cache due to event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:27:50 np0005481065 nova_compute[260935]: 2025-10-11 09:27:50.918 2 DEBUG oslo_concurrency.lockutils [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:27:50 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 368 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.6 MiB/s wr, 82 op/s
Oct 11 05:27:51 np0005481065 nova_compute[260935]: 2025-10-11 09:27:51.034 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:27:51 np0005481065 podman[403745]: 2025-10-11 09:27:51.735659716 +0000 UTC m=+0.106561228 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:27:51 np0005481065 nova_compute[260935]: 2025-10-11 09:27:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:27:52 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3c9dfe89-70fd-4558-a95e-daf0c3ab2b9d does not exist
Oct 11 05:27:52 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3ace1c22-5814-42e5-8219-fa497eaf5255 does not exist
Oct 11 05:27:52 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a5ba44a9-15f4-4ab6-b007-30c56b812b4d does not exist
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.738 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.759 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.759 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance network_info: |[{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.760 2 DEBUG oslo_concurrency.lockutils [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.760 2 DEBUG nova.network.neutron [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:27:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.809 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start _get_guest_xml network_info=[{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.817 2 WARNING nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.827 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.827 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.833 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.834 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.834 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.835 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.835 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.835 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.837 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.837 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.837 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.838 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.841 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.887 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174857.8831065, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.888 2 INFO nova.compute.manager [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:27:52 np0005481065 nova_compute[260935]: 2025-10-11 09:27:52.908 2 DEBUG nova.compute.manager [None req-e39d945a-293c-45ae-b806-3920025e28ff - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:27:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070273049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.333 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.361 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.365 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.57804082 +0000 UTC m=+0.068852874 container create 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:27:53 np0005481065 systemd[1]: Started libpod-conmon-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope.
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.548308691 +0000 UTC m=+0.039120725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:27:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.689307379 +0000 UTC m=+0.180119483 container init 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.701325848 +0000 UTC m=+0.192137862 container start 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.704924969 +0000 UTC m=+0.195737073 container attach 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:27:53 np0005481065 systemd[1]: libpod-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope: Deactivated successfully.
Oct 11 05:27:53 np0005481065 determined_blackburn[404087]: 167 167
Oct 11 05:27:53 np0005481065 conmon[404087]: conmon 109fb305e239d2d80bbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope/container/memory.events
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.712675568 +0000 UTC m=+0.203487622 container died 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:27:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4e088d4b3577a88df5f158b15bb27f32a3030334533fd2f3b5503b7a061677f8-merged.mount: Deactivated successfully.
Oct 11 05:27:53 np0005481065 podman[404054]: 2025-10-11 09:27:53.766321432 +0000 UTC m=+0.257133456 container remove 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:27:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:27:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630305984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:27:53 np0005481065 systemd[1]: libpod-conmon-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope: Deactivated successfully.
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.808 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.811 2 DEBUG nova.virt.libvirt.vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:27:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-432477065',display_name='tempest-TestNetworkBasicOps-server-432477065',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-432477065',id=131,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjbEjzk/3FFF7Yo/G8/P29WblN99ZiVV249D7QjKIfGUPghnNZARefNU8o4oiRaxrPiX6DdwLvn7rFQvHVV6IMhrMlVIFfDDgq3XHS93q11oJL1iJIdpdsGtZABU/MPnw==',key_name='tempest-TestNetworkBasicOps-52027208',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-kx08je2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:27:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=f4e61943-e59b-40a1-ab1e-07ba5f131bb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.812 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.813 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.815 2 DEBUG nova.objects.instance [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4e61943-e59b-40a1-ab1e-07ba5f131bb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.832 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <uuid>f4e61943-e59b-40a1-ab1e-07ba5f131bb3</uuid>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <name>instance-00000083</name>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestNetworkBasicOps-server-432477065</nova:name>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:27:52</nova:creationTime>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <nova:port uuid="c550ea63-9098-4f0c-95b9-107f8a0a8a5c">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <entry name="serial">f4e61943-e59b-40a1-ab1e-07ba5f131bb3</entry>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <entry name="uuid">f4e61943-e59b-40a1-ab1e-07ba5f131bb3</entry>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:09:48:8e"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <target dev="tapc550ea63-90"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/console.log" append="off"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:27:53 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:27:53 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:27:53 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:27:53 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.840 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Preparing to wait for external event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.841 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.841 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.842 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.843 2 DEBUG nova.virt.libvirt.vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:27:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-432477065',display_name='tempest-TestNetworkBasicOps-server-432477065',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-432477065',id=131,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjbEjzk/3FFF7Yo/G8/P29WblN99ZiVV249D7QjKIfGUPghnNZARefNU8o4oiRaxrPiX6DdwLvn7rFQvHVV6IMhrMlVIFfDDgq3XHS93q11oJL1iJIdpdsGtZABU/MPnw==',key_name='tempest-TestNetworkBasicOps-52027208',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-kx08je2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:27:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=f4e61943-e59b-40a1-ab1e-07ba5f131bb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.843 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.844 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.845 2 DEBUG os_vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc550ea63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc550ea63-90, col_values=(('external_ids', {'iface-id': 'c550ea63-9098-4f0c-95b9-107f8a0a8a5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:48:8e', 'vm-uuid': 'f4e61943-e59b-40a1-ab1e-07ba5f131bb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:53 np0005481065 NetworkManager[44960]: <info>  [1760174873.8554] manager: (tapc550ea63-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.864 2 INFO os_vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90')#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.910 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.911 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.912 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:09:48:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.913 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Using config drive#033[00m
Oct 11 05:27:53 np0005481065 nova_compute[260935]: 2025-10-11 09:27:53.934 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:54 np0005481065 podman[404132]: 2025-10-11 09:27:54.059553767 +0000 UTC m=+0.071175829 container create 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:27:54 np0005481065 systemd[1]: Started libpod-conmon-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope.
Oct 11 05:27:54 np0005481065 podman[404132]: 2025-10-11 09:27:54.030465087 +0000 UTC m=+0.042087239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:27:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:54 np0005481065 podman[404132]: 2025-10-11 09:27:54.155512565 +0000 UTC m=+0.167134657 container init 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:27:54 np0005481065 podman[404132]: 2025-10-11 09:27:54.169711066 +0000 UTC m=+0.181333128 container start 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:27:54 np0005481065 podman[404132]: 2025-10-11 09:27:54.174938704 +0000 UTC m=+0.186560846 container attach 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:27:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:54.402 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:54.405 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.445 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Creating config drive at /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.449 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwkl9vmtt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.625 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwkl9vmtt" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.653 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.657 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.883 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.885 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deleting local config drive /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config because it was imported into RBD.#033[00m
Oct 11 05:27:54 np0005481065 NetworkManager[44960]: <info>  [1760174874.9711] manager: (tapc550ea63-90): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:27:54
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.control', 'images', '.mgr']
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:27:54 np0005481065 kernel: tapc550ea63-90: entered promiscuous mode
Oct 11 05:27:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:54Z|01443|binding|INFO|Claiming lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c for this chassis.
Oct 11 05:27:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:54Z|01444|binding|INFO|c550ea63-9098-4f0c-95b9-107f8a0a8a5c: Claiming fa:16:3e:09:48:8e 10.100.0.10
Oct 11 05:27:54 np0005481065 nova_compute[260935]: 2025-10-11 09:27:54.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:54.999 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:48:8e 10.100.0.10'], port_security=['fa:16:3e:09:48:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4e61943-e59b-40a1-ab1e-07ba5f131bb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-066f5196-fbff-458b-ab48-5301dc324450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fa9b7a7-a9c5-4782-b5d9-2526bb91182e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd668780-bea0-4b19-a159-529899429d34, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c550ea63-9098-4f0c-95b9-107f8a0a8a5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.001 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c550ea63-9098-4f0c-95b9-107f8a0a8a5c in datapath 066f5196-fbff-458b-ab48-5301dc324450 bound to our chassis#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.003 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 066f5196-fbff-458b-ab48-5301dc324450#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.007 2 DEBUG nova.network.neutron [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updated VIF entry in instance network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.008 2 DEBUG nova.network.neutron [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:27:55 np0005481065 systemd-machined[215705]: New machine qemu-155-instance-00000083.
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.019 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee5878a-4f5f-4e6b-bfb7-474e35217c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.020 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap066f5196-f1 in ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.024 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap066f5196-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f32bb51-80e5-48e3-bc41-eb4924c81d90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[160348b7-2299-4851-bf85-a831a4f9f011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.033 2 DEBUG oslo_concurrency.lockutils [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.040 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5d63f022-3d4c-4bb6-9bae-1d0ccffc0720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 systemd[1]: Started Virtual Machine qemu-155-instance-00000083.
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.073 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79dfee6f-1e21-49b8-aaf6-7b6d8d8da656]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 systemd-udevd[404217]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:55Z|01445|binding|INFO|Setting lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c ovn-installed in OVS
Oct 11 05:27:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:55Z|01446|binding|INFO|Setting lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c up in Southbound
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:55 np0005481065 NetworkManager[44960]: <info>  [1760174875.0954] device (tapc550ea63-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:27:55 np0005481065 NetworkManager[44960]: <info>  [1760174875.0969] device (tapc550ea63-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.113 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ec94c8-48bc-460d-bad5-ce122726238a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.123 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28210ecf-1dee-442e-a9a8-636ec2102d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 NetworkManager[44960]: <info>  [1760174875.1263] manager: (tap066f5196-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/562)
Oct 11 05:27:55 np0005481065 systemd-udevd[404222]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.172 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[026c485e-70d5-41d5-8328-8ea53f707abe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.177 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cf61e3-ce8d-463e-aaba-3b1af36c2922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 NetworkManager[44960]: <info>  [1760174875.2072] device (tap066f5196-f0): carrier: link connected
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.214 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[83bed4e9-5e2f-4b2b-b59d-00ec52ca0e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c604942-d286-41eb-97bf-caad4d03cd9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap066f5196-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:73:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671990, 'reachable_time': 27379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404255, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b41df90d-8d38-4373-8ac6-a8cbc2599008]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:73af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671990, 'tstamp': 671990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404256, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ad61ee-2200-45f3-8210-2562d5118aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap066f5196-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:73:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671990, 'reachable_time': 27379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404260, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.322 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec318b6c-f319-4497-a33c-36a7d53f99dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.378 2 DEBUG nova.compute.manager [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.379 2 DEBUG oslo_concurrency.lockutils [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.379 2 DEBUG oslo_concurrency.lockutils [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.380 2 DEBUG oslo_concurrency.lockutils [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.380 2 DEBUG nova.compute.manager [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Processing event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.393 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[322753c1-7146-430b-9738-b8c38228c9ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.394 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap066f5196-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.394 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.395 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap066f5196-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:55 np0005481065 kernel: tap066f5196-f0: entered promiscuous mode
Oct 11 05:27:55 np0005481065 NetworkManager[44960]: <info>  [1760174875.3973] manager: (tap066f5196-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap066f5196-f0, col_values=(('external_ids', {'iface-id': '7affb32e-f5e8-4572-9839-dbf40ccc596e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:27:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:55Z|01447|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:27:55 np0005481065 nova_compute[260935]: 2025-10-11 09:27:55.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.416 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/066f5196-fbff-458b-ab48-5301dc324450.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/066f5196-fbff-458b-ab48-5301dc324450.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.417 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[923a403e-4f3d-4ca6-beea-e4ba4828f424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.419 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-066f5196-fbff-458b-ab48-5301dc324450
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/066f5196-fbff-458b-ab48-5301dc324450.pid.haproxy
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 066f5196-fbff-458b-ab48-5301dc324450
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:27:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.419 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'env', 'PROCESS_TAG=haproxy-066f5196-fbff-458b-ab48-5301dc324450', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/066f5196-fbff-458b-ab48-5301dc324450.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:27:55 np0005481065 blissful_hopper[404148]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:27:55 np0005481065 blissful_hopper[404148]: --> relative data size: 1.0
Oct 11 05:27:55 np0005481065 blissful_hopper[404148]: --> All data devices are unavailable
Oct 11 05:27:55 np0005481065 systemd[1]: libpod-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope: Deactivated successfully.
Oct 11 05:27:55 np0005481065 systemd[1]: libpod-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope: Consumed 1.153s CPU time.
Oct 11 05:27:55 np0005481065 podman[404132]: 2025-10-11 09:27:55.482444803 +0000 UTC m=+1.494066865 container died 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:27:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:27:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954-merged.mount: Deactivated successfully.
Oct 11 05:27:55 np0005481065 podman[404132]: 2025-10-11 09:27:55.563521601 +0000 UTC m=+1.575143703 container remove 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:27:55 np0005481065 systemd[1]: libpod-conmon-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope: Deactivated successfully.
Oct 11 05:27:55 np0005481065 podman[404398]: 2025-10-11 09:27:55.906034497 +0000 UTC m=+0.087527481 container create 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:27:55 np0005481065 podman[404398]: 2025-10-11 09:27:55.867429967 +0000 UTC m=+0.048922991 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:27:55 np0005481065 systemd[1]: Started libpod-conmon-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32.scope.
Oct 11 05:27:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7567a7af7802e398bd5207beaa54376df4c7fcd14b370da4835b5b052e2eda6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:56 np0005481065 podman[404398]: 2025-10-11 09:27:56.025253881 +0000 UTC m=+0.206746885 container init 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.035 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174876.0343685, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.036 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Started (Lifecycle Event)#033[00m
Oct 11 05:27:56 np0005481065 podman[404398]: 2025-10-11 09:27:56.036174189 +0000 UTC m=+0.217667153 container start 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.037 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.045 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.051 2 INFO nova.virt.libvirt.driver [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance spawned successfully.#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.052 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.056 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.062 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.073 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.073 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.074 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.074 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.075 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.076 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.081 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.081 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174876.0350084, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.081 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:27:56 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : New worker (404471) forked
Oct 11 05:27:56 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : Loading success.
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.110 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.115 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174876.0405686, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.115 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.141 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.144 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.166 2 INFO nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 9.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.168 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.169 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.230 2 INFO nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 10.52 seconds to build instance.#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.254 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.436660412 +0000 UTC m=+0.067442925 container create fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:27:56 np0005481065 systemd[1]: Started libpod-conmon-fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2.scope.
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.412667174 +0000 UTC m=+0.043449777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:27:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.566801634 +0000 UTC m=+0.197584167 container init fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.577519657 +0000 UTC m=+0.208302200 container start fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.581809578 +0000 UTC m=+0.212592101 container attach fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:27:56 np0005481065 festive_lalande[404535]: 167 167
Oct 11 05:27:56 np0005481065 systemd[1]: libpod-fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2.scope: Deactivated successfully.
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.588914458 +0000 UTC m=+0.219697001 container died fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:27:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2a008830d2ad2f07edef94b0ed8740421f43c805f8fb81d28115368c1c74ad6a-merged.mount: Deactivated successfully.
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.628 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174861.6273727, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.629 2 INFO nova.compute.manager [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:27:56 np0005481065 podman[404519]: 2025-10-11 09:27:56.642663445 +0000 UTC m=+0.273445998 container remove fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:27:56 np0005481065 nova_compute[260935]: 2025-10-11 09:27:56.653 2 DEBUG nova.compute.manager [None req-464fad34-0a31-4c8f-9aac-ecb224ebe1b0 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:27:56 np0005481065 systemd[1]: libpod-conmon-fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2.scope: Deactivated successfully.
Oct 11 05:27:56 np0005481065 podman[404559]: 2025-10-11 09:27:56.897181638 +0000 UTC m=+0.050531447 container create fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:27:56 np0005481065 systemd[1]: Started libpod-conmon-fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945.scope.
Oct 11 05:27:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:56 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:56 np0005481065 podman[404559]: 2025-10-11 09:27:56.879100718 +0000 UTC m=+0.032450557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:27:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:27:56 np0005481065 podman[404559]: 2025-10-11 09:27:56.987011663 +0000 UTC m=+0.140361512 container init fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:27:56 np0005481065 podman[404559]: 2025-10-11 09:27:56.995372989 +0000 UTC m=+0.148722808 container start fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 05:27:57 np0005481065 podman[404559]: 2025-10-11 09:27:57.001551343 +0000 UTC m=+0.154901202 container attach fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:27:57 np0005481065 podman[404573]: 2025-10-11 09:27:57.016009311 +0000 UTC m=+0.079292708 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:27:57 np0005481065 podman[404576]: 2025-10-11 09:27:57.058322395 +0000 UTC m=+0.113557985 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 11 05:27:57 np0005481065 nova_compute[260935]: 2025-10-11 09:27:57.479 2 DEBUG nova.compute.manager [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:27:57 np0005481065 nova_compute[260935]: 2025-10-11 09:27:57.479 2 DEBUG oslo_concurrency.lockutils [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:27:57 np0005481065 nova_compute[260935]: 2025-10-11 09:27:57.479 2 DEBUG oslo_concurrency.lockutils [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:27:57 np0005481065 nova_compute[260935]: 2025-10-11 09:27:57.480 2 DEBUG oslo_concurrency.lockutils [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:27:57 np0005481065 nova_compute[260935]: 2025-10-11 09:27:57.480 2 DEBUG nova.compute.manager [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] No waiting events found dispatching network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:27:57 np0005481065 nova_compute[260935]: 2025-10-11 09:27:57.480 2 WARNING nova.compute.manager [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received unexpected event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c for instance with vm_state active and task_state None.
Oct 11 05:27:57 np0005481065 agitated_wing[404577]: {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:    "0": [
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:        {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "devices": [
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "/dev/loop3"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            ],
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_name": "ceph_lv0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_size": "21470642176",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "name": "ceph_lv0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "tags": {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cluster_name": "ceph",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.crush_device_class": "",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.encrypted": "0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osd_id": "0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.type": "block",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.vdo": "0"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            },
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "type": "block",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "vg_name": "ceph_vg0"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:        }
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:    ],
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:    "1": [
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:        {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "devices": [
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "/dev/loop4"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            ],
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_name": "ceph_lv1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_size": "21470642176",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "name": "ceph_lv1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "tags": {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cluster_name": "ceph",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.crush_device_class": "",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.encrypted": "0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osd_id": "1",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.type": "block",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.vdo": "0"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            },
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "type": "block",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "vg_name": "ceph_vg1"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:        }
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:    ],
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:    "2": [
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:        {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "devices": [
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "/dev/loop5"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            ],
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_name": "ceph_lv2",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_size": "21470642176",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "name": "ceph_lv2",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "tags": {
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.cluster_name": "ceph",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.crush_device_class": "",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.encrypted": "0",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osd_id": "2",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.type": "block",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:                "ceph.vdo": "0"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            },
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "type": "block",
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:            "vg_name": "ceph_vg2"
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:        }
Oct 11 05:27:57 np0005481065 agitated_wing[404577]:    ]
Oct 11 05:27:57 np0005481065 agitated_wing[404577]: }
Oct 11 05:27:57 np0005481065 systemd[1]: libpod-fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945.scope: Deactivated successfully.
Oct 11 05:27:57 np0005481065 podman[404628]: 2025-10-11 09:27:57.821726418 +0000 UTC m=+0.028350701 container died fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:27:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593-merged.mount: Deactivated successfully.
Oct 11 05:27:57 np0005481065 podman[404628]: 2025-10-11 09:27:57.88096225 +0000 UTC m=+0.087586523 container remove fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:27:57 np0005481065 systemd[1]: libpod-conmon-fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945.scope: Deactivated successfully.
Oct 11 05:27:58 np0005481065 nova_compute[260935]: 2025-10-11 09:27:58.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.717317353 +0000 UTC m=+0.068444113 container create 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:27:58 np0005481065 systemd[1]: Started libpod-conmon-9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a.scope.
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.687286685 +0000 UTC m=+0.038413535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:27:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.813689663 +0000 UTC m=+0.164816443 container init 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.820939787 +0000 UTC m=+0.172066537 container start 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.824266001 +0000 UTC m=+0.175392751 container attach 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:27:58 np0005481065 funny_gagarin[404800]: 167 167
Oct 11 05:27:58 np0005481065 systemd[1]: libpod-9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a.scope: Deactivated successfully.
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.830371013 +0000 UTC m=+0.181497773 container died 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:27:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e3ea3694b280a8757f3208512b973c988aa5caf3f4419a015f2e2057e61a2d9c-merged.mount: Deactivated successfully.
Oct 11 05:27:58 np0005481065 nova_compute[260935]: 2025-10-11 09:27:58.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:27:58 np0005481065 podman[404786]: 2025-10-11 09:27:58.873742687 +0000 UTC m=+0.224869437 container remove 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:27:58 np0005481065 systemd[1]: libpod-conmon-9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a.scope: Deactivated successfully.
Oct 11 05:27:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 11 05:27:59 np0005481065 podman[404823]: 2025-10-11 09:27:59.140022742 +0000 UTC m=+0.071909270 container create 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:27:59 np0005481065 systemd[1]: Started libpod-conmon-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope.
Oct 11 05:27:59 np0005481065 podman[404823]: 2025-10-11 09:27:59.108897864 +0000 UTC m=+0.040784402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:27:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:27:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:27:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:27:59 np0005481065 podman[404823]: 2025-10-11 09:27:59.270537435 +0000 UTC m=+0.202423923 container init 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:27:59 np0005481065 podman[404823]: 2025-10-11 09:27:59.285728924 +0000 UTC m=+0.217615392 container start 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:27:59 np0005481065 podman[404823]: 2025-10-11 09:27:59.290472008 +0000 UTC m=+0.222358466 container attach 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:27:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:59Z|01448|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:27:59 np0005481065 NetworkManager[44960]: <info>  [1760174879.5684] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Oct 11 05:27:59 np0005481065 nova_compute[260935]: 2025-10-11 09:27:59.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:59 np0005481065 NetworkManager[44960]: <info>  [1760174879.5702] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Oct 11 05:27:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:59Z|01449|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:27:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:59Z|01450|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:27:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:59Z|01451|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:27:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:59Z|01452|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:27:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:27:59Z|01453|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:27:59 np0005481065 nova_compute[260935]: 2025-10-11 09:27:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:27:59 np0005481065 nova_compute[260935]: 2025-10-11 09:27:59.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]: {
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "osd_id": 2,
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "type": "bluestore"
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:    },
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "osd_id": 0,
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "type": "bluestore"
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:    },
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "osd_id": 1,
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:        "type": "bluestore"
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]:    }
Oct 11 05:28:00 np0005481065 quirky_ardinghelli[404840]: }
Oct 11 05:28:00 np0005481065 nova_compute[260935]: 2025-10-11 09:28:00.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:00 np0005481065 systemd[1]: libpod-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope: Deactivated successfully.
Oct 11 05:28:00 np0005481065 systemd[1]: libpod-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope: Consumed 1.129s CPU time.
Oct 11 05:28:00 np0005481065 podman[404874]: 2025-10-11 09:28:00.469698887 +0000 UTC m=+0.040127764 container died 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:28:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5-merged.mount: Deactivated successfully.
Oct 11 05:28:00 np0005481065 podman[404874]: 2025-10-11 09:28:00.521064066 +0000 UTC m=+0.091492933 container remove 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:28:00 np0005481065 systemd[1]: libpod-conmon-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope: Deactivated successfully.
Oct 11 05:28:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:28:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:28:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:28:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:28:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ce8331f4-0274-4169-9608-d020345d4f20 does not exist
Oct 11 05:28:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2bded35a-810d-4d42-92d6-249f47933cc9 does not exist
Oct 11 05:28:00 np0005481065 nova_compute[260935]: 2025-10-11 09:28:00.604 2 DEBUG nova.compute.manager [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:00 np0005481065 nova_compute[260935]: 2025-10-11 09:28:00.605 2 DEBUG nova.compute.manager [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing instance network info cache due to event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:28:00 np0005481065 nova_compute[260935]: 2025-10-11 09:28:00.605 2 DEBUG oslo_concurrency.lockutils [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:28:00 np0005481065 nova_compute[260935]: 2025-10-11 09:28:00.606 2 DEBUG oslo_concurrency.lockutils [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:28:00 np0005481065 nova_compute[260935]: 2025-10-11 09:28:00.607 2 DEBUG nova.network.neutron [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:28:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 186 KiB/s wr, 68 op/s
Oct 11 05:28:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:01.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:28:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:28:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 186 KiB/s wr, 74 op/s
Oct 11 05:28:03 np0005481065 nova_compute[260935]: 2025-10-11 09:28:03.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:03 np0005481065 nova_compute[260935]: 2025-10-11 09:28:03.408 2 DEBUG nova.network.neutron [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updated VIF entry in instance network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:28:03 np0005481065 nova_compute[260935]: 2025-10-11 09:28:03.411 2 DEBUG nova.network.neutron [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:03 np0005481065 nova_compute[260935]: 2025-10-11 09:28:03.445 2 DEBUG oslo_concurrency.lockutils [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:28:03 np0005481065 nova_compute[260935]: 2025-10-11 09:28:03.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:04 np0005481065 nova_compute[260935]: 2025-10-11 09:28:04.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:28:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:28:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.280 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2 2001:db8::f816:3eff:fe1f:c982'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1f:c982/64', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1945eb8f-f9e5-437a-9fc9-017b522ae777) old=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:28:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.283 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1945eb8f-f9e5-437a-9fc9-017b522ae777 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 updated#033[00m
Oct 11 05:28:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.286 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03e15108-5f8d-4fec-9ad8-133d9551c667, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:28:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b054156d-5dbf-46a2-afac-d2f2c01a6e5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:28:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:07Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:48:8e 10.100.0.10
Oct 11 05:28:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:07Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:48:8e 10.100.0.10
Oct 11 05:28:08 np0005481065 nova_compute[260935]: 2025-10-11 09:28:08.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:08 np0005481065 nova_compute[260935]: 2025-10-11 09:28:08.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 395 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 11 05:28:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:10 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 395 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 11 05:28:12 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 11 05:28:13 np0005481065 nova_compute[260935]: 2025-10-11 09:28:13.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:13 np0005481065 nova_compute[260935]: 2025-10-11 09:28:13.530 2 INFO nova.compute.manager [None req-76dee426-8c43-4ff6-8ce0-6f4fedb1a90c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Get console output#033[00m
Oct 11 05:28:13 np0005481065 nova_compute[260935]: 2025-10-11 09:28:13.537 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:28:13 np0005481065 nova_compute[260935]: 2025-10-11 09:28:13.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:14Z|01454|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:28:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:14Z|01455|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:14Z|01456|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:14 np0005481065 nova_compute[260935]: 2025-10-11 09:28:14.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.669 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2 2001:db8:0:1:f816:3eff:fe1f:c982 2001:db8::f816:3eff:fe1f:c982'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe1f:c982/64 2001:db8::f816:3eff:fe1f:c982/64', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1945eb8f-f9e5-437a-9fc9-017b522ae777) old=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2 2001:db8::f816:3eff:fe1f:c982'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1f:c982/64', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:28:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.671 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1945eb8f-f9e5-437a-9fc9-017b522ae777 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 updated#033[00m
Oct 11 05:28:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.674 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03e15108-5f8d-4fec-9ad8-133d9551c667, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:28:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb251b8-9814-480c-a49a-bef2825f1f61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:14Z|01457|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:28:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:14Z|01458|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:14Z|01459|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:14 np0005481065 nova_compute[260935]: 2025-10-11 09:28:14.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:14 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:28:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:15.225 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:15 np0005481065 nova_compute[260935]: 2025-10-11 09:28:15.891 2 INFO nova.compute.manager [None req-b3e04bff-9c19-4742-aef8-9bbc5633f275 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Get console output#033[00m
Oct 11 05:28:15 np0005481065 nova_compute[260935]: 2025-10-11 09:28:15.898 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:28:16 np0005481065 nova_compute[260935]: 2025-10-11 09:28:16.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:16 np0005481065 NetworkManager[44960]: <info>  [1760174896.4561] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Oct 11 05:28:16 np0005481065 NetworkManager[44960]: <info>  [1760174896.4575] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Oct 11 05:28:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:16Z|01460|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:28:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:16Z|01461|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:16Z|01462|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:16Z|01463|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 05:28:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:16Z|01464|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:16Z|01465|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:16 np0005481065 nova_compute[260935]: 2025-10-11 09:28:16.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:16 np0005481065 nova_compute[260935]: 2025-10-11 09:28:16.856 2 INFO nova.compute.manager [None req-110c04c2-6366-4ac4-aef6-6a3e53e8faa6 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Get console output#033[00m
Oct 11 05:28:16 np0005481065 nova_compute[260935]: 2025-10-11 09:28:16.863 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 11 05:28:16 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:28:17 np0005481065 podman[404945]: 2025-10-11 09:28:17.802411417 +0000 UTC m=+0.083842917 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.113 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.115 2 INFO nova.compute.manager [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Terminating instance#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.116 2 DEBUG nova.compute.manager [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:28:18 np0005481065 kernel: tapc550ea63-90 (unregistering): left promiscuous mode
Oct 11 05:28:18 np0005481065 NetworkManager[44960]: <info>  [1760174898.1837] device (tapc550ea63-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.191 2 DEBUG nova.compute.manager [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.192 2 DEBUG nova.compute.manager [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing instance network info cache due to event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.192 2 DEBUG oslo_concurrency.lockutils [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.192 2 DEBUG oslo_concurrency.lockutils [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.193 2 DEBUG nova.network.neutron [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:18Z|01466|binding|INFO|Releasing lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c from this chassis (sb_readonly=0)
Oct 11 05:28:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:18Z|01467|binding|INFO|Setting lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c down in Southbound
Oct 11 05:28:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:18Z|01468|binding|INFO|Removing iface tapc550ea63-90 ovn-installed in OVS
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.216 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:48:8e 10.100.0.10'], port_security=['fa:16:3e:09:48:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4e61943-e59b-40a1-ab1e-07ba5f131bb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-066f5196-fbff-458b-ab48-5301dc324450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fa9b7a7-a9c5-4782-b5d9-2526bb91182e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd668780-bea0-4b19-a159-529899429d34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c550ea63-9098-4f0c-95b9-107f8a0a8a5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.217 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c550ea63-9098-4f0c-95b9-107f8a0a8a5c in datapath 066f5196-fbff-458b-ab48-5301dc324450 unbound from our chassis#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.219 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 066f5196-fbff-458b-ab48-5301dc324450, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0633edc-03b5-4caa-a21f-ec8ca12d74af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.222 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 namespace which is not needed anymore#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 11 05:28:18 np0005481065 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d00000083.scope: Consumed 13.019s CPU time.
Oct 11 05:28:18 np0005481065 systemd-machined[215705]: Machine qemu-155-instance-00000083 terminated.
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.361 2 INFO nova.virt.libvirt.driver [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance destroyed successfully.#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.362 2 DEBUG nova.objects.instance [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid f4e61943-e59b-40a1-ab1e-07ba5f131bb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:28:18 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : haproxy version is 2.8.14-c23fe91
Oct 11 05:28:18 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : path to executable is /usr/sbin/haproxy
Oct 11 05:28:18 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [WARNING]  (404469) : Exiting Master process...
Oct 11 05:28:18 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [WARNING]  (404469) : Exiting Master process...
Oct 11 05:28:18 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [ALERT]    (404469) : Current worker (404471) exited with code 143 (Terminated)
Oct 11 05:28:18 np0005481065 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [WARNING]  (404469) : All workers exited. Exiting... (0)
Oct 11 05:28:18 np0005481065 systemd[1]: libpod-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32.scope: Deactivated successfully.
Oct 11 05:28:18 np0005481065 podman[404990]: 2025-10-11 09:28:18.386937862 +0000 UTC m=+0.067498036 container died 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.392 2 DEBUG nova.virt.libvirt.vif [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:27:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-432477065',display_name='tempest-TestNetworkBasicOps-server-432477065',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-432477065',id=131,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjbEjzk/3FFF7Yo/G8/P29WblN99ZiVV249D7QjKIfGUPghnNZARefNU8o4oiRaxrPiX6DdwLvn7rFQvHVV6IMhrMlVIFfDDgq3XHS93q11oJL1iJIdpdsGtZABU/MPnw==',key_name='tempest-TestNetworkBasicOps-52027208',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:27:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-kx08je2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:27:56Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=f4e61943-e59b-40a1-ab1e-07ba5f131bb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.393 2 DEBUG nova.network.os_vif_util [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.395 2 DEBUG nova.network.os_vif_util [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.396 2 DEBUG os_vif [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc550ea63-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.410 2 INFO os_vif [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90')#033[00m
Oct 11 05:28:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32-userdata-shm.mount: Deactivated successfully.
Oct 11 05:28:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d7567a7af7802e398bd5207beaa54376df4c7fcd14b370da4835b5b052e2eda6-merged.mount: Deactivated successfully.
Oct 11 05:28:18 np0005481065 podman[404990]: 2025-10-11 09:28:18.435886343 +0000 UTC m=+0.116446497 container cleanup 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 05:28:18 np0005481065 systemd[1]: libpod-conmon-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32.scope: Deactivated successfully.
Oct 11 05:28:18 np0005481065 podman[405048]: 2025-10-11 09:28:18.508945465 +0000 UTC m=+0.044029983 container remove 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.521 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe41dfc-f353-4a66-9786-ae972470c132]: (4, ('Sat Oct 11 09:28:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 (6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32)\n6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32\nSat Oct 11 09:28:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 (6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32)\n6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.522 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1df120-72b0-42e0-a112-10fc68ac44b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.523 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap066f5196-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:18 np0005481065 kernel: tap066f5196-f0: left promiscuous mode
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.544 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18a3c6b8-d2e3-4c51-baa7-d3fd65c26c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.566 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e79c030-350b-4479-b336-92bfb62174a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.567 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03297a67-3ea3-4ce5-b5c3-0ad734227d64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.586 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43da8779-8f97-4e63-a8b5-e1e5a7ad9e88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671981, 'reachable_time': 17746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405067, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.588 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:28:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.588 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6c012e65-1aa1-4287-aaf4-82f10ec744f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:18 np0005481065 systemd[1]: run-netns-ovnmeta\x2d066f5196\x2dfbff\x2d458b\x2dab48\x2d5301dc324450.mount: Deactivated successfully.
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.830 2 INFO nova.virt.libvirt.driver [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deleting instance files /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_del#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.831 2 INFO nova.virt.libvirt.driver [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deletion of /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_del complete#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.878 2 INFO nova.compute.manager [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.879 2 DEBUG oslo.service.loopingcall [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.879 2 DEBUG nova.compute.manager [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:28:18 np0005481065 nova_compute[260935]: 2025-10-11 09:28:18.880 2 DEBUG nova.network.neutron [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:28:18 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:28:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.419 2 DEBUG nova.compute.manager [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-unplugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.420 2 DEBUG oslo_concurrency.lockutils [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.420 2 DEBUG oslo_concurrency.lockutils [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.421 2 DEBUG oslo_concurrency.lockutils [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.421 2 DEBUG nova.compute.manager [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] No waiting events found dispatching network-vif-unplugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.421 2 DEBUG nova.compute.manager [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-unplugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.733 2 DEBUG nova.network.neutron [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.755 2 INFO nova.compute.manager [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 0.88 seconds to deallocate network for instance.#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.804 2 DEBUG nova.network.neutron [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updated VIF entry in instance network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.804 2 DEBUG nova.network.neutron [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.811 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.811 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.827 2 DEBUG oslo_concurrency.lockutils [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:28:19 np0005481065 nova_compute[260935]: 2025-10-11 09:28:19.925 2 DEBUG oslo_concurrency.processutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.309 2 DEBUG nova.compute.manager [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-deleted-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.310 2 INFO nova.compute.manager [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Neutron deleted interface c550ea63-9098-4f0c-95b9-107f8a0a8a5c; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.311 2 DEBUG nova.network.neutron [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.344 2 DEBUG nova.compute.manager [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Detach interface failed, port_id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c, reason: Instance f4e61943-e59b-40a1-ab1e-07ba5f131bb3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:28:20 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:28:20 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104140217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.424 2 DEBUG oslo_concurrency.processutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.432 2 DEBUG nova.compute.provider_tree [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.455 2 DEBUG nova.scheduler.client.report [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.485 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.518 2 INFO nova.scheduler.client.report [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance f4e61943-e59b-40a1-ab1e-07ba5f131bb3#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.629 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.794 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.795 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.828 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.911 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.911 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.922 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:28:20 np0005481065 nova_compute[260935]: 2025-10-11 09:28:20.923 2 INFO nova.compute.claims [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:28:20 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 107 KiB/s wr, 23 op/s
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.106 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.510 2 DEBUG nova.compute.manager [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.511 2 DEBUG oslo_concurrency.lockutils [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.511 2 DEBUG oslo_concurrency.lockutils [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.512 2 DEBUG oslo_concurrency.lockutils [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.512 2 DEBUG nova.compute.manager [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] No waiting events found dispatching network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.512 2 WARNING nova.compute.manager [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received unexpected event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:28:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:28:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094774514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.578 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.583 2 DEBUG nova.compute.provider_tree [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.601 2 DEBUG nova.scheduler.client.report [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.633 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.634 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.712 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.712 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.742 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.767 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.867 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.870 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.871 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Creating image(s)#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.912 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.949 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.981 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:21 np0005481065 nova_compute[260935]: 2025-10-11 09:28:21.985 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.040 2 DEBUG nova.policy [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.093 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.094 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.095 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.095 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.123 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.127 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 16108538-ef61-4d43-8a42-c324c334138b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.470 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 16108538-ef61-4d43-8a42-c324c334138b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.546 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.652 2 DEBUG nova.objects.instance [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 16108538-ef61-4d43-8a42-c324c334138b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.679 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.680 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Ensure instance console log exists: /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.681 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.681 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:22 np0005481065 nova_compute[260935]: 2025-10-11 09:28:22.681 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:22 np0005481065 podman[405279]: 2025-10-11 09:28:22.812204165 +0000 UTC m=+0.105273102 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:28:22 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 181 KiB/s wr, 65 op/s
Oct 11 05:28:23 np0005481065 nova_compute[260935]: 2025-10-11 09:28:23.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:23 np0005481065 nova_compute[260935]: 2025-10-11 09:28:23.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:23 np0005481065 nova_compute[260935]: 2025-10-11 09:28:23.512 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Successfully created port: e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:28:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.318 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.850 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Successfully updated port: e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.870 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.871 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.871 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.944 2 DEBUG nova.compute.manager [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.944 2 DEBUG nova.compute.manager [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing instance network info cache due to event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:28:24 np0005481065 nova_compute[260935]: 2025-10-11 09:28:24.945 2 DEBUG oslo_concurrency.lockutils [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:28:24 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 86 KiB/s wr, 42 op/s
Oct 11 05:28:25 np0005481065 nova_compute[260935]: 2025-10-11 09:28:25.088 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:28:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:25Z|01469|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:25Z|01470|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:25 np0005481065 nova_compute[260935]: 2025-10-11 09:28:25.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:25Z|01471|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:25Z|01472|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:25 np0005481065 nova_compute[260935]: 2025-10-11 09:28:25.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:28:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096691239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:28:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:28:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096691239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:28:26 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 86 KiB/s wr, 42 op/s
Oct 11 05:28:27 np0005481065 podman[405300]: 2025-10-11 09:28:27.791102812 +0000 UTC m=+0.082820459 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 05:28:27 np0005481065 podman[405301]: 2025-10-11 09:28:27.838028926 +0000 UTC m=+0.118505835 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.359 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.380 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.381 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance network_info: |[{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.381 2 DEBUG oslo_concurrency.lockutils [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.382 2 DEBUG nova.network.neutron [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.388 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start _get_guest_xml network_info=[{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.395 2 WARNING nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.400 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.402 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.415 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.416 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.417 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.417 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.418 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.419 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.419 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.421 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.421 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.422 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.422 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.422 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.423 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.424 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.429 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:28:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243282572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.924 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.961 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:28 np0005481065 nova_compute[260935]: 2025-10-11 09:28:28.967 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:28 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 11 05:28:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:28:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3223387606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.421 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.423 2 DEBUG nova.virt.libvirt.vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1335802812',display_name='tempest-TestGettingAddress-server-1335802812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1335802812',id=132,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-7xgmxaip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:28:21Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=16108538-ef61-4d43-8a42-c324c334138b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.424 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.425 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.426 2 DEBUG nova.objects.instance [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 16108538-ef61-4d43-8a42-c324c334138b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.441 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <uuid>16108538-ef61-4d43-8a42-c324c334138b</uuid>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <name>instance-00000084</name>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1335802812</nova:name>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:28:28</nova:creationTime>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <nova:port uuid="e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee6:640e" ipVersion="6"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee6:640e" ipVersion="6"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <entry name="serial">16108538-ef61-4d43-8a42-c324c334138b</entry>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <entry name="uuid">16108538-ef61-4d43-8a42-c324c334138b</entry>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/16108538-ef61-4d43-8a42-c324c334138b_disk">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/16108538-ef61-4d43-8a42-c324c334138b_disk.config">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:e6:64:0e"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <target dev="tape89af28e-d1"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/console.log" append="off"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:28:29 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:28:29 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:28:29 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:28:29 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.442 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Preparing to wait for external event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.442 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.442 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.443 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.443 2 DEBUG nova.virt.libvirt.vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1335802812',display_name='tempest-TestGettingAddress-server-1335802812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1335802812',id=132,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-7xgmxaip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:28:21Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=16108538-ef61-4d43-8a42-c324c334138b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.444 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.444 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.445 2 DEBUG os_vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape89af28e-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape89af28e-d1, col_values=(('external_ids', {'iface-id': 'e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:64:0e', 'vm-uuid': '16108538-ef61-4d43-8a42-c324c334138b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:29 np0005481065 NetworkManager[44960]: <info>  [1760174909.4516] manager: (tape89af28e-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.457 2 INFO os_vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1')#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.539 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.539 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.539 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:e6:64:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.540 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Using config drive#033[00m
Oct 11 05:28:29 np0005481065 nova_compute[260935]: 2025-10-11 09:28:29.562 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:30 np0005481065 nova_compute[260935]: 2025-10-11 09:28:30.508 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Creating config drive at /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config#033[00m
Oct 11 05:28:30 np0005481065 nova_compute[260935]: 2025-10-11 09:28:30.519 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p6wt4bw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:30 np0005481065 nova_compute[260935]: 2025-10-11 09:28:30.693 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p6wt4bw" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:30 np0005481065 nova_compute[260935]: 2025-10-11 09:28:30.725 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:28:30 np0005481065 nova_compute[260935]: 2025-10-11 09:28:30.729 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config 16108538-ef61-4d43-8a42-c324c334138b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:30 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 11 05:28:31 np0005481065 nova_compute[260935]: 2025-10-11 09:28:31.680 2 DEBUG nova.network.neutron [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updated VIF entry in instance network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:28:31 np0005481065 nova_compute[260935]: 2025-10-11 09:28:31.682 2 DEBUG nova.network.neutron [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:31 np0005481065 nova_compute[260935]: 2025-10-11 09:28:31.703 2 DEBUG oslo_concurrency.lockutils [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.094 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config 16108538-ef61-4d43-8a42-c324c334138b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.095 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deleting local config drive /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config because it was imported into RBD.#033[00m
Oct 11 05:28:32 np0005481065 kernel: tape89af28e-d1: entered promiscuous mode
Oct 11 05:28:32 np0005481065 NetworkManager[44960]: <info>  [1760174912.1855] manager: (tape89af28e-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/569)
Oct 11 05:28:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:32Z|01473|binding|INFO|Claiming lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for this chassis.
Oct 11 05:28:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:32Z|01474|binding|INFO|e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0: Claiming fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.206 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], port_security=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:640e/64 2001:db8::f816:3eff:fee6:640e/64', 'neutron:device_id': '16108538-ef61-4d43-8a42-c324c334138b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.209 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 bound to our chassis#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.212 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03e15108-5f8d-4fec-9ad8-133d9551c667#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38ab1d46-e600-4524-9f85-67e4db96b181]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.235 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03e15108-51 in ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.238 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03e15108-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e825d5a2-70a8-4a60-9c03-915c21e92508]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bb5180-01a5-4f3f-9918-d5f78973a8a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 systemd-udevd[405486]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.256 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8d9b8-dd02-4e5a-b976-e535d6e499f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 systemd-machined[215705]: New machine qemu-156-instance-00000084.
Oct 11 05:28:32 np0005481065 NetworkManager[44960]: <info>  [1760174912.2739] device (tape89af28e-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:28:32 np0005481065 NetworkManager[44960]: <info>  [1760174912.2753] device (tape89af28e-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:28:32 np0005481065 systemd[1]: Started Virtual Machine qemu-156-instance-00000084.
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.294 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f37c16-2b8c-4f5f-89bf-4a66109af65d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:32Z|01475|binding|INFO|Setting lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 ovn-installed in OVS
Oct 11 05:28:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:32Z|01476|binding|INFO|Setting lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 up in Southbound
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.330 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[31873169-825c-4a68-9593-a345196a2b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[99d7c71a-5395-4a18-842e-4a90893649bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 systemd-udevd[405489]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:28:32 np0005481065 NetworkManager[44960]: <info>  [1760174912.3381] manager: (tap03e15108-50): new Veth device (/org/freedesktop/NetworkManager/Devices/570)
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.373 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf199ed-2493-431b-989f-b23c3f98f624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.377 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[92b850eb-c342-45cd-9016-6732490679fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 NetworkManager[44960]: <info>  [1760174912.4197] device (tap03e15108-50): carrier: link connected
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.428 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[73c5a7be-c3d3-4521-9f36-9dc187db57e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.455 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c56690c1-76b1-47d6-a329-666c518e5cf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405517, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.478 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7368e75-04d8-4959-88c3-2fa640cb6c51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:c982'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675712, 'tstamp': 675712}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405518, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.507 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f7b5b-d6d4-4b4d-a422-0f0d8ecde5a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405519, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec45104-bea7-4441-a081-10cc9c82aede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG nova.compute.manager [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG oslo_concurrency.lockutils [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG oslo_concurrency.lockutils [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG oslo_concurrency.lockutils [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.593 2 DEBUG nova.compute.manager [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Processing event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f368477-d2f4-4a9b-a9f6-d251160cf08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.661 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.661 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.662 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e15108-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 NetworkManager[44960]: <info>  [1760174912.6656] manager: (tap03e15108-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Oct 11 05:28:32 np0005481065 kernel: tap03e15108-50: entered promiscuous mode
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.669 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03e15108-50, col_values=(('external_ids', {'iface-id': '1945eb8f-f9e5-437a-9fc9-017b522ae777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:32Z|01477|binding|INFO|Releasing lport 1945eb8f-f9e5-437a-9fc9-017b522ae777 from this chassis (sb_readonly=0)
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 nova_compute[260935]: 2025-10-11 09:28:32.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.705 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03e15108-5f8d-4fec-9ad8-133d9551c667.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03e15108-5f8d-4fec-9ad8-133d9551c667.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.706 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5366e3-ebcb-4a5e-98ce-033fbb2065c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.708 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/03e15108-5f8d-4fec-9ad8-133d9551c667.pid.haproxy
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:28:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.709 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'env', 'PROCESS_TAG=haproxy-03e15108-5f8d-4fec-9ad8-133d9551c667', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03e15108-5f8d-4fec-9ad8-133d9551c667.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:28:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 11 05:28:33 np0005481065 podman[405593]: 2025-10-11 09:28:33.157306259 +0000 UTC m=+0.070179291 container create d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:33 np0005481065 podman[405593]: 2025-10-11 09:28:33.120185862 +0000 UTC m=+0.033058934 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:28:33 np0005481065 systemd[1]: Started libpod-conmon-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4.scope.
Oct 11 05:28:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:28:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed2907c6f4d557002a67a93ce41ed6b3e05074fb0e893ba2f5111564f8191c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:28:33 np0005481065 podman[405593]: 2025-10-11 09:28:33.286251558 +0000 UTC m=+0.199124610 container init d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 05:28:33 np0005481065 podman[405593]: 2025-10-11 09:28:33.293189234 +0000 UTC m=+0.206062246 container start d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:28:33 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : New worker (405615) forked
Oct 11 05:28:33 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : Loading success.
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174898.3532627, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.355 2 INFO nova.compute.manager [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.376 2 DEBUG nova.compute.manager [None req-4590d3ed-687a-49fe-bf7b-2efd5db3dc43 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.575 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174913.5747213, 16108538-ef61-4d43-8a42-c324c334138b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.575 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Started (Lifecycle Event)#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.578 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.581 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.585 2 INFO nova.virt.libvirt.driver [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance spawned successfully.#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.585 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.597 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.600 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.610 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.611 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.611 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.612 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.612 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.613 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.620 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.620 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174913.5775435, 16108538-ef61-4d43-8a42-c324c334138b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.621 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.656 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.659 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174913.5807302, 16108538-ef61-4d43-8a42-c324c334138b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.659 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.681 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.684 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.691 2 INFO nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 11.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.692 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.702 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.764 2 INFO nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 12.88 seconds to build instance.#033[00m
Oct 11 05:28:33 np0005481065 nova_compute[260935]: 2025-10-11 09:28:33.787 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.754 2 DEBUG nova.compute.manager [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.755 2 DEBUG oslo_concurrency.lockutils [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.755 2 DEBUG oslo_concurrency.lockutils [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.756 2 DEBUG oslo_concurrency.lockutils [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.756 2 DEBUG nova.compute.manager [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] No waiting events found dispatching network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:28:34 np0005481065 nova_compute[260935]: 2025-10-11 09:28:34.757 2 WARNING nova.compute.manager [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received unexpected event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:28:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Oct 11 05:28:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Oct 11 05:28:37 np0005481065 nova_compute[260935]: 2025-10-11 09:28:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:38 np0005481065 nova_compute[260935]: 2025-10-11 09:28:38.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:38 np0005481065 nova_compute[260935]: 2025-10-11 09:28:38.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:38 np0005481065 nova_compute[260935]: 2025-10-11 09:28:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 86 op/s
Oct 11 05:28:39 np0005481065 NetworkManager[44960]: <info>  [1760174919.1751] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Oct 11 05:28:39 np0005481065 NetworkManager[44960]: <info>  [1760174919.1771] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:39Z|01478|binding|INFO|Releasing lport 1945eb8f-f9e5-437a-9fc9-017b522ae777 from this chassis (sb_readonly=0)
Oct 11 05:28:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:39Z|01479|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:28:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:39Z|01480|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.896 2 DEBUG nova.compute.manager [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.897 2 DEBUG nova.compute.manager [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing instance network info cache due to event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.897 2 DEBUG oslo_concurrency.lockutils [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.898 2 DEBUG oslo_concurrency.lockutils [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:28:39 np0005481065 nova_compute[260935]: 2025-10-11 09:28:39.898 2 DEBUG nova.network.neutron [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:28:40 np0005481065 nova_compute[260935]: 2025-10-11 09:28:40.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:28:42 np0005481065 nova_compute[260935]: 2025-10-11 09:28:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:42 np0005481065 nova_compute[260935]: 2025-10-11 09:28:42.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:28:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.427 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.428 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.428 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.520 2 DEBUG nova.network.neutron [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updated VIF entry in instance network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.521 2 DEBUG nova.network.neutron [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:43 np0005481065 nova_compute[260935]: 2025-10-11 09:28:43.547 2 DEBUG oslo_concurrency.lockutils [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:28:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:44 np0005481065 nova_compute[260935]: 2025-10-11 09:28:44.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 05:28:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:45Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:64:0e 10.100.0.7
Oct 11 05:28:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:28:45Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:64:0e 10.100.0.7
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.699 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.730 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.731 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.731 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.732 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.756 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:28:46 np0005481065 nova_compute[260935]: 2025-10-11 09:28:46.756 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 05:28:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:28:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059551540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.313 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.438 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.439 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.439 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.446 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.447 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.452 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.452 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.456 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.456 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.695 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.697 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2723MB free_disk=59.80977249145508GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.697 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.697 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.944 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.945 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.945 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.946 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 16108538-ef61-4d43-8a42-c324c334138b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.946 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:28:47 np0005481065 nova_compute[260935]: 2025-10-11 09:28:47.948 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.200 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:28:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513290236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.705 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.714 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.742 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:48 np0005481065 podman[405671]: 2025-10-11 09:28:48.796545199 +0000 UTC m=+0.096334500 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.797 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.798 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.799 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:28:48 np0005481065 nova_compute[260935]: 2025-10-11 09:28:48.799 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:28:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 11 05:28:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:49 np0005481065 nova_compute[260935]: 2025-10-11 09:28:49.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:28:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:28:53 np0005481065 nova_compute[260935]: 2025-10-11 09:28:53.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:53 np0005481065 nova_compute[260935]: 2025-10-11 09:28:53.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:53 np0005481065 podman[405693]: 2025-10-11 09:28:53.765672339 +0000 UTC m=+0.076885461 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 05:28:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:54 np0005481065 nova_compute[260935]: 2025-10-11 09:28:54.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:28:54
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'images', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 11 05:28:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:28:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:55.061 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:28:55 np0005481065 nova_compute[260935]: 2025-10-11 09:28:55.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:28:55.063 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:28:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:28:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:28:58 np0005481065 nova_compute[260935]: 2025-10-11 09:28:58.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:28:58 np0005481065 podman[405713]: 2025-10-11 09:28:58.793607137 +0000 UTC m=+0.098691366 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:28:58 np0005481065 podman[405714]: 2025-10-11 09:28:58.806493011 +0000 UTC m=+0.099067287 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:28:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:28:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:28:59 np0005481065 nova_compute[260935]: 2025-10-11 09:28:59.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.864 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.865 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.887 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.976 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.976 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.988 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:29:00 np0005481065 nova_compute[260935]: 2025-10-11 09:29:00.988 2 INFO nova.compute.claims [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:29:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.218 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740083688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.670 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.678 2 DEBUG nova.compute.provider_tree [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.721 2 DEBUG nova.scheduler.client.report [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.747 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.749 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.852 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.853 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:29:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0688c8e1-e0dd-4900-86c7-1f3b69287f73 does not exist
Oct 11 05:29:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fe7894b5-5f05-4809-bed6-58806bebd224 does not exist
Oct 11 05:29:01 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f55d21b2-cf70-4439-9129-5d932d6c31a8 does not exist
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:29:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:29:01 np0005481065 nova_compute[260935]: 2025-10-11 09:29:01.953 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.078 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:29:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:29:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:29:02 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.477 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.479 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.480 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Creating image(s)#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.530 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.578 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.622 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.628 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.689 2 DEBUG nova.policy [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.757 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.759 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.760 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.760 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.789 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:02 np0005481065 nova_compute[260935]: 2025-10-11 09:29:02.794 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 80209953-3c4c-4932-a9a3-8166c70e1029_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:02 np0005481065 podman[406104]: 2025-10-11 09:29:02.741054874 +0000 UTC m=+0.029707969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:29:02 np0005481065 podman[406104]: 2025-10-11 09:29:02.864517279 +0000 UTC m=+0.153170394 container create b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 05:29:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 05:29:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:03.065 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:29:03 np0005481065 systemd[1]: Started libpod-conmon-b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192.scope.
Oct 11 05:29:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:03 np0005481065 podman[406104]: 2025-10-11 09:29:03.288568285 +0000 UTC m=+0.577221450 container init b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:29:03 np0005481065 podman[406104]: 2025-10-11 09:29:03.301709276 +0000 UTC m=+0.590362391 container start b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:29:03 np0005481065 peaceful_haibt[406157]: 167 167
Oct 11 05:29:03 np0005481065 systemd[1]: libpod-b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192.scope: Deactivated successfully.
Oct 11 05:29:03 np0005481065 nova_compute[260935]: 2025-10-11 09:29:03.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:03 np0005481065 podman[406104]: 2025-10-11 09:29:03.392136558 +0000 UTC m=+0.680789673 container attach b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:29:03 np0005481065 podman[406104]: 2025-10-11 09:29:03.394516955 +0000 UTC m=+0.683170060 container died b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:29:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-151d91323702af5144199a0167d7c52ebcd33ce0f3f9f0f7755043a10ab32d2a-merged.mount: Deactivated successfully.
Oct 11 05:29:03 np0005481065 nova_compute[260935]: 2025-10-11 09:29:03.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:29:04 np0005481065 podman[406104]: 2025-10-11 09:29:04.155968994 +0000 UTC m=+1.444622099 container remove b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:29:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:04 np0005481065 systemd[1]: libpod-conmon-b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192.scope: Deactivated successfully.
Oct 11 05:29:04 np0005481065 nova_compute[260935]: 2025-10-11 09:29:04.318 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Successfully created port: b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:29:04 np0005481065 nova_compute[260935]: 2025-10-11 09:29:04.398 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 80209953-3c4c-4932-a9a3-8166c70e1029_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:29:04 np0005481065 nova_compute[260935]: 2025-10-11 09:29:04.527 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:29:04 np0005481065 podman[406185]: 2025-10-11 09:29:04.53107575 +0000 UTC m=+0.116407066 container create 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:29:04 np0005481065 podman[406185]: 2025-10-11 09:29:04.461051084 +0000 UTC m=+0.046382460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:29:04 np0005481065 nova_compute[260935]: 2025-10-11 09:29:04.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:04 np0005481065 systemd[1]: Started libpod-conmon-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope.
Oct 11 05:29:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:04 np0005481065 podman[406185]: 2025-10-11 09:29:04.76989859 +0000 UTC m=+0.355229956 container init 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:29:04 np0005481065 podman[406185]: 2025-10-11 09:29:04.787161627 +0000 UTC m=+0.372492943 container start 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:29:04 np0005481065 podman[406185]: 2025-10-11 09:29:04.828073012 +0000 UTC m=+0.413404378 container attach 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:29:04 np0005481065 nova_compute[260935]: 2025-10-11 09:29:04.973 2 DEBUG nova.objects.instance [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 80209953-3c4c-4932-a9a3-8166c70e1029 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:29:05 np0005481065 nova_compute[260935]: 2025-10-11 09:29:05.066 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:29:05 np0005481065 nova_compute[260935]: 2025-10-11 09:29:05.067 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Ensure instance console log exists: /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:29:05 np0005481065 nova_compute[260935]: 2025-10-11 09:29:05.068 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:29:05 np0005481065 nova_compute[260935]: 2025-10-11 09:29:05.068 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:29:05 np0005481065 nova_compute[260935]: 2025-10-11 09:29:05.069 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033867139171062056 of space, bias 1.0, pg target 1.0160141751318617 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:29:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:29:05 np0005481065 nostalgic_bassi[406257]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:29:05 np0005481065 nostalgic_bassi[406257]: --> relative data size: 1.0
Oct 11 05:29:05 np0005481065 nostalgic_bassi[406257]: --> All data devices are unavailable
Oct 11 05:29:06 np0005481065 systemd[1]: libpod-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope: Deactivated successfully.
Oct 11 05:29:06 np0005481065 podman[406185]: 2025-10-11 09:29:06.043598724 +0000 UTC m=+1.628930010 container died 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:29:06 np0005481065 systemd[1]: libpod-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope: Consumed 1.213s CPU time.
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.161 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Successfully updated port: b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:29:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1-merged.mount: Deactivated successfully.
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.322 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.323 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.323 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:29:06 np0005481065 podman[406185]: 2025-10-11 09:29:06.324856321 +0000 UTC m=+1.910187627 container remove 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.331 2 DEBUG nova.compute.manager [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.331 2 DEBUG nova.compute.manager [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing instance network info cache due to event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.332 2 DEBUG oslo_concurrency.lockutils [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:29:06 np0005481065 systemd[1]: libpod-conmon-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope: Deactivated successfully.
Oct 11 05:29:06 np0005481065 nova_compute[260935]: 2025-10-11 09:29:06.792 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:29:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:29:07 np0005481065 podman[406457]: 2025-10-11 09:29:07.306457833 +0000 UTC m=+0.104271103 container create 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:29:07 np0005481065 podman[406457]: 2025-10-11 09:29:07.231189599 +0000 UTC m=+0.029002879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.333 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.334 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:29:07 np0005481065 systemd[1]: Started libpod-conmon-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope.
Oct 11 05:29:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:07 np0005481065 podman[406457]: 2025-10-11 09:29:07.479403834 +0000 UTC m=+0.277217124 container init 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:29:07 np0005481065 podman[406457]: 2025-10-11 09:29:07.487980286 +0000 UTC m=+0.285793566 container start 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:29:07 np0005481065 eloquent_bhabha[406473]: 167 167
Oct 11 05:29:07 np0005481065 systemd[1]: libpod-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope: Deactivated successfully.
Oct 11 05:29:07 np0005481065 conmon[406473]: conmon 8c9be1eff8b42edf9bed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope/container/memory.events
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.500 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:29:07 np0005481065 podman[406457]: 2025-10-11 09:29:07.547315371 +0000 UTC m=+0.345128631 container attach 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:29:07 np0005481065 podman[406457]: 2025-10-11 09:29:07.548032321 +0000 UTC m=+0.345845621 container died 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.733 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.734 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.776 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.777 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.787 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.788 2 INFO nova.compute.claims [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:29:07 np0005481065 nova_compute[260935]: 2025-10-11 09:29:07.793 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:29:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-9b392575ae53c68caa7abdbd62e254410273fb1e3726433d8630b6d921932161-merged.mount: Deactivated successfully.
Oct 11 05:29:08 np0005481065 podman[406478]: 2025-10-11 09:29:08.106125691 +0000 UTC m=+0.593170281 container remove 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.112 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:08 np0005481065 systemd[1]: libpod-conmon-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope: Deactivated successfully.
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:08 np0005481065 podman[406521]: 2025-10-11 09:29:08.357923107 +0000 UTC m=+0.026758436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:29:08 np0005481065 podman[406521]: 2025-10-11 09:29:08.46930684 +0000 UTC m=+0.138142189 container create e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:29:08 np0005481065 systemd[1]: Started libpod-conmon-e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f.scope.
Oct 11 05:29:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:29:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533797081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:29:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.604 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.617 2 DEBUG nova.compute.provider_tree [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.637 2 DEBUG nova.scheduler.client.report [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.666 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.667 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:29:08 np0005481065 podman[406521]: 2025-10-11 09:29:08.678632297 +0000 UTC m=+0.347467706 container init e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:29:08 np0005481065 podman[406521]: 2025-10-11 09:29:08.686439728 +0000 UTC m=+0.355275087 container start e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.724 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.725 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.748 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:29:08 np0005481065 podman[406521]: 2025-10-11 09:29:08.761189517 +0000 UTC m=+0.430024856 container attach e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.768 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.846 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.848 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.848 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Creating image(s)#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.873 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.900 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.927 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.932 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.983 2 DEBUG nova.policy [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95ad8a94007e4ab88fcad372c4695cf5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e69b3710dc94c919d8bd75d2a540c10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:29:08 np0005481065 nova_compute[260935]: 2025-10-11 09:29:08.986 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.013 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.013 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance network_info: |[{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.013 2 DEBUG oslo_concurrency.lockutils [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.014 2 DEBUG nova.network.neutron [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:29:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.017 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start _get_guest_xml network_info=[{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.021 2 WARNING nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.026 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.027 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.030 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.030 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.030 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.035 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.074 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.075 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.076 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.076 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.101 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.104 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]: {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:    "0": [
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:        {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "devices": [
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "/dev/loop3"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            ],
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_name": "ceph_lv0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_size": "21470642176",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "name": "ceph_lv0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "tags": {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cluster_name": "ceph",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.crush_device_class": "",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.encrypted": "0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osd_id": "0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.type": "block",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.vdo": "0"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            },
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "type": "block",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "vg_name": "ceph_vg0"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:        }
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:    ],
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:    "1": [
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:        {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "devices": [
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "/dev/loop4"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            ],
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_name": "ceph_lv1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_size": "21470642176",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "name": "ceph_lv1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "tags": {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cluster_name": "ceph",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.crush_device_class": "",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.encrypted": "0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osd_id": "1",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.type": "block",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.vdo": "0"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            },
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "type": "block",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "vg_name": "ceph_vg1"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:        }
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:    ],
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:    "2": [
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:        {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "devices": [
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "/dev/loop5"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            ],
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_name": "ceph_lv2",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_size": "21470642176",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "name": "ceph_lv2",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "tags": {
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.cluster_name": "ceph",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.crush_device_class": "",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.encrypted": "0",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osd_id": "2",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.type": "block",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:                "ceph.vdo": "0"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            },
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "type": "block",
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:            "vg_name": "ceph_vg2"
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:        }
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]:    ]
Oct 11 05:29:09 np0005481065 festive_brahmagupta[406539]: }
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.442 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:09 np0005481065 systemd[1]: libpod-e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f.scope: Deactivated successfully.
Oct 11 05:29:09 np0005481065 podman[406521]: 2025-10-11 09:29:09.472990794 +0000 UTC m=+1.141826113 container died e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:29:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:29:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842193835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:29:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03-merged.mount: Deactivated successfully.
Oct 11 05:29:09 np0005481065 podman[406521]: 2025-10-11 09:29:09.528761518 +0000 UTC m=+1.197596837 container remove e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.537 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:09 np0005481065 systemd[1]: libpod-conmon-e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f.scope: Deactivated successfully.
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.555 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.558 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.617 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] resizing rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.725 2 DEBUG nova.objects.instance [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lazy-loading 'migration_context' on Instance uuid af8a1ab7-7512-4de4-8493-cfe85095fbc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.742 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.744 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Ensure instance console log exists: /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.744 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.745 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.745 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:09 np0005481065 nova_compute[260935]: 2025-10-11 09:29:09.770 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Successfully created port: c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:29:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:29:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958131527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.002 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.003 2 DEBUG nova.virt.libvirt.vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-373220295',display_name='tempest-TestGettingAddress-server-373220295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-373220295',id=133,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-0s96u5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:02Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=80209953-3c4c-4932-a9a3-8166c70e1029,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.003 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.004 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.005 2 DEBUG nova.objects.instance [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 80209953-3c4c-4932-a9a3-8166c70e1029 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.165 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <uuid>80209953-3c4c-4932-a9a3-8166c70e1029</uuid>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <name>instance-00000085</name>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-373220295</nova:name>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:29:09</nova:creationTime>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <nova:port uuid="b6c25504-002e-4bf7-8b34-c9e43fcc55c2">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feba:b777" ipVersion="6"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feba:b777" ipVersion="6"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <entry name="serial">80209953-3c4c-4932-a9a3-8166c70e1029</entry>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <entry name="uuid">80209953-3c4c-4932-a9a3-8166c70e1029</entry>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/80209953-3c4c-4932-a9a3-8166c70e1029_disk">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/80209953-3c4c-4932-a9a3-8166c70e1029_disk.config">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ba:b7:77"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <target dev="tapb6c25504-00"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/console.log" append="off"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:29:10 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:29:10 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:29:10 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:29:10 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.167 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Preparing to wait for external event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.168 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.168 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.169 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.170 2 DEBUG nova.virt.libvirt.vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-373220295',display_name='tempest-TestGettingAddress-server-373220295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-373220295',id=133,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-0s96u5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:02Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=80209953-3c4c-4932-a9a3-8166c70e1029,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.171 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.173 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.174 2 DEBUG os_vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6c25504-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6c25504-00, col_values=(('external_ids', {'iface-id': 'b6c25504-002e-4bf7-8b34-c9e43fcc55c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:b7:77', 'vm-uuid': '80209953-3c4c-4932-a9a3-8166c70e1029'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:10 np0005481065 NetworkManager[44960]: <info>  [1760174950.1859] manager: (tapb6c25504-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.198 2 INFO os_vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00')#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.250 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.251 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.251 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:ba:b7:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.252 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Using config drive#033[00m
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.259113949 +0000 UTC m=+0.069502593 container create 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.300 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:10 np0005481065 systemd[1]: Started libpod-conmon-4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e.scope.
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.226671343 +0000 UTC m=+0.037060037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:29:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.374066023 +0000 UTC m=+0.184454717 container init 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.387464041 +0000 UTC m=+0.197852675 container start 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.39416535 +0000 UTC m=+0.204553984 container attach 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 05:29:10 np0005481065 laughing_pascal[406966]: 167 167
Oct 11 05:29:10 np0005481065 systemd[1]: libpod-4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e.scope: Deactivated successfully.
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.397128074 +0000 UTC m=+0.207516768 container died 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 05:29:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dce4e2bc3ff7641886fe7fc748755d9900c4fbd2a425540535c01f599ee1b638-merged.mount: Deactivated successfully.
Oct 11 05:29:10 np0005481065 podman[406930]: 2025-10-11 09:29:10.466014288 +0000 UTC m=+0.276402902 container remove 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:29:10 np0005481065 systemd[1]: libpod-conmon-4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e.scope: Deactivated successfully.
Oct 11 05:29:10 np0005481065 podman[406990]: 2025-10-11 09:29:10.733912018 +0000 UTC m=+0.077429546 container create 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:29:10 np0005481065 podman[406990]: 2025-10-11 09:29:10.702523592 +0000 UTC m=+0.046041180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:29:10 np0005481065 systemd[1]: Started libpod-conmon-4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a.scope.
Oct 11 05:29:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:10 np0005481065 podman[406990]: 2025-10-11 09:29:10.860592383 +0000 UTC m=+0.204109971 container init 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:29:10 np0005481065 podman[406990]: 2025-10-11 09:29:10.875663938 +0000 UTC m=+0.219181446 container start 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:29:10 np0005481065 podman[406990]: 2025-10-11 09:29:10.879513407 +0000 UTC m=+0.223030945 container attach 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.937 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Creating config drive at /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config#033[00m
Oct 11 05:29:10 np0005481065 nova_compute[260935]: 2025-10-11 09:29:10.945 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04n0sd7x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.099 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04n0sd7x" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.124 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.128 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.330 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.332 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deleting local config drive /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config because it was imported into RBD.#033[00m
Oct 11 05:29:11 np0005481065 kernel: tapb6c25504-00: entered promiscuous mode
Oct 11 05:29:11 np0005481065 NetworkManager[44960]: <info>  [1760174951.3879] manager: (tapb6c25504-00): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Oct 11 05:29:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:11Z|01481|binding|INFO|Claiming lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for this chassis.
Oct 11 05:29:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:11Z|01482|binding|INFO|b6c25504-002e-4bf7-8b34-c9e43fcc55c2: Claiming fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.408 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], port_security=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feba:b777/64 2001:db8::f816:3eff:feba:b777/64', 'neutron:device_id': '80209953-3c4c-4932-a9a3-8166c70e1029', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6c25504-002e-4bf7-8b34-c9e43fcc55c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.409 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 bound to our chassis#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.412 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03e15108-5f8d-4fec-9ad8-133d9551c667#033[00m
Oct 11 05:29:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:11Z|01483|binding|INFO|Setting lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 ovn-installed in OVS
Oct 11 05:29:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:11Z|01484|binding|INFO|Setting lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 up in Southbound
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.432 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05f3b06d-c9aa-4773-8aab-5db25484927e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:11 np0005481065 systemd-udevd[407066]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:29:11 np0005481065 systemd-machined[215705]: New machine qemu-157-instance-00000085.
Oct 11 05:29:11 np0005481065 systemd[1]: Started Virtual Machine qemu-157-instance-00000085.
Oct 11 05:29:11 np0005481065 NetworkManager[44960]: <info>  [1760174951.4602] device (tapb6c25504-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:29:11 np0005481065 NetworkManager[44960]: <info>  [1760174951.4611] device (tapb6c25504-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.468 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e72670-1977-45ae-b5a2-71b20197b327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.472 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4d110c-2cbe-4a6e-8ec3-d79eaf103b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.506 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9655f1d2-8018-41c6-9dbf-ee8103748673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.528 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5270274-333a-4b40-8330-1c1fcd7e0536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407078, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7be5ba4c-3cd0-4e1f-be82-a09e08ed5e1d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675730, 'tstamp': 675730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407080, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675735, 'tstamp': 675735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407080, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.555 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e15108-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03e15108-50, col_values=(('external_ids', {'iface-id': '1945eb8f-f9e5-437a-9fc9-017b522ae777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.560 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.700 2 DEBUG nova.compute.manager [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.701 2 DEBUG oslo_concurrency.lockutils [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.701 2 DEBUG oslo_concurrency.lockutils [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.702 2 DEBUG oslo_concurrency.lockutils [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.702 2 DEBUG nova.compute.manager [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Processing event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.707 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Successfully updated port: c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.728 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.729 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquired lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.729 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.800 2 DEBUG nova.compute.manager [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.800 2 DEBUG nova.compute.manager [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing instance network info cache due to event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.801 2 DEBUG oslo_concurrency.lockutils [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.826 2 DEBUG nova.network.neutron [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updated VIF entry in instance network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.827 2 DEBUG nova.network.neutron [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.853 2 DEBUG oslo_concurrency.lockutils [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]: {
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "osd_id": 2,
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "type": "bluestore"
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:    },
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "osd_id": 0,
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "type": "bluestore"
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:    },
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "osd_id": 1,
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:        "type": "bluestore"
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]:    }
Oct 11 05:29:11 np0005481065 stupefied_panini[407007]: }
Oct 11 05:29:11 np0005481065 systemd[1]: libpod-4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a.scope: Deactivated successfully.
Oct 11 05:29:11 np0005481065 nova_compute[260935]: 2025-10-11 09:29:11.894 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:29:11 np0005481065 podman[407146]: 2025-10-11 09:29:11.934060507 +0000 UTC m=+0.024103271 container died 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:29:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c-merged.mount: Deactivated successfully.
Oct 11 05:29:11 np0005481065 podman[407146]: 2025-10-11 09:29:11.991025645 +0000 UTC m=+0.081068389 container remove 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:29:11 np0005481065 systemd[1]: libpod-conmon-4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a.scope: Deactivated successfully.
Oct 11 05:29:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:29:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:29:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:29:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:29:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 11a8cb53-aa10-4850-adc4-451f0b8c7010 does not exist
Oct 11 05:29:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 11334dce-3557-462c-b0bf-a702de73eba3 does not exist
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.359 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.360 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174952.3590016, 80209953-3c4c-4932-a9a3-8166c70e1029 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.361 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Started (Lifecycle Event)#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.364 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.367 2 INFO nova.virt.libvirt.driver [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance spawned successfully.#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.367 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.384 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.390 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.394 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.395 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.395 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.396 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.396 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.397 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.441 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.442 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174952.3602972, 80209953-3c4c-4932-a9a3-8166c70e1029 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.442 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.497 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.501 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174952.3634884, 80209953-3c4c-4932-a9a3-8166c70e1029 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.501 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.512 2 INFO nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 10.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.512 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.524 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.526 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.574 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.609 2 INFO nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 11.67 seconds to build instance.#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.630 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.766 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.803 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Releasing lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.804 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance network_info: |[{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.805 2 DEBUG oslo_concurrency.lockutils [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.806 2 DEBUG nova.network.neutron [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.811 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start _get_guest_xml network_info=[{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.820 2 WARNING nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.828 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.830 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.834 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.837 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.837 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.838 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.839 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.840 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.840 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.840 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.841 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.841 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.842 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.842 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.843 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.843 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:29:12 np0005481065 nova_compute[260935]: 2025-10-11 09:29:12.848 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 05:29:13 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:29:13 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:29:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:29:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509367093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.355 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.375 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.379 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.790 2 DEBUG nova.compute.manager [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.791 2 DEBUG oslo_concurrency.lockutils [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.791 2 DEBUG oslo_concurrency.lockutils [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.792 2 DEBUG oslo_concurrency.lockutils [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.792 2 DEBUG nova.compute.manager [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] No waiting events found dispatching network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.792 2 WARNING nova.compute.manager [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received unexpected event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:29:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:29:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137987385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.813 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.815 2 DEBUG nova.virt.libvirt.vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1930148258',display_name='tempest-TestServerBasicOps-server-1930148258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1930148258',id=134,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH102y8/0euXWtDeJlu4Y5Hi5/M8lShv6g1BNE2c8J5VmPE9Sc1i2z1jjNuXYbG9viqCuCV8FoDB59HS7lsdBA/LnJF8uoB7aDjGRRY+ZC3tUP+E3D+ZjN02qpM1ehWg2g==',key_name='tempest-TestServerBasicOps-820613880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e69b3710dc94c919d8bd75d2a540c10',ramdisk_id='',reservation_id='r-dglj62pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2065645952',owner_user_name='tempest-TestServerBasicOps-2065645952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95ad8a94007e4ab88fcad372c4695cf5',uuid=af8a1ab7-7512-4de4-8493-cfe85095fbc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.816 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converting VIF {"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.817 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.819 2 DEBUG nova.objects.instance [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lazy-loading 'pci_devices' on Instance uuid af8a1ab7-7512-4de4-8493-cfe85095fbc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.846 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <uuid>af8a1ab7-7512-4de4-8493-cfe85095fbc5</uuid>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <name>instance-00000086</name>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestServerBasicOps-server-1930148258</nova:name>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:29:12</nova:creationTime>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:user uuid="95ad8a94007e4ab88fcad372c4695cf5">tempest-TestServerBasicOps-2065645952-project-member</nova:user>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:project uuid="9e69b3710dc94c919d8bd75d2a540c10">tempest-TestServerBasicOps-2065645952</nova:project>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <nova:port uuid="c6f854bc-831f-4bb1-ad9f-e1aa03343d25">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <entry name="serial">af8a1ab7-7512-4de4-8493-cfe85095fbc5</entry>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <entry name="uuid">af8a1ab7-7512-4de4-8493-cfe85095fbc5</entry>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:6b:d3:83"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <target dev="tapc6f854bc-83"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/console.log" append="off"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:29:13 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:29:13 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:29:13 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:29:13 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.847 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Preparing to wait for external event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.847 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.848 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.848 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.849 2 DEBUG nova.virt.libvirt.vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1930148258',display_name='tempest-TestServerBasicOps-server-1930148258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1930148258',id=134,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH102y8/0euXWtDeJlu4Y5Hi5/M8lShv6g1BNE2c8J5VmPE9Sc1i2z1jjNuXYbG9viqCuCV8FoDB59HS7lsdBA/LnJF8uoB7aDjGRRY+ZC3tUP+E3D+ZjN02qpM1ehWg2g==',key_name='tempest-TestServerBasicOps-820613880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e69b3710dc94c919d8bd75d2a540c10',ramdisk_id='',reservation_id='r-dglj62pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2065645952',owner_user_name='tempest-TestServerBasicOps-2065645952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95ad8a94007e4ab88fcad372c4695cf5',uuid=af8a1ab7-7512-4de4-8493-cfe85095fbc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.849 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converting VIF {"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.850 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.851 2 DEBUG os_vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6f854bc-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6f854bc-83, col_values=(('external_ids', {'iface-id': 'c6f854bc-831f-4bb1-ad9f-e1aa03343d25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:d3:83', 'vm-uuid': 'af8a1ab7-7512-4de4-8493-cfe85095fbc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:13 np0005481065 NetworkManager[44960]: <info>  [1760174953.8601] manager: (tapc6f854bc-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.870 2 INFO os_vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83')#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.944 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.944 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.945 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] No VIF found with MAC fa:16:3e:6b:d3:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.945 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Using config drive#033[00m
Oct 11 05:29:13 np0005481065 nova_compute[260935]: 2025-10-11 09:29:13.967 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.192 2 DEBUG nova.network.neutron [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updated VIF entry in instance network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.192 2 DEBUG nova.network.neutron [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.216 2 DEBUG oslo_concurrency.lockutils [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.403 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Creating config drive at /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.412 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ujcakhs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.571 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ujcakhs" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.617 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.622 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.776 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.777 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deleting local config drive /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config because it was imported into RBD.#033[00m
Oct 11 05:29:14 np0005481065 kernel: tapc6f854bc-83: entered promiscuous mode
Oct 11 05:29:14 np0005481065 NetworkManager[44960]: <info>  [1760174954.8281] manager: (tapc6f854bc-83): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Oct 11 05:29:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:14Z|01485|binding|INFO|Claiming lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for this chassis.
Oct 11 05:29:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:14Z|01486|binding|INFO|c6f854bc-831f-4bb1-ad9f-e1aa03343d25: Claiming fa:16:3e:6b:d3:83 10.100.0.7
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.837 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:d3:83 10.100.0.7'], port_security=['fa:16:3e:6b:d3:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'af8a1ab7-7512-4de4-8493-cfe85095fbc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-079fe911-cb70-4ab3-8112-a68496105754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e69b3710dc94c919d8bd75d2a540c10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0dd44ff9-fcd8-4fbd-bf9d-f19c93cd632d af5d01b3-d9f2-4ea0-a9bb-fc66bdc2afc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb94e045-6353-4848-a698-5f94ed22b64e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c6f854bc-831f-4bb1-ad9f-e1aa03343d25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.839 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 in datapath 079fe911-cb70-4ab3-8112-a68496105754 bound to our chassis#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 079fe911-cb70-4ab3-8112-a68496105754#033[00m
Oct 11 05:29:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:14Z|01487|binding|INFO|Setting lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 ovn-installed in OVS
Oct 11 05:29:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:14Z|01488|binding|INFO|Setting lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 up in Southbound
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.856 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8b1a7f-9747-445f-b09f-90122377830e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.857 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap079fe911-c1 in ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.859 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap079fe911-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02b08daf-8443-4065-8534-41ac1a953b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.860 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22a6b359-ed8a-491e-a221-85c054528b4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 nova_compute[260935]: 2025-10-11 09:29:14.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:14 np0005481065 systemd-udevd[407350]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.872 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9758400b-cd54-4d8e-b4bf-f4d3aceea036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 NetworkManager[44960]: <info>  [1760174954.8814] device (tapc6f854bc-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:29:14 np0005481065 NetworkManager[44960]: <info>  [1760174954.8822] device (tapc6f854bc-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:29:14 np0005481065 systemd-machined[215705]: New machine qemu-158-instance-00000086.
Oct 11 05:29:14 np0005481065 systemd[1]: Started Virtual Machine qemu-158-instance-00000086.
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[177b04b0-43d1-43d1-827b-d7926ee6c380]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.929 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5edfe7-3bf9-49c7-8f31-59617614f907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 NetworkManager[44960]: <info>  [1760174954.9414] manager: (tap079fe911-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/578)
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.943 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8d4e9e-ccf2-410b-b242-bd34b59d4b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.982 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[347e4432-5481-4747-b956-08e6eb695cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:14 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.984 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7e482f88-f1f9-4f26-8c55-71893800845c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 05:29:15 np0005481065 NetworkManager[44960]: <info>  [1760174955.1121] device (tap079fe911-c0): carrier: link connected
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.120 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bc923c81-4ab8-49e9-9a57-9b8b23842fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4f4704-39d2-4713-b8db-34c5e084ff7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap079fe911-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:bb:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679981, 'reachable_time': 32917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407383, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d24f8e4e-e8cc-4617-bbd7-18393f7397f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:bbd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679981, 'tstamp': 679981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407384, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.199 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a0ca8f-d272-4630-93bf-66a51ef3abf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap079fe911-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:bb:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679981, 'reachable_time': 32917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407385, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.219 2 DEBUG nova.compute.manager [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG oslo_concurrency.lockutils [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG oslo_concurrency.lockutils [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG oslo_concurrency.lockutils [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG nova.compute.manager [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Processing event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.225 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.250 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9f056072-59cf-46f5-bcd4-202f04a78657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.326 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a31fe241-92bc-4205-bd01-f731212a54b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.328 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079fe911-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.328 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.329 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap079fe911-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:29:15 np0005481065 NetworkManager[44960]: <info>  [1760174955.3321] manager: (tap079fe911-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Oct 11 05:29:15 np0005481065 kernel: tap079fe911-c0: entered promiscuous mode
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.336 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap079fe911-c0, col_values=(('external_ids', {'iface-id': '853c3a89-16e1-4df4-8c36-5b3e31f01524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.342 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/079fe911-cb70-4ab3-8112-a68496105754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/079fe911-cb70-4ab3-8112-a68496105754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 05:29:15 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:15Z|01489|binding|INFO|Releasing lport 853c3a89-16e1-4df4-8c36-5b3e31f01524 from this chassis (sb_readonly=0)
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.347 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42d4288e-e473-49f4-93f6-fb4792410d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.349 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-079fe911-cb70-4ab3-8112-a68496105754
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/079fe911-cb70-4ab3-8112-a68496105754.pid.haproxy
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 079fe911-cb70-4ab3-8112-a68496105754
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 05:29:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.351 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'env', 'PROCESS_TAG=haproxy-079fe911-cb70-4ab3-8112-a68496105754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/079fe911-cb70-4ab3-8112-a68496105754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 05:29:15 np0005481065 nova_compute[260935]: 2025-10-11 09:29:15.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:15 np0005481065 podman[407457]: 2025-10-11 09:29:15.81321465 +0000 UTC m=+0.073649450 container create 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 05:29:15 np0005481065 podman[407457]: 2025-10-11 09:29:15.772533402 +0000 UTC m=+0.032968252 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:29:15 np0005481065 systemd[1]: Started libpod-conmon-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc.scope.
Oct 11 05:29:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:29:15 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c582133b8efc079cf0c19ee163457092710bcac477f70ae60348ebd6ded8cd4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:29:15 np0005481065 podman[407457]: 2025-10-11 09:29:15.927076493 +0000 UTC m=+0.187511293 container init 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:29:15 np0005481065 podman[407457]: 2025-10-11 09:29:15.938269369 +0000 UTC m=+0.198704139 container start 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:29:15 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : New worker (407479) forked
Oct 11 05:29:15 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : Loading success.
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.157 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174956.1568, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.157 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Started (Lifecycle Event)
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.160 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.163 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.166 2 INFO nova.virt.libvirt.driver [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance spawned successfully.
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.166 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.184 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.191 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.195 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.196 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.198 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.198 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.199 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.199 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.226 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.226 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174956.1572866, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.226 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Paused (Lifecycle Event)
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.252 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.255 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174956.1624923, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.255 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Resumed (Lifecycle Event)
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.262 2 INFO nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 7.41 seconds to spawn the instance on the hypervisor.
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.262 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.270 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.297 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.319 2 INFO nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 8.57 seconds to build instance.
Oct 11 05:29:16 np0005481065 nova_compute[260935]: 2025-10-11 09:29:16.335 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:29:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.316 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.316 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] No waiting events found dispatching network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.317 2 WARNING nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received unexpected event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for instance with vm_state active and task_state None.
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing instance network info cache due to event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:29:17 np0005481065 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG nova.network.neutron [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:29:18 np0005481065 nova_compute[260935]: 2025-10-11 09:29:18.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:18 np0005481065 nova_compute[260935]: 2025-10-11 09:29:18.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:29:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Oct 11 05:29:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:19 np0005481065 nova_compute[260935]: 2025-10-11 09:29:19.422 2 DEBUG nova.compute.manager [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:29:19 np0005481065 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG nova.compute.manager [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing instance network info cache due to event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:29:19 np0005481065 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG oslo_concurrency.lockutils [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:29:19 np0005481065 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG oslo_concurrency.lockutils [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:29:19 np0005481065 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG nova.network.neutron [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:29:19 np0005481065 podman[407490]: 2025-10-11 09:29:19.789897524 +0000 UTC m=+0.080140073 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:29:20 np0005481065 nova_compute[260935]: 2025-10-11 09:29:20.462 2 DEBUG nova.network.neutron [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updated VIF entry in instance network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:29:20 np0005481065 nova_compute[260935]: 2025-10-11 09:29:20.463 2 DEBUG nova.network.neutron [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:20 np0005481065 nova_compute[260935]: 2025-10-11 09:29:20.488 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Oct 11 05:29:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Oct 11 05:29:23 np0005481065 nova_compute[260935]: 2025-10-11 09:29:23.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:23 np0005481065 nova_compute[260935]: 2025-10-11 09:29:23.762 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:23 np0005481065 nova_compute[260935]: 2025-10-11 09:29:23.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:23 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 11 05:29:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:24 np0005481065 nova_compute[260935]: 2025-10-11 09:29:24.668 2 DEBUG nova.network.neutron [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updated VIF entry in instance network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:29:24 np0005481065 nova_compute[260935]: 2025-10-11 09:29:24.669 2 DEBUG nova.network.neutron [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:24 np0005481065 nova_compute[260935]: 2025-10-11 09:29:24.685 2 DEBUG oslo_concurrency.lockutils [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:24 np0005481065 podman[407508]: 2025-10-11 09:29:24.756516585 +0000 UTC m=+0.064005677 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 05:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:29:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:29:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Oct 11 05:29:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:25Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:b7:77 10.100.0.6
Oct 11 05:29:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:25Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:b7:77 10.100.0.6
Oct 11 05:29:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:29:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105732265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:29:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:29:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105732265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:29:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Oct 11 05:29:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:28Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:d3:83 10.100.0.7
Oct 11 05:29:28 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:28Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:d3:83 10.100.0.7
Oct 11 05:29:28 np0005481065 nova_compute[260935]: 2025-10-11 09:29:28.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:28 np0005481065 nova_compute[260935]: 2025-10-11 09:29:28.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 554 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Oct 11 05:29:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:29 np0005481065 podman[407528]: 2025-10-11 09:29:29.755583218 +0000 UTC m=+0.061613860 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:29:29 np0005481065 podman[407529]: 2025-10-11 09:29:29.813373029 +0000 UTC m=+0.117202879 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 05:29:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 554 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 4.2 MiB/s wr, 108 op/s
Oct 11 05:29:32 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 11 05:29:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 820 KiB/s rd, 4.3 MiB/s wr, 140 op/s
Oct 11 05:29:33 np0005481065 nova_compute[260935]: 2025-10-11 09:29:33.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:33 np0005481065 nova_compute[260935]: 2025-10-11 09:29:33.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:34 np0005481065 nova_compute[260935]: 2025-10-11 09:29:34.894 2 DEBUG nova.compute.manager [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:34 np0005481065 nova_compute[260935]: 2025-10-11 09:29:34.895 2 DEBUG nova.compute.manager [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing instance network info cache due to event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:29:34 np0005481065 nova_compute[260935]: 2025-10-11 09:29:34.896 2 DEBUG oslo_concurrency.lockutils [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:29:34 np0005481065 nova_compute[260935]: 2025-10-11 09:29:34.896 2 DEBUG oslo_concurrency.lockutils [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:29:34 np0005481065 nova_compute[260935]: 2025-10-11 09:29:34.897 2 DEBUG nova.network.neutron [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.001 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.002 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.002 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.003 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.004 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.006 2 INFO nova.compute.manager [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Terminating instance#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.007 2 DEBUG nova.compute.manager [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:29:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Oct 11 05:29:35 np0005481065 kernel: tapb6c25504-00 (unregistering): left promiscuous mode
Oct 11 05:29:35 np0005481065 NetworkManager[44960]: <info>  [1760174975.4086] device (tapb6c25504-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:29:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:35Z|01490|binding|INFO|Releasing lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 from this chassis (sb_readonly=0)
Oct 11 05:29:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:35Z|01491|binding|INFO|Setting lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 down in Southbound
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:35Z|01492|binding|INFO|Removing iface tapb6c25504-00 ovn-installed in OVS
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.441 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], port_security=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feba:b777/64 2001:db8::f816:3eff:feba:b777/64', 'neutron:device_id': '80209953-3c4c-4932-a9a3-8166c70e1029', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6c25504-002e-4bf7-8b34-c9e43fcc55c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.445 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 unbound from our chassis#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.450 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03e15108-5f8d-4fec-9ad8-133d9551c667#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.480 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[494b3feb-322c-45a8-828e-c516b1541f30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:35 np0005481065 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct 11 05:29:35 np0005481065 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000085.scope: Consumed 13.534s CPU time.
Oct 11 05:29:35 np0005481065 systemd-machined[215705]: Machine qemu-157-instance-00000085 terminated.
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.541 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[96e1d028-37e2-4377-8d1f-cf3819ea7ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.547 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[03aae237-803a-46c3-93a8-681ff1bc7ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.599 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac21f5f-e7ef-4bcc-ab03-fc98cddd406f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.625 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d3ebf5-cef9-477a-a914-30c11d1f90d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407586, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.666 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0adabb10-0322-491b-829a-d9319c1eaa24]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675730, 'tstamp': 675730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407588, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675735, 'tstamp': 675735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407588, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.670 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.674 2 INFO nova.virt.libvirt.driver [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance destroyed successfully.#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.674 2 DEBUG nova.objects.instance [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 80209953-3c4c-4932-a9a3-8166c70e1029 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.689 2 DEBUG nova.virt.libvirt.vif [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:28:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-373220295',display_name='tempest-TestGettingAddress-server-373220295',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-373220295',id=133,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:29:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-0s96u5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:29:12Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=80209953-3c4c-4932-a9a3-8166c70e1029,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.690 2 DEBUG nova.network.os_vif_util [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.692 2 DEBUG nova.network.os_vif_util [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.692 2 DEBUG os_vif [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6c25504-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.714 2 DEBUG nova.compute.manager [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-unplugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.714 2 DEBUG oslo_concurrency.lockutils [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.715 2 DEBUG oslo_concurrency.lockutils [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.715 2 DEBUG oslo_concurrency.lockutils [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.715 2 DEBUG nova.compute.manager [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] No waiting events found dispatching network-vif-unplugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.716 2 DEBUG nova.compute.manager [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-unplugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.725 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e15108-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.725 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.728 2 INFO os_vif [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00')#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03e15108-50, col_values=(('external_ids', {'iface-id': '1945eb8f-f9e5-437a-9fc9-017b522ae777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.730 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.826 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:29:35 np0005481065 nova_compute[260935]: 2025-10-11 09:29:35.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.827 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:29:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Oct 11 05:29:37 np0005481065 nova_compute[260935]: 2025-10-11 09:29:37.831 2 DEBUG nova.compute.manager [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:37 np0005481065 nova_compute[260935]: 2025-10-11 09:29:37.831 2 DEBUG oslo_concurrency.lockutils [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:37 np0005481065 nova_compute[260935]: 2025-10-11 09:29:37.832 2 DEBUG oslo_concurrency.lockutils [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:37 np0005481065 nova_compute[260935]: 2025-10-11 09:29:37.832 2 DEBUG oslo_concurrency.lockutils [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:37 np0005481065 nova_compute[260935]: 2025-10-11 09:29:37.833 2 DEBUG nova.compute.manager [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] No waiting events found dispatching network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:29:37 np0005481065 nova_compute[260935]: 2025-10-11 09:29:37.834 2 WARNING nova.compute.manager [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received unexpected event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.348 2 INFO nova.virt.libvirt.driver [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deleting instance files /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029_del#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.349 2 INFO nova.virt.libvirt.driver [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deletion of /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029_del complete#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.359 2 DEBUG nova.network.neutron [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updated VIF entry in instance network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.360 2 DEBUG nova.network.neutron [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.399 2 DEBUG oslo_concurrency.lockutils [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.418 2 INFO nova.compute.manager [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 3.41 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.418 2 DEBUG oslo.service.loopingcall [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.419 2 DEBUG nova.compute.manager [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.419 2 DEBUG nova.network.neutron [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:38 np0005481065 nova_compute[260935]: 2025-10-11 09:29:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Oct 11 05:29:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.592 2 DEBUG nova.network.neutron [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.615 2 INFO nova.compute.manager [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 1.20 seconds to deallocate network for instance.#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.661 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.662 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.669 2 DEBUG nova.compute.manager [req-7e3ebcc4-2cfd-4525-9dae-dca19877266a req-30e8dde6-fa39-4173-a806-af24ed2c9a01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-deleted-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.703 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.728 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.728 2 DEBUG nova.compute.provider_tree [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.757 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.792 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:29:39 np0005481065 nova_compute[260935]: 2025-10-11 09:29:39.927 2 DEBUG oslo_concurrency.processutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:29:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860391964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.384 2 DEBUG oslo_concurrency.processutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.394 2 DEBUG nova.compute.provider_tree [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.417 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.441 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.468 2 INFO nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 80209953-3c4c-4932-a9a3-8166c70e1029#033[00m
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.541 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:40 np0005481065 nova_compute[260935]: 2025-10-11 09:29:40.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 129 KiB/s wr, 41 op/s
Oct 11 05:29:41 np0005481065 nova_compute[260935]: 2025-10-11 09:29:41.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.610 2 DEBUG nova.compute.manager [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.611 2 DEBUG nova.compute.manager [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing instance network info cache due to event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.611 2 DEBUG oslo_concurrency.lockutils [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.612 2 DEBUG oslo_concurrency.lockutils [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.612 2 DEBUG nova.network.neutron [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.716 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.716 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.716 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.717 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.717 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.718 2 INFO nova.compute.manager [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Terminating instance#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.719 2 DEBUG nova.compute.manager [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:29:42 np0005481065 kernel: tape89af28e-d1 (unregistering): left promiscuous mode
Oct 11 05:29:42 np0005481065 NetworkManager[44960]: <info>  [1760174982.7760] device (tape89af28e-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:29:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:42Z|01493|binding|INFO|Releasing lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 from this chassis (sb_readonly=0)
Oct 11 05:29:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:42Z|01494|binding|INFO|Setting lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 down in Southbound
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:42Z|01495|binding|INFO|Removing iface tape89af28e-d1 ovn-installed in OVS
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.799 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], port_security=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:640e/64 2001:db8::f816:3eff:fee6:640e/64', 'neutron:device_id': '16108538-ef61-4d43-8a42-c324c334138b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:29:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.800 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 unbound from our chassis#033[00m
Oct 11 05:29:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.801 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03e15108-5f8d-4fec-9ad8-133d9551c667, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:29:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17b75d14-12c4-457e-b1ca-55a4b020c526]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.805 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 namespace which is not needed anymore#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:42 np0005481065 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 11 05:29:42 np0005481065 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000084.scope: Consumed 16.319s CPU time.
Oct 11 05:29:42 np0005481065 systemd-machined[215705]: Machine qemu-156-instance-00000084 terminated.
Oct 11 05:29:42 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : haproxy version is 2.8.14-c23fe91
Oct 11 05:29:42 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : path to executable is /usr/sbin/haproxy
Oct 11 05:29:42 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [WARNING]  (405613) : Exiting Master process...
Oct 11 05:29:42 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [ALERT]    (405613) : Current worker (405615) exited with code 143 (Terminated)
Oct 11 05:29:42 np0005481065 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [WARNING]  (405613) : All workers exited. Exiting... (0)
Oct 11 05:29:42 np0005481065 systemd[1]: libpod-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4.scope: Deactivated successfully.
Oct 11 05:29:42 np0005481065 podman[407662]: 2025-10-11 09:29:42.963868321 +0000 UTC m=+0.060622402 container died d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.966 2 INFO nova.virt.libvirt.driver [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance destroyed successfully.#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.966 2 DEBUG nova.objects.instance [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 16108538-ef61-4d43-8a42-c324c334138b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.983 2 DEBUG nova.virt.libvirt.vif [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1335802812',display_name='tempest-TestGettingAddress-server-1335802812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1335802812',id=132,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:28:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-7xgmxaip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:28:33Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=16108538-ef61-4d43-8a42-c324c334138b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.984 2 DEBUG nova.network.os_vif_util [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.986 2 DEBUG nova.network.os_vif_util [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.986 2 DEBUG os_vif [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape89af28e-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:42 np0005481065 nova_compute[260935]: 2025-10-11 09:29:42.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.001 2 INFO os_vif [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1')#033[00m
Oct 11 05:29:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4-userdata-shm.mount: Deactivated successfully.
Oct 11 05:29:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4ed2907c6f4d557002a67a93ce41ed6b3e05074fb0e893ba2f5111564f8191c6-merged.mount: Deactivated successfully.
Oct 11 05:29:43 np0005481065 podman[407662]: 2025-10-11 09:29:43.018291467 +0000 UTC m=+0.115045558 container cleanup d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:29:43 np0005481065 systemd[1]: libpod-conmon-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4.scope: Deactivated successfully.
Oct 11 05:29:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 133 KiB/s wr, 60 op/s
Oct 11 05:29:43 np0005481065 podman[407720]: 2025-10-11 09:29:43.104301084 +0000 UTC m=+0.053456890 container remove d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a045684-ba3b-4097-b493-f7e6e92a5812]: (4, ('Sat Oct 11 09:29:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 (d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4)\nd0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4\nSat Oct 11 09:29:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 (d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4)\nd0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.113 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b52db876-9929-42b6-8098-2f3401fb3c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:43 np0005481065 kernel: tap03e15108-50: left promiscuous mode
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.124 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e31bdbc9-c282-4c48-8e29-580b2374bf52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.145 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[baed22a9-8569-4bc3-9c32-998c312cf999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.146 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03239e74-e38c-4535-81fc-dbd225eabd84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf0a76b-5b70-4139-ab79-9380e5790037]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675702, 'reachable_time': 25969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407738, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.180 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:29:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.180 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4a6b4b-9478-40db-aa9f-d2b892d2f985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:43 np0005481065 systemd[1]: run-netns-ovnmeta\x2d03e15108\x2d5f8d\x2d4fec\x2d9ad8\x2d133d9551c667.mount: Deactivated successfully.
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.464 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.465 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.465 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.472 2 INFO nova.virt.libvirt.driver [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deleting instance files /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b_del#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.473 2 INFO nova.virt.libvirt.driver [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deletion of /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b_del complete#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.521 2 INFO nova.compute.manager [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.522 2 DEBUG oslo.service.loopingcall [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.523 2 DEBUG nova.compute.manager [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.523 2 DEBUG nova.network.neutron [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.722 2 DEBUG nova.compute.manager [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-unplugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.724 2 DEBUG oslo_concurrency.lockutils [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.724 2 DEBUG oslo_concurrency.lockutils [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.725 2 DEBUG oslo_concurrency.lockutils [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.725 2 DEBUG nova.compute.manager [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] No waiting events found dispatching network-vif-unplugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:29:43 np0005481065 nova_compute[260935]: 2025-10-11 09:29:43.726 2 DEBUG nova.compute.manager [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-unplugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:29:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 17 KiB/s wr, 29 op/s
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.515 2 DEBUG nova.network.neutron [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.540 2 INFO nova.compute.manager [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 2.02 seconds to deallocate network for instance.#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.623 2 DEBUG nova.compute.manager [req-09a1a7a4-9cb6-4804-aaca-771c0db9b481 req-803916f9-a2a7-4325-ba3b-4b7001fcfaf0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-deleted-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.627 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.627 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.722 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.741 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.746 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.747 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.747 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.760 2 DEBUG oslo_concurrency.processutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.827 2 DEBUG nova.compute.manager [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.828 2 DEBUG oslo_concurrency.lockutils [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:45.830 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.829 2 DEBUG oslo_concurrency.lockutils [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.832 2 DEBUG oslo_concurrency.lockutils [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.832 2 DEBUG nova.compute.manager [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] No waiting events found dispatching network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:29:45 np0005481065 nova_compute[260935]: 2025-10-11 09:29:45.833 2 WARNING nova.compute.manager [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received unexpected event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:29:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:29:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256189463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.257 2 DEBUG oslo_concurrency.processutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.264 2 DEBUG nova.compute.provider_tree [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.289 2 DEBUG nova.scheduler.client.report [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.319 2 DEBUG nova.network.neutron [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updated VIF entry in instance network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.320 2 DEBUG nova.network.neutron [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.325 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.339 2 DEBUG oslo_concurrency.lockutils [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.347 2 INFO nova.scheduler.client.report [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 16108538-ef61-4d43-8a42-c324c334138b#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.416 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.730 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.755 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:29:46 np0005481065 nova_compute[260935]: 2025-10-11 09:29:46.755 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 17 KiB/s wr, 29 op/s
Oct 11 05:29:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:29:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622301888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.266 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.390 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.399 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.399 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.611 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.612 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2597MB free_disk=59.739524841308594GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.613 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.613 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.731 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.731 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance af8a1ab7-7512-4de4-8493-cfe85095fbc5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.733 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.808 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:29:47 np0005481065 nova_compute[260935]: 2025-10-11 09:29:47.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:29:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299407443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:29:48 np0005481065 nova_compute[260935]: 2025-10-11 09:29:48.352 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:29:48 np0005481065 nova_compute[260935]: 2025-10-11 09:29:48.363 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:29:48 np0005481065 nova_compute[260935]: 2025-10-11 09:29:48.394 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:29:48 np0005481065 nova_compute[260935]: 2025-10-11 09:29:48.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:48 np0005481065 nova_compute[260935]: 2025-10-11 09:29:48.424 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:29:48 np0005481065 nova_compute[260935]: 2025-10-11 09:29:48.424 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 05:29:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:50 np0005481065 nova_compute[260935]: 2025-10-11 09:29:50.670 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174975.6681478, 80209953-3c4c-4932-a9a3-8166c70e1029 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:29:50 np0005481065 nova_compute[260935]: 2025-10-11 09:29:50.670 2 INFO nova.compute.manager [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:29:50 np0005481065 nova_compute[260935]: 2025-10-11 09:29:50.697 2 DEBUG nova.compute.manager [None req-cf949d5a-071b-4c69-9967-4b12635529af - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:29:50 np0005481065 podman[407807]: 2025-10-11 09:29:50.770187829 +0000 UTC m=+0.065751206 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:29:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Oct 11 05:29:52 np0005481065 nova_compute[260935]: 2025-10-11 09:29:52.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Oct 11 05:29:53 np0005481065 nova_compute[260935]: 2025-10-11 09:29:53.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:54.659 162924 DEBUG eventlet.wsgi.server [-] (162924) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:54.661 162924 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: Accept: */*#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: Connection: close#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: Content-Type: text/plain#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: Host: 169.254.169.254#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: User-Agent: curl/7.84.0#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: X-Forwarded-For: 10.100.0.7#015
Oct 11 05:29:54 np0005481065 ovn_metadata_agent[162810]: X-Ovn-Network-Id: 079fe911-cb70-4ab3-8112-a68496105754 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:29:54
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'backups']
Oct 11 05:29:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 11 05:29:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:55Z|01496|binding|INFO|Releasing lport 853c3a89-16e1-4df4-8c36-5b3e31f01524 from this chassis (sb_readonly=0)
Oct 11 05:29:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:55Z|01497|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:29:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:55Z|01498|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:29:55 np0005481065 nova_compute[260935]: 2025-10-11 09:29:55.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:29:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:29:55 np0005481065 podman[407830]: 2025-10-11 09:29:55.801696281 +0000 UTC m=+0.099178800 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 11 05:29:55 np0005481065 haproxy-metadata-proxy-079fe911-cb70-4ab3-8112-a68496105754[407479]: 10.100.0.7:60544 [11/Oct/2025:09:29:54.657] listener listener/metadata 0/0/0/1203/1203 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 11 05:29:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:55.859 162924 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct 11 05:29:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:55.861 162924 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.1997545#033[00m
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.022 162924 DEBUG eventlet.wsgi.server [-] (162924) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.023 162924 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: Accept: */*#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: Connection: close#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: Content-Length: 100#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: Content-Type: application/x-www-form-urlencoded#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: Host: 169.254.169.254#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: User-Agent: curl/7.84.0#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: X-Forwarded-For: 10.100.0.7#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: X-Ovn-Network-Id: 079fe911-cb70-4ab3-8112-a68496105754#015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: #015
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.313 162924 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct 11 05:29:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.314 162924 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2909412#033[00m
Oct 11 05:29:56 np0005481065 haproxy-metadata-proxy-079fe911-cb70-4ab3-8112-a68496105754[407479]: 10.100.0.7:60552 [11/Oct/2025:09:29:56.021] listener listener/metadata 0/0/0/292/292 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 11 05:29:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 11 05:29:57 np0005481065 nova_compute[260935]: 2025-10-11 09:29:57.961 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174982.9584463, 16108538-ef61-4d43-8a42-c324c334138b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:29:57 np0005481065 nova_compute[260935]: 2025-10-11 09:29:57.962 2 INFO nova.compute.manager [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:29:57 np0005481065 nova_compute[260935]: 2025-10-11 09:29:57.990 2 DEBUG nova.compute.manager [None req-e2db50e4-d6f7-43eb-a707-0307841cf8fe - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:29:57 np0005481065 nova_compute[260935]: 2025-10-11 09:29:57.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.272 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.273 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.273 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.274 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.274 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.277 2 INFO nova.compute.manager [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Terminating instance#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.280 2 DEBUG nova.compute.manager [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:58 np0005481065 kernel: tapc6f854bc-83 (unregistering): left promiscuous mode
Oct 11 05:29:58 np0005481065 NetworkManager[44960]: <info>  [1760174998.8526] device (tapc6f854bc-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:29:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:58Z|01499|binding|INFO|Releasing lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 from this chassis (sb_readonly=0)
Oct 11 05:29:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:58Z|01500|binding|INFO|Setting lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 down in Southbound
Oct 11 05:29:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:29:58Z|01501|binding|INFO|Removing iface tapc6f854bc-83 ovn-installed in OVS
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.873 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:d3:83 10.100.0.7'], port_security=['fa:16:3e:6b:d3:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'af8a1ab7-7512-4de4-8493-cfe85095fbc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-079fe911-cb70-4ab3-8112-a68496105754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e69b3710dc94c919d8bd75d2a540c10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0dd44ff9-fcd8-4fbd-bf9d-f19c93cd632d af5d01b3-d9f2-4ea0-a9bb-fc66bdc2afc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb94e045-6353-4848-a698-5f94ed22b64e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c6f854bc-831f-4bb1-ad9f-e1aa03343d25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:29:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.874 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 in datapath 079fe911-cb70-4ab3-8112-a68496105754 unbound from our chassis#033[00m
Oct 11 05:29:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.876 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 079fe911-cb70-4ab3-8112-a68496105754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:29:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b37bbfd2-f216-4e72-b57a-1e75757c981d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.879 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 namespace which is not needed anymore#033[00m
Oct 11 05:29:58 np0005481065 nova_compute[260935]: 2025-10-11 09:29:58.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:58 np0005481065 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct 11 05:29:58 np0005481065 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000086.scope: Consumed 14.747s CPU time.
Oct 11 05:29:58 np0005481065 systemd-machined[215705]: Machine qemu-158-instance-00000086 terminated.
Oct 11 05:29:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 11 05:29:59 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : haproxy version is 2.8.14-c23fe91
Oct 11 05:29:59 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : path to executable is /usr/sbin/haproxy
Oct 11 05:29:59 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [WARNING]  (407477) : Exiting Master process...
Oct 11 05:29:59 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [ALERT]    (407477) : Current worker (407479) exited with code 143 (Terminated)
Oct 11 05:29:59 np0005481065 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [WARNING]  (407477) : All workers exited. Exiting... (0)
Oct 11 05:29:59 np0005481065 systemd[1]: libpod-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc.scope: Deactivated successfully.
Oct 11 05:29:59 np0005481065 podman[407873]: 2025-10-11 09:29:59.076546188 +0000 UTC m=+0.090146575 container died 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.132 2 INFO nova.virt.libvirt.driver [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance destroyed successfully.#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.133 2 DEBUG nova.objects.instance [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lazy-loading 'resources' on Instance uuid af8a1ab7-7512-4de4-8493-cfe85095fbc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.164 2 DEBUG nova.virt.libvirt.vif [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1930148258',display_name='tempest-TestServerBasicOps-server-1930148258',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1930148258',id=134,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH102y8/0euXWtDeJlu4Y5Hi5/M8lShv6g1BNE2c8J5VmPE9Sc1i2z1jjNuXYbG9viqCuCV8FoDB59HS7lsdBA/LnJF8uoB7aDjGRRY+ZC3tUP+E3D+ZjN02qpM1ehWg2g==',key_name='tempest-TestServerBasicOps-820613880',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:29:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e69b3710dc94c919d8bd75d2a540c10',ramdisk_id='',reservation_id='r-dglj62pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-2065645952',owner_user_name='tempest-TestServerBasicOps-2065645952-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:29:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95ad8a94007e4ab88fcad372c4695cf5',uuid=af8a1ab7-7512-4de4-8493-cfe85095fbc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": 
"fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.165 2 DEBUG nova.network.os_vif_util [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converting VIF {"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.166 2 DEBUG nova.network.os_vif_util [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.167 2 DEBUG os_vif [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6f854bc-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.181 2 INFO os_vif [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83')#033[00m
Oct 11 05:29:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc-userdata-shm.mount: Deactivated successfully.
Oct 11 05:29:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c582133b8efc079cf0c19ee163457092710bcac477f70ae60348ebd6ded8cd4e-merged.mount: Deactivated successfully.
Oct 11 05:29:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:29:59 np0005481065 podman[407873]: 2025-10-11 09:29:59.352422464 +0000 UTC m=+0.366022881 container cleanup 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 05:29:59 np0005481065 systemd[1]: libpod-conmon-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc.scope: Deactivated successfully.
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.407 2 DEBUG nova.compute.manager [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-unplugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.407 2 DEBUG oslo_concurrency.lockutils [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.409 2 DEBUG oslo_concurrency.lockutils [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.410 2 DEBUG oslo_concurrency.lockutils [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.411 2 DEBUG nova.compute.manager [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] No waiting events found dispatching network-vif-unplugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.411 2 DEBUG nova.compute.manager [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-unplugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:29:59 np0005481065 podman[407931]: 2025-10-11 09:29:59.724621408 +0000 UTC m=+0.340116989 container remove 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.735 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4040b48-6fb3-4f48-a2f0-734844a11df5]: (4, ('Sat Oct 11 09:29:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 (7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc)\n7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc\nSat Oct 11 09:29:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 (7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc)\n7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.738 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1857f95d-1bde-4c28-bbfb-f4ac825eb57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.740 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079fe911-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:29:59 np0005481065 kernel: tap079fe911-c0: left promiscuous mode
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.777 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3650f5-1c58-4459-99d9-268c476b14d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 nova_compute[260935]: 2025-10-11 09:29:59.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[964a435c-da45-45e8-a873-c8f3bb4a303f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.806 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b4934f-b6bb-4db7-80e4-e0a819713b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[792ea839-065c-44da-ac36-575228e5fc86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679962, 'reachable_time': 33203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407952, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 systemd[1]: run-netns-ovnmeta\x2d079fe911\x2dcb70\x2d4ab3\x2d8112\x2da68496105754.mount: Deactivated successfully.
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.830 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:29:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.830 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bae90ec0-2e15-4176-907f-abeef58f7a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:29:59 np0005481065 podman[407944]: 2025-10-11 09:29:59.922330698 +0000 UTC m=+0.112312001 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 05:30:00 np0005481065 podman[407957]: 2025-10-11 09:30:00.006914105 +0000 UTC m=+0.124404282 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:30:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.7 KiB/s wr, 1 op/s
Oct 11 05:30:01 np0005481065 nova_compute[260935]: 2025-10-11 09:30:01.573 2 DEBUG nova.compute.manager [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:30:01 np0005481065 nova_compute[260935]: 2025-10-11 09:30:01.573 2 DEBUG oslo_concurrency.lockutils [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:01 np0005481065 nova_compute[260935]: 2025-10-11 09:30:01.574 2 DEBUG oslo_concurrency.lockutils [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:01 np0005481065 nova_compute[260935]: 2025-10-11 09:30:01.574 2 DEBUG oslo_concurrency.lockutils [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:01 np0005481065 nova_compute[260935]: 2025-10-11 09:30:01.574 2 DEBUG nova.compute.manager [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] No waiting events found dispatching network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:30:01 np0005481065 nova_compute[260935]: 2025-10-11 09:30:01.574 2 WARNING nova.compute.manager [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received unexpected event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:30:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 KiB/s wr, 18 op/s
Oct 11 05:30:03 np0005481065 nova_compute[260935]: 2025-10-11 09:30:03.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:04 np0005481065 nova_compute[260935]: 2025-10-11 09:30:04.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:04 np0005481065 nova_compute[260935]: 2025-10-11 09:30:04.845 2 INFO nova.virt.libvirt.driver [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deleting instance files /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5_del#033[00m
Oct 11 05:30:04 np0005481065 nova_compute[260935]: 2025-10-11 09:30:04.846 2 INFO nova.virt.libvirt.driver [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deletion of /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5_del complete#033[00m
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 17 op/s
Oct 11 05:30:05 np0005481065 nova_compute[260935]: 2025-10-11 09:30:05.154 2 INFO nova.compute.manager [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 6.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:30:05 np0005481065 nova_compute[260935]: 2025-10-11 09:30:05.155 2 DEBUG oslo.service.loopingcall [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:30:05 np0005481065 nova_compute[260935]: 2025-10-11 09:30:05.156 2 DEBUG nova.compute.manager [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:30:05 np0005481065 nova_compute[260935]: 2025-10-11 09:30:05.156 2 DEBUG nova.network.neutron [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026301752661652667 of space, bias 1.0, pg target 0.78905257984958 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:30:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:30:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 17 op/s
Oct 11 05:30:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:30:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1404 writes, 6317 keys, 1404 commit groups, 1.0 writes per commit group, ingest: 8.77 MB, 0.01 MB/s#012Interval WAL: 1404 writes, 1404 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.9      0.91              0.27        38    0.024       0      0       0.0       0.0#012  L6      1/0    7.83 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.6    162.3    136.1      2.20              1.22        37    0.059    223K    20K       0.0       0.0#012 Sum      1/0    7.83 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.6    114.8    117.3      3.11              1.48        75    0.041    223K    20K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9    120.9    120.8      0.43              0.23        10    0.043     38K   2530       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    162.3    136.1      2.20              1.22        37    0.059    223K    20K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.2      0.91              0.27        37    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.064, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 3.1 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 40.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000398 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2665,38.95 MB,12.8112%) FilterBlock(76,632.23 KB,0.203097%) IndexBlock(76,1.02 MB,0.334258%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.769 2 DEBUG nova.network.neutron [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.916 2 DEBUG nova.compute.manager [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-deleted-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.916 2 INFO nova.compute.manager [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Neutron deleted interface c6f854bc-831f-4bb1-ad9f-e1aa03343d25; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.917 2 DEBUG nova.network.neutron [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.919 2 INFO nova.compute.manager [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 3.76 seconds to deallocate network for instance.#033[00m
Oct 11 05:30:08 np0005481065 nova_compute[260935]: 2025-10-11 09:30:08.977 2 DEBUG nova.compute.manager [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Detach interface failed, port_id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25, reason: Instance af8a1ab7-7512-4de4-8493-cfe85095fbc5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.006 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.006 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.103 2 DEBUG oslo_concurrency.processutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:30:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609355747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.702 2 DEBUG oslo_concurrency.processutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.711 2 DEBUG nova.compute.provider_tree [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.755 2 DEBUG nova.scheduler.client.report [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.817 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.853 2 INFO nova.scheduler.client.report [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Deleted allocations for instance af8a1ab7-7512-4de4-8493-cfe85095fbc5#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:09 np0005481065 nova_compute[260935]: 2025-10-11 09:30:09.986 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 05:30:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:30:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 0f2bc4d2-f290-4393-81e1-63622da7154b does not exist
Oct 11 05:30:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c8e626a2-150f-4606-a299-982dfbb50cc3 does not exist
Oct 11 05:30:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9820134d-fb49-4372-9f8f-7b145d3ecaf8 does not exist
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:30:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:30:13 np0005481065 nova_compute[260935]: 2025-10-11 09:30:13.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:30:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:30:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.124178577 +0000 UTC m=+0.061287681 container create 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:30:14 np0005481065 nova_compute[260935]: 2025-10-11 09:30:14.129 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174999.1259048, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:30:14 np0005481065 nova_compute[260935]: 2025-10-11 09:30:14.130 2 INFO nova.compute.manager [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:30:14 np0005481065 systemd[1]: Started libpod-conmon-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope.
Oct 11 05:30:14 np0005481065 nova_compute[260935]: 2025-10-11 09:30:14.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.103313308 +0000 UTC m=+0.040422392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:30:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.252016185 +0000 UTC m=+0.189125309 container init 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.265053533 +0000 UTC m=+0.202162597 container start 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.269008944 +0000 UTC m=+0.206118098 container attach 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:30:14 np0005481065 systemd[1]: libpod-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope: Deactivated successfully.
Oct 11 05:30:14 np0005481065 fervent_gauss[408298]: 167 167
Oct 11 05:30:14 np0005481065 conmon[408298]: conmon 28803cf2813d347abadb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope/container/memory.events
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.276021842 +0000 UTC m=+0.213130946 container died 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:30:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d8b44d551d2ba1f06c6097fbd12ac347f7cd4c0e5686e3651f7762b29540a853-merged.mount: Deactivated successfully.
Oct 11 05:30:14 np0005481065 podman[408282]: 2025-10-11 09:30:14.335528622 +0000 UTC m=+0.272637716 container remove 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:30:14 np0005481065 systemd[1]: libpod-conmon-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope: Deactivated successfully.
Oct 11 05:30:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:14 np0005481065 nova_compute[260935]: 2025-10-11 09:30:14.374 2 DEBUG nova.compute.manager [None req-2bdc7660-e614-49d5-af34-6a77dc85d54c - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:30:14 np0005481065 podman[408322]: 2025-10-11 09:30:14.62533621 +0000 UTC m=+0.068275958 container create 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:30:14 np0005481065 systemd[1]: Started libpod-conmon-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope.
Oct 11 05:30:14 np0005481065 podman[408322]: 2025-10-11 09:30:14.597938457 +0000 UTC m=+0.040878265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:30:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:14 np0005481065 podman[408322]: 2025-10-11 09:30:14.75433109 +0000 UTC m=+0.197270878 container init 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:30:14 np0005481065 podman[408322]: 2025-10-11 09:30:14.772558975 +0000 UTC m=+0.215498733 container start 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:30:14 np0005481065 podman[408322]: 2025-10-11 09:30:14.777221736 +0000 UTC m=+0.220161544 container attach 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:30:14 np0005481065 nova_compute[260935]: 2025-10-11 09:30:14.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 05:30:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:15.227 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:15 np0005481065 stupefied_wiles[408338]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:30:15 np0005481065 stupefied_wiles[408338]: --> relative data size: 1.0
Oct 11 05:30:15 np0005481065 stupefied_wiles[408338]: --> All data devices are unavailable
Oct 11 05:30:15 np0005481065 systemd[1]: libpod-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope: Deactivated successfully.
Oct 11 05:30:15 np0005481065 systemd[1]: libpod-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope: Consumed 1.126s CPU time.
Oct 11 05:30:15 np0005481065 conmon[408338]: conmon 72595bcb372bdab64e8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope/container/memory.events
Oct 11 05:30:15 np0005481065 podman[408322]: 2025-10-11 09:30:15.96226786 +0000 UTC m=+1.405207578 container died 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:30:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03-merged.mount: Deactivated successfully.
Oct 11 05:30:16 np0005481065 podman[408322]: 2025-10-11 09:30:16.011436107 +0000 UTC m=+1.454375865 container remove 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:30:16 np0005481065 systemd[1]: libpod-conmon-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope: Deactivated successfully.
Oct 11 05:30:16 np0005481065 podman[408523]: 2025-10-11 09:30:16.868610446 +0000 UTC m=+0.037545990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:30:16 np0005481065 podman[408523]: 2025-10-11 09:30:16.96265275 +0000 UTC m=+0.131588234 container create 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:30:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 05:30:17 np0005481065 systemd[1]: Started libpod-conmon-97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738.scope.
Oct 11 05:30:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:17 np0005481065 podman[408523]: 2025-10-11 09:30:17.225231981 +0000 UTC m=+0.394167535 container init 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:30:17 np0005481065 podman[408523]: 2025-10-11 09:30:17.237550658 +0000 UTC m=+0.406486142 container start 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:30:17 np0005481065 optimistic_hypatia[408539]: 167 167
Oct 11 05:30:17 np0005481065 systemd[1]: libpod-97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738.scope: Deactivated successfully.
Oct 11 05:30:17 np0005481065 podman[408523]: 2025-10-11 09:30:17.289971448 +0000 UTC m=+0.458906932 container attach 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:30:17 np0005481065 podman[408523]: 2025-10-11 09:30:17.290607846 +0000 UTC m=+0.459543320 container died 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:30:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0306c4a6d15d2425455907a9920247194b518a871b8374b7e828707b54b05f7d-merged.mount: Deactivated successfully.
Oct 11 05:30:17 np0005481065 podman[408523]: 2025-10-11 09:30:17.648436784 +0000 UTC m=+0.817372258 container remove 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:30:17 np0005481065 systemd[1]: libpod-conmon-97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738.scope: Deactivated successfully.
Oct 11 05:30:17 np0005481065 podman[408565]: 2025-10-11 09:30:17.897478052 +0000 UTC m=+0.035805041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:30:18 np0005481065 podman[408565]: 2025-10-11 09:30:18.063658232 +0000 UTC m=+0.201985181 container create 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:30:18 np0005481065 systemd[1]: Started libpod-conmon-23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d.scope.
Oct 11 05:30:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:18 np0005481065 podman[408565]: 2025-10-11 09:30:18.274764869 +0000 UTC m=+0.413091888 container init 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:30:18 np0005481065 podman[408565]: 2025-10-11 09:30:18.281414137 +0000 UTC m=+0.419741046 container start 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:30:18 np0005481065 podman[408565]: 2025-10-11 09:30:18.332325204 +0000 UTC m=+0.470652143 container attach 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:30:18 np0005481065 nova_compute[260935]: 2025-10-11 09:30:18.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:19 np0005481065 great_wiles[408581]: {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:    "0": [
Oct 11 05:30:19 np0005481065 great_wiles[408581]:        {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "devices": [
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "/dev/loop3"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            ],
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_name": "ceph_lv0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_size": "21470642176",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "name": "ceph_lv0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "tags": {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cluster_name": "ceph",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.crush_device_class": "",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.encrypted": "0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osd_id": "0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.type": "block",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.vdo": "0"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            },
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "type": "block",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "vg_name": "ceph_vg0"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:        }
Oct 11 05:30:19 np0005481065 great_wiles[408581]:    ],
Oct 11 05:30:19 np0005481065 great_wiles[408581]:    "1": [
Oct 11 05:30:19 np0005481065 great_wiles[408581]:        {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "devices": [
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "/dev/loop4"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            ],
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_name": "ceph_lv1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_size": "21470642176",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "name": "ceph_lv1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "tags": {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cluster_name": "ceph",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.crush_device_class": "",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.encrypted": "0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osd_id": "1",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.type": "block",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.vdo": "0"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            },
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "type": "block",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "vg_name": "ceph_vg1"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:        }
Oct 11 05:30:19 np0005481065 great_wiles[408581]:    ],
Oct 11 05:30:19 np0005481065 great_wiles[408581]:    "2": [
Oct 11 05:30:19 np0005481065 great_wiles[408581]:        {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "devices": [
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "/dev/loop5"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            ],
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_name": "ceph_lv2",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_size": "21470642176",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "name": "ceph_lv2",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "tags": {
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.cluster_name": "ceph",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.crush_device_class": "",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.encrypted": "0",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osd_id": "2",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.type": "block",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:                "ceph.vdo": "0"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            },
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "type": "block",
Oct 11 05:30:19 np0005481065 great_wiles[408581]:            "vg_name": "ceph_vg2"
Oct 11 05:30:19 np0005481065 great_wiles[408581]:        }
Oct 11 05:30:19 np0005481065 great_wiles[408581]:    ]
Oct 11 05:30:19 np0005481065 great_wiles[408581]: }
Oct 11 05:30:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 05:30:19 np0005481065 systemd[1]: libpod-23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d.scope: Deactivated successfully.
Oct 11 05:30:19 np0005481065 podman[408565]: 2025-10-11 09:30:19.051462268 +0000 UTC m=+1.189789227 container died 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:30:19 np0005481065 nova_compute[260935]: 2025-10-11 09:30:19.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b-merged.mount: Deactivated successfully.
Oct 11 05:30:19 np0005481065 podman[408565]: 2025-10-11 09:30:19.974176248 +0000 UTC m=+2.112503197 container remove 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:30:20 np0005481065 systemd[1]: libpod-conmon-23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d.scope: Deactivated successfully.
Oct 11 05:30:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.727 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2 2001:db8::f816:3eff:fed0:5b95'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed0:5b95/64', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d1ec619-c133-47e3-8121-0a84b73ae16e) old=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:30:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.729 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d1ec619-c133-47e3-8121-0a84b73ae16e in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 updated#033[00m
Oct 11 05:30:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.730 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:30:20 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.731 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7050fffc-13e4-45ef-8e26-8b01bfd7234f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:20 np0005481065 podman[408744]: 2025-10-11 09:30:20.805893049 +0000 UTC m=+0.027914699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:30:20 np0005481065 podman[408744]: 2025-10-11 09:30:20.921535213 +0000 UTC m=+0.143556813 container create 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:30:21 np0005481065 systemd[1]: Started libpod-conmon-9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0.scope.
Oct 11 05:30:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:21 np0005481065 podman[408744]: 2025-10-11 09:30:21.102536691 +0000 UTC m=+0.324558341 container init 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:30:21 np0005481065 podman[408758]: 2025-10-11 09:30:21.11278116 +0000 UTC m=+0.130904005 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:30:21 np0005481065 podman[408744]: 2025-10-11 09:30:21.120139027 +0000 UTC m=+0.342160607 container start 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:30:21 np0005481065 silly_banzai[408771]: 167 167
Oct 11 05:30:21 np0005481065 systemd[1]: libpod-9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0.scope: Deactivated successfully.
Oct 11 05:30:21 np0005481065 podman[408744]: 2025-10-11 09:30:21.138565417 +0000 UTC m=+0.360587067 container attach 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:30:21 np0005481065 podman[408744]: 2025-10-11 09:30:21.139872934 +0000 UTC m=+0.361894544 container died 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:30:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bb530d299d53db85513da58367e93e72dc76083c65551a7bbe6a15d03a66746f-merged.mount: Deactivated successfully.
Oct 11 05:30:21 np0005481065 podman[408744]: 2025-10-11 09:30:21.408462634 +0000 UTC m=+0.630484204 container remove 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:30:21 np0005481065 systemd[1]: libpod-conmon-9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0.scope: Deactivated successfully.
Oct 11 05:30:21 np0005481065 podman[408806]: 2025-10-11 09:30:21.687621583 +0000 UTC m=+0.046649728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:30:21 np0005481065 podman[408806]: 2025-10-11 09:30:21.807601138 +0000 UTC m=+0.166629233 container create 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:30:21 np0005481065 systemd[1]: Started libpod-conmon-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope.
Oct 11 05:30:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:22 np0005481065 podman[408806]: 2025-10-11 09:30:22.091452629 +0000 UTC m=+0.450480764 container init 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:30:22 np0005481065 podman[408806]: 2025-10-11 09:30:22.102936423 +0000 UTC m=+0.461964488 container start 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:30:22 np0005481065 podman[408806]: 2025-10-11 09:30:22.217093225 +0000 UTC m=+0.576121320 container attach 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:30:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]: {
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "osd_id": 2,
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "type": "bluestore"
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:    },
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "osd_id": 0,
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "type": "bluestore"
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:    },
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "osd_id": 1,
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:        "type": "bluestore"
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]:    }
Oct 11 05:30:23 np0005481065 adoring_mccarthy[408824]: }
Oct 11 05:30:23 np0005481065 systemd[1]: libpod-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope: Deactivated successfully.
Oct 11 05:30:23 np0005481065 podman[408806]: 2025-10-11 09:30:23.258739701 +0000 UTC m=+1.617767786 container died 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:30:23 np0005481065 systemd[1]: libpod-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope: Consumed 1.154s CPU time.
Oct 11 05:30:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0-merged.mount: Deactivated successfully.
Oct 11 05:30:23 np0005481065 nova_compute[260935]: 2025-10-11 09:30:23.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:23 np0005481065 podman[408806]: 2025-10-11 09:30:23.8157327 +0000 UTC m=+2.174760795 container remove 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:30:23 np0005481065 systemd[1]: libpod-conmon-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope: Deactivated successfully.
Oct 11 05:30:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:30:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:30:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:30:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:30:23 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev bd50cfd7-41c4-4188-bee8-1c32a6d012d4 does not exist
Oct 11 05:30:23 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 539918a0-1ed3-4312-85b6-0aa865390a13 does not exist
Oct 11 05:30:24 np0005481065 nova_compute[260935]: 2025-10-11 09:30:24.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:24Z|01502|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:30:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:24Z|01503|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:30:24 np0005481065 nova_compute[260935]: 2025-10-11 09:30:24.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:24Z|01504|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:30:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:24Z|01505|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:30:24 np0005481065 nova_compute[260935]: 2025-10-11 09:30:24.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:30:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:30:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:30:24 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:30:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:25 np0005481065 nova_compute[260935]: 2025-10-11 09:30:25.398 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:30:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/638731363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:30:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:30:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/638731363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:30:26 np0005481065 podman[408924]: 2025-10-11 09:30:26.824739516 +0000 UTC m=+0.113371211 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:30:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:28 np0005481065 nova_compute[260935]: 2025-10-11 09:30:28.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.943 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2 2001:db8:0:1:f816:3eff:fed0:5b95 2001:db8::f816:3eff:fed0:5b95'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fed0:5b95/64 2001:db8::f816:3eff:fed0:5b95/64', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d1ec619-c133-47e3-8121-0a84b73ae16e) old=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2 2001:db8::f816:3eff:fed0:5b95'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed0:5b95/64', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:30:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.945 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d1ec619-c133-47e3-8121-0a84b73ae16e in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 updated#033[00m
Oct 11 05:30:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.948 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:30:28 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16d5d547-5ba9-4f28-ae70-fa51557bc25b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:29 np0005481065 nova_compute[260935]: 2025-10-11 09:30:29.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:30 np0005481065 podman[408947]: 2025-10-11 09:30:30.813738118 +0000 UTC m=+0.099548731 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:30:30 np0005481065 podman[408948]: 2025-10-11 09:30:30.902810991 +0000 UTC m=+0.184115417 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 11 05:30:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:33 np0005481065 nova_compute[260935]: 2025-10-11 09:30:33.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:34 np0005481065 nova_compute[260935]: 2025-10-11 09:30:34.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:36 np0005481065 nova_compute[260935]: 2025-10-11 09:30:36.853 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:36 np0005481065 nova_compute[260935]: 2025-10-11 09:30:36.854 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:37 np0005481065 nova_compute[260935]: 2025-10-11 09:30:37.079 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:30:37 np0005481065 nova_compute[260935]: 2025-10-11 09:30:37.306 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:37 np0005481065 nova_compute[260935]: 2025-10-11 09:30:37.307 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:37 np0005481065 nova_compute[260935]: 2025-10-11 09:30:37.319 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:30:37 np0005481065 nova_compute[260935]: 2025-10-11 09:30:37.319 2 INFO nova.compute.claims [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:30:37 np0005481065 nova_compute[260935]: 2025-10-11 09:30:37.763 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:30:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677823553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.256 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.264 2 DEBUG nova.compute.provider_tree [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.327 2 DEBUG nova.scheduler.client.report [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.386 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.388 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.529 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.530 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.633 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.748 2 DEBUG nova.policy [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.770 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.987 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.989 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:30:38 np0005481065 nova_compute[260935]: 2025-10-11 09:30:38.990 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Creating image(s)#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.026 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.064 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.094 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.098 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.194 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.196 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.197 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.198 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.390 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.396 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 811aca81-b712-4b96-a66c-8108b7791b3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:39 np0005481065 nova_compute[260935]: 2025-10-11 09:30:39.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:40 np0005481065 nova_compute[260935]: 2025-10-11 09:30:40.002 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Successfully created port: 41c97ff4-4ad5-4d35-ac33-d083a904f55a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:30:40 np0005481065 nova_compute[260935]: 2025-10-11 09:30:40.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:40 np0005481065 nova_compute[260935]: 2025-10-11 09:30:40.706 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 811aca81-b712-4b96-a66c-8108b7791b3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:40 np0005481065 nova_compute[260935]: 2025-10-11 09:30:40.795 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:30:40 np0005481065 nova_compute[260935]: 2025-10-11 09:30:40.923 2 DEBUG nova.objects.instance [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 811aca81-b712-4b96-a66c-8108b7791b3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:30:41 np0005481065 nova_compute[260935]: 2025-10-11 09:30:41.049 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:30:41 np0005481065 nova_compute[260935]: 2025-10-11 09:30:41.050 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Ensure instance console log exists: /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:30:41 np0005481065 nova_compute[260935]: 2025-10-11 09:30:41.050 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:41 np0005481065 nova_compute[260935]: 2025-10-11 09:30:41.051 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:41 np0005481065 nova_compute[260935]: 2025-10-11 09:30:41.051 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:30:42 np0005481065 nova_compute[260935]: 2025-10-11 09:30:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:42 np0005481065 nova_compute[260935]: 2025-10-11 09:30:42.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:30:42 np0005481065 nova_compute[260935]: 2025-10-11 09:30:42.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:30:42 np0005481065 nova_compute[260935]: 2025-10-11 09:30:42.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:30:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:30:43 np0005481065 nova_compute[260935]: 2025-10-11 09:30:43.488 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:30:43 np0005481065 nova_compute[260935]: 2025-10-11 09:30:43.488 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:30:43 np0005481065 nova_compute[260935]: 2025-10-11 09:30:43.489 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:30:43 np0005481065 nova_compute[260935]: 2025-10-11 09:30:43.489 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:30:43 np0005481065 nova_compute[260935]: 2025-10-11 09:30:43.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:44 np0005481065 nova_compute[260935]: 2025-10-11 09:30:44.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.049 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Successfully updated port: 41c97ff4-4ad5-4d35-ac33-d083a904f55a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:30:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.062 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.062 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.063 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.184 2 DEBUG nova.compute.manager [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.185 2 DEBUG nova.compute.manager [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing instance network info cache due to event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.185 2 DEBUG oslo_concurrency.lockutils [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:30:45 np0005481065 nova_compute[260935]: 2025-10-11 09:30:45.257 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.241 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.403 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.403 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:30:46 np0005481065 nova_compute[260935]: 2025-10-11 09:30:46.743 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:30:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:30:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825597475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.270 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.410 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.411 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.412 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.418 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.418 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.424 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.424 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.687 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.689 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2837MB free_disk=59.80991744995117GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.689 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.689 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.762 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.909 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.910 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance network_info: |[{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.910 2 DEBUG oslo_concurrency.lockutils [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.911 2 DEBUG nova.network.neutron [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.914 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start _get_guest_xml network_info=[{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.920 2 WARNING nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.925 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.926 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.929 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.929 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.930 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.930 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.932 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.932 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.932 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.933 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.933 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.933 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:30:47 np0005481065 nova_compute[260935]: 2025-10-11 09:30:47.936 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:30:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272243793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.448 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.485 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.490 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.636 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.637 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.637 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.638 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 811aca81-b712-4b96-a66c-8108b7791b3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.638 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.639 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:30:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:30:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/575589019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.954 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.957 2 DEBUG nova.virt.libvirt.vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:30:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-739529148',display_name='tempest-TestGettingAddress-server-739529148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-739529148',id=135,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-jlin4bys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:30:38Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=811aca81-b712-4b96-a66c-8108b7791b3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.958 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.960 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:30:48 np0005481065 nova_compute[260935]: 2025-10-11 09:30:48.962 2 DEBUG nova.objects.instance [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 811aca81-b712-4b96-a66c-8108b7791b3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.019 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <uuid>811aca81-b712-4b96-a66c-8108b7791b3c</uuid>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <name>instance-00000087</name>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-739529148</nova:name>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:30:47</nova:creationTime>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <nova:port uuid="41c97ff4-4ad5-4d35-ac33-d083a904f55a">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee6:b0e4" ipVersion="6"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee6:b0e4" ipVersion="6"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <entry name="serial">811aca81-b712-4b96-a66c-8108b7791b3c</entry>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <entry name="uuid">811aca81-b712-4b96-a66c-8108b7791b3c</entry>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/811aca81-b712-4b96-a66c-8108b7791b3c_disk">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/811aca81-b712-4b96-a66c-8108b7791b3c_disk.config">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:e6:b0:e4"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <target dev="tap41c97ff4-4a"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/console.log" append="off"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:30:49 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:30:49 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:30:49 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:30:49 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.021 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Preparing to wait for external event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.021 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.022 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.022 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.023 2 DEBUG nova.virt.libvirt.vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:30:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-739529148',display_name='tempest-TestGettingAddress-server-739529148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-739529148',id=135,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-jlin4bys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:30:38Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=811aca81-b712-4b96-a66c-8108b7791b3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.023 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.024 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.025 2 DEBUG os_vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c97ff4-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41c97ff4-4a, col_values=(('external_ids', {'iface-id': '41c97ff4-4ad5-4d35-ac33-d083a904f55a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:b0:e4', 'vm-uuid': '811aca81-b712-4b96-a66c-8108b7791b3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:49 np0005481065 NetworkManager[44960]: <info>  [1760175049.0349] manager: (tap41c97ff4-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.042 2 INFO os_vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a')#033[00m
Oct 11 05:30:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.309 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.310 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.310 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:e6:b0:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.311 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Using config drive#033[00m
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.350 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:49 np0005481065 nova_compute[260935]: 2025-10-11 09:30:49.968 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:30:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882794878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.450 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Creating config drive at /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.459 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpourchqpe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.503 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.512 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.610 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpourchqpe" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.635 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.638 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.689 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.842 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.843 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deleting local config drive /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config because it was imported into RBD.#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.855 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.857 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:50 np0005481065 kernel: tap41c97ff4-4a: entered promiscuous mode
Oct 11 05:30:50 np0005481065 NetworkManager[44960]: <info>  [1760175050.9235] manager: (tap41c97ff4-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/581)
Oct 11 05:30:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:50Z|01506|binding|INFO|Claiming lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a for this chassis.
Oct 11 05:30:50 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:50Z|01507|binding|INFO|41c97ff4-4ad5-4d35-ac33-d083a904f55a: Claiming fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:50 np0005481065 nova_compute[260935]: 2025-10-11 09:30:50.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:51 np0005481065 systemd-machined[215705]: New machine qemu-159-instance-00000087.
Oct 11 05:30:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.073 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], port_security=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fee6:b0e4/64 2001:db8::f816:3eff:fee6:b0e4/64', 'neutron:device_id': '811aca81-b712-4b96-a66c-8108b7791b3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=41c97ff4-4ad5-4d35-ac33-d083a904f55a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.074 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 41c97ff4-4ad5-4d35-ac33-d083a904f55a in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 bound to our chassis#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.076 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8#033[00m
Oct 11 05:30:51 np0005481065 systemd[1]: Started Virtual Machine qemu-159-instance-00000087.
Oct 11 05:30:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:51Z|01508|binding|INFO|Setting lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a ovn-installed in OVS
Oct 11 05:30:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:51Z|01509|binding|INFO|Setting lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a up in Southbound
Oct 11 05:30:51 np0005481065 systemd-udevd[409364]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61ff9053-c79f-45ae-9c17-5135d7356830]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.089 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8c83697-e1 in ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.091 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8c83697-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b39212b-bb57-4000-90d8-3bc59445d6b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.093 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c19adee0-68a6-4573-8c9c-6fd9d50a60ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 NetworkManager[44960]: <info>  [1760175051.0979] device (tap41c97ff4-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:30:51 np0005481065 NetworkManager[44960]: <info>  [1760175051.0989] device (tap41c97ff4-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8d9aa4-d8b5-4721-9804-89836468bb96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[51f324df-80fc-4e35-b571-749bc97e7762]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.186 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3f18cd74-91d9-482a-8552-a349bf3b4264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 NetworkManager[44960]: <info>  [1760175051.1945] manager: (tapb8c83697-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/582)
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.193 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[372e0502-50cb-48a4-b463-c5d4574bc39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.236 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[19ac61b0-3954-4fb6-acad-ea273c85221c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.239 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4619cb93-e200-4789-8b2f-91a2df00a7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 NetworkManager[44960]: <info>  [1760175051.2641] device (tapb8c83697-e0): carrier: link connected
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.269 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0f81a09f-2621-48a4-b192-65764b79b963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.286 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f744505-0c47-4724-818f-dc37b56cf45a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409405, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 podman[409387]: 2025-10-11 09:30:51.297408645 +0000 UTC m=+0.062356800 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.301 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e50918c8-c95f-4509-8224-738d1ee0acf6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:5b95'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689596, 'tstamp': 689596}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409414, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8752176-e613-453f-999b-7fb980293911]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409416, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dee893-8bc3-4788-9e97-9335f832e7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.415 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df902823-ca22-4ac4-8585-e6416daac4d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.416 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.416 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.417 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c83697-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:30:51 np0005481065 NetworkManager[44960]: <info>  [1760175051.4195] manager: (tapb8c83697-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:51 np0005481065 kernel: tapb8c83697-e0: entered promiscuous mode
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.424 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8c83697-e0, col_values=(('external_ids', {'iface-id': '3d1ec619-c133-47e3-8121-0a84b73ae16e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:30:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:51Z|01510|binding|INFO|Releasing lport 3d1ec619-c133-47e3-8121-0a84b73ae16e from this chassis (sb_readonly=0)
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.453 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fad1b6a3-aaef-4fa7-b1df-10d2da75d112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.454 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.pid.haproxy
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:30:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.455 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'env', 'PROCESS_TAG=haproxy-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.740 2 DEBUG nova.compute.manager [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.741 2 DEBUG oslo_concurrency.lockutils [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.741 2 DEBUG oslo_concurrency.lockutils [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.741 2 DEBUG oslo_concurrency.lockutils [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.742 2 DEBUG nova.compute.manager [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Processing event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.840 2 DEBUG nova.network.neutron [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updated VIF entry in instance network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.841 2 DEBUG nova.network.neutron [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:30:51 np0005481065 nova_compute[260935]: 2025-10-11 09:30:51.857 2 DEBUG oslo_concurrency.lockutils [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:30:51 np0005481065 podman[409490]: 2025-10-11 09:30:51.897877971 +0000 UTC m=+0.081826300 container create cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:30:51 np0005481065 systemd[1]: Started libpod-conmon-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9.scope.
Oct 11 05:30:51 np0005481065 podman[409490]: 2025-10-11 09:30:51.859717214 +0000 UTC m=+0.043665623 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:30:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:30:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5db13615b0c34d4d58a71f6df0fd0ba5744113047ad9855ac3fe68117d7b25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:30:52 np0005481065 podman[409490]: 2025-10-11 09:30:52.004176951 +0000 UTC m=+0.188125310 container init cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:30:52 np0005481065 podman[409490]: 2025-10-11 09:30:52.014991666 +0000 UTC m=+0.198939985 container start cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:30:52 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : New worker (409511) forked
Oct 11 05:30:52 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : Loading success.
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.116 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175052.115802, 811aca81-b712-4b96-a66c-8108b7791b3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.117 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Started (Lifecycle Event)#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.120 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.123 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.127 2 INFO nova.virt.libvirt.driver [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance spawned successfully.#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.127 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.142 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.148 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.152 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.153 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.153 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.154 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.154 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.155 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.178 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.178 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175052.115922, 811aca81-b712-4b96-a66c-8108b7791b3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.178 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.210 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.213 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175052.1228755, 811aca81-b712-4b96-a66c-8108b7791b3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.214 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.219 2 INFO nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 13.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.219 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.231 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.234 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.254 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.273 2 INFO nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 15.01 seconds to build instance.#033[00m
Oct 11 05:30:52 np0005481065 nova_compute[260935]: 2025-10-11 09:30:52.290 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.825 2 DEBUG nova.compute.manager [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.826 2 DEBUG oslo_concurrency.lockutils [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.826 2 DEBUG oslo_concurrency.lockutils [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.827 2 DEBUG oslo_concurrency.lockutils [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.827 2 DEBUG nova.compute.manager [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] No waiting events found dispatching network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:30:53 np0005481065 nova_compute[260935]: 2025-10-11 09:30:53.828 2 WARNING nova.compute.manager [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received unexpected event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a for instance with vm_state active and task_state None.#033[00m
Oct 11 05:30:54 np0005481065 nova_compute[260935]: 2025-10-11 09:30:54.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:30:54
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Oct 11 05:30:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 05:30:55 np0005481065 nova_compute[260935]: 2025-10-11 09:30:55.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:55.516 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:30:55 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:55.519 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:30:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:30:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:56Z|01511|binding|INFO|Releasing lport 3d1ec619-c133-47e3-8121-0a84b73ae16e from this chassis (sb_readonly=0)
Oct 11 05:30:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:56Z|01512|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:30:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:56Z|01513|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:30:56 np0005481065 NetworkManager[44960]: <info>  [1760175056.1689] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Oct 11 05:30:56 np0005481065 NetworkManager[44960]: <info>  [1760175056.1695] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:56Z|01514|binding|INFO|Releasing lport 3d1ec619-c133-47e3-8121-0a84b73ae16e from this chassis (sb_readonly=0)
Oct 11 05:30:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:56Z|01515|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:30:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:30:56Z|01516|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.569 2 DEBUG nova.compute.manager [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.570 2 DEBUG nova.compute.manager [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing instance network info cache due to event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.571 2 DEBUG oslo_concurrency.lockutils [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.571 2 DEBUG oslo_concurrency.lockutils [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:30:56 np0005481065 nova_compute[260935]: 2025-10-11 09:30:56.572 2 DEBUG nova.network.neutron [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:30:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 05:30:57 np0005481065 podman[409521]: 2025-10-11 09:30:57.798952823 +0000 UTC m=+0.095942219 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 11 05:30:58 np0005481065 nova_compute[260935]: 2025-10-11 09:30:58.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:58 np0005481065 nova_compute[260935]: 2025-10-11 09:30:58.902 2 DEBUG nova.network.neutron [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updated VIF entry in instance network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:30:58 np0005481065 nova_compute[260935]: 2025-10-11 09:30:58.903 2 DEBUG nova.network.neutron [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:30:58 np0005481065 nova_compute[260935]: 2025-10-11 09:30:58.927 2 DEBUG oslo_concurrency.lockutils [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:30:59 np0005481065 nova_compute[260935]: 2025-10-11 09:30:59.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:30:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:30:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:30:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:30:59.521 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:31:01 np0005481065 podman[409541]: 2025-10-11 09:31:01.793459421 +0000 UTC m=+0.091797652 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:31:01 np0005481065 podman[409542]: 2025-10-11 09:31:01.845355065 +0000 UTC m=+0.144450607 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 05:31:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 05:31:03 np0005481065 nova_compute[260935]: 2025-10-11 09:31:03.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:04 np0005481065 nova_compute[260935]: 2025-10-11 09:31:04.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:04Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:b0:e4 10.100.0.14
Oct 11 05:31:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:04Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:b0:e4 10.100.0.14
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:31:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:31:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 05:31:08 np0005481065 nova_compute[260935]: 2025-10-11 09:31:08.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:09 np0005481065 nova_compute[260935]: 2025-10-11 09:31:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 05:31:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:31:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 05:31:13 np0005481065 nova_compute[260935]: 2025-10-11 09:31:13.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:14 np0005481065 nova_compute[260935]: 2025-10-11 09:31:14.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:31:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:15.227 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:15.228 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.066 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.067 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.099 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.209 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.210 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.221 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.222 2 INFO nova.compute.claims [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.458 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:31:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394472828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.899 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.909 2 DEBUG nova.compute.provider_tree [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.927 2 DEBUG nova.scheduler.client.report [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.948 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:16 np0005481065 nova_compute[260935]: 2025-10-11 09:31:16.949 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.003 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.004 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.037 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.062 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:31:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.172 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.173 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.174 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Creating image(s)#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.208 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.235 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.274 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.278 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.389 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.390 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.391 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.391 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.425 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.429 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 30069bfa-2edc-4f4c-b685-233b65a11de1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.504 2 DEBUG nova.policy [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.854 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 30069bfa-2edc-4f4c-b685-233b65a11de1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:17 np0005481065 nova_compute[260935]: 2025-10-11 09:31:17.947 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.074 2 DEBUG nova.objects.instance [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 30069bfa-2edc-4f4c-b685-233b65a11de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.095 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.096 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Ensure instance console log exists: /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.097 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.097 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.098 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.400 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Successfully created port: faf6491c-2fe9-4559-bb38-e0b4068de1b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:31:18 np0005481065 nova_compute[260935]: 2025-10-11 09:31:18.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.136360) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079136394, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3346756, "memory_usage": 3404496, "flush_reason": "Manual Compaction"}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079154025, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3291039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54346, "largest_seqno": 56398, "table_properties": {"data_size": 3281726, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18898, "raw_average_key_size": 20, "raw_value_size": 3263231, "raw_average_value_size": 3478, "num_data_blocks": 260, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174856, "oldest_key_time": 1760174856, "file_creation_time": 1760175079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 17704 microseconds, and 6091 cpu microseconds.
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.154064) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3291039 bytes OK
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.154083) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156079) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156091) EVENT_LOG_v1 {"time_micros": 1760175079156087, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3338158, prev total WAL file size 3338158, number of live WAL files 2.
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3213KB)], [128(8012KB)]
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079156993, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11496309, "oldest_snapshot_seqno": -1}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7664 keys, 9833867 bytes, temperature: kUnknown
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079228279, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9833867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9784118, "index_size": 29476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 199059, "raw_average_key_size": 25, "raw_value_size": 9648641, "raw_average_value_size": 1258, "num_data_blocks": 1149, "num_entries": 7664, "num_filter_entries": 7664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.228691) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9833867 bytes
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.230332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 137.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8178, records dropped: 514 output_compression: NoCompression
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.230356) EVENT_LOG_v1 {"time_micros": 1760175079230342, "job": 78, "event": "compaction_finished", "compaction_time_micros": 71427, "compaction_time_cpu_micros": 29221, "output_level": 6, "num_output_files": 1, "total_output_size": 9833867, "num_input_records": 8178, "num_output_records": 7664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079231247, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079232870, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.351 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Successfully updated port: faf6491c-2fe9-4559-bb38-e0b4068de1b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:31:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.438 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.438 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.440 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.495 2 DEBUG nova.compute.manager [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.495 2 DEBUG nova.compute.manager [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing instance network info cache due to event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.496 2 DEBUG oslo_concurrency.lockutils [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:31:19 np0005481065 nova_compute[260935]: 2025-10-11 09:31:19.642 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:31:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 11 05:31:21 np0005481065 podman[409776]: 2025-10-11 09:31:21.780990521 +0000 UTC m=+0.075301896 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.893 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.914 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.915 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance network_info: |[{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.915 2 DEBUG oslo_concurrency.lockutils [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.916 2 DEBUG nova.network.neutron [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.921 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start _get_guest_xml network_info=[{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.927 2 WARNING nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.936 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.937 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.940 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:31:21 np0005481065 nova_compute[260935]: 2025-10-11 09:31:21.946 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:31:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192263090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.452 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.480 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.484 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:31:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870550153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.980 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.983 2 DEBUG nova.virt.libvirt.vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:31:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1320880782',display_name='tempest-TestGettingAddress-server-1320880782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1320880782',id=136,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-fmtjcp7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:31:17Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=30069bfa-2edc-4f4c-b685-233b65a11de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.984 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.986 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:31:22 np0005481065 nova_compute[260935]: 2025-10-11 09:31:22.989 2 DEBUG nova.objects.instance [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 30069bfa-2edc-4f4c-b685-233b65a11de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.015 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <uuid>30069bfa-2edc-4f4c-b685-233b65a11de1</uuid>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <name>instance-00000088</name>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1320880782</nova:name>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:31:21</nova:creationTime>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <nova:port uuid="faf6491c-2fe9-4559-bb38-e0b4068de1b5">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe73:79c8" ipVersion="6"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe73:79c8" ipVersion="6"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <entry name="serial">30069bfa-2edc-4f4c-b685-233b65a11de1</entry>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <entry name="uuid">30069bfa-2edc-4f4c-b685-233b65a11de1</entry>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/30069bfa-2edc-4f4c-b685-233b65a11de1_disk">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:73:79:c8"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <target dev="tapfaf6491c-2f"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/console.log" append="off"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:31:23 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:31:23 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:31:23 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:31:23 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.017 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Preparing to wait for external event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.018 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.018 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.019 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.020 2 DEBUG nova.virt.libvirt.vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:31:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1320880782',display_name='tempest-TestGettingAddress-server-1320880782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1320880782',id=136,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-fmtjcp7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:31:17Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=30069bfa-2edc-4f4c-b685-233b65a11de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.021 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.022 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.023 2 DEBUG os_vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf6491c-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaf6491c-2f, col_values=(('external_ids', {'iface-id': 'faf6491c-2fe9-4559-bb38-e0b4068de1b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:79:c8', 'vm-uuid': '30069bfa-2edc-4f4c-b685-233b65a11de1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:23 np0005481065 NetworkManager[44960]: <info>  [1760175083.0380] manager: (tapfaf6491c-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.045 2 INFO os_vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f')#033[00m
Oct 11 05:31:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.141 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.141 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.142 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:73:79:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.143 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Using config drive#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.180 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.674 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Creating config drive at /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.685 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04q_nnrl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.860 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04q_nnrl" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.904 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:31:23 np0005481065 nova_compute[260935]: 2025-10-11 09:31:23.910 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.366 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.369 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deleting local config drive /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config because it was imported into RBD.#033[00m
Oct 11 05:31:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:24 np0005481065 kernel: tapfaf6491c-2f: entered promiscuous mode
Oct 11 05:31:24 np0005481065 NetworkManager[44960]: <info>  [1760175084.4517] manager: (tapfaf6491c-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Oct 11 05:31:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:24Z|01517|binding|INFO|Claiming lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 for this chassis.
Oct 11 05:31:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:24Z|01518|binding|INFO|faf6491c-2fe9-4559-bb38-e0b4068de1b5: Claiming fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:24Z|01519|binding|INFO|Setting lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 ovn-installed in OVS
Oct 11 05:31:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:24Z|01520|binding|INFO|Setting lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 up in Southbound
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.475 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], port_security=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe73:79c8/64 2001:db8::f816:3eff:fe73:79c8/64', 'neutron:device_id': '30069bfa-2edc-4f4c-b685-233b65a11de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=faf6491c-2fe9-4559-bb38-e0b4068de1b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.476 162815 INFO neutron.agent.ovn.metadata.agent [-] Port faf6491c-2fe9-4559-bb38-e0b4068de1b5 in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 bound to our chassis#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.478 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8#033[00m
Oct 11 05:31:24 np0005481065 systemd-udevd[410004]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.503 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea55ba6-3bcc-4781-9c30-c60913998ed7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:24 np0005481065 systemd-machined[215705]: New machine qemu-160-instance-00000088.
Oct 11 05:31:24 np0005481065 NetworkManager[44960]: <info>  [1760175084.5126] device (tapfaf6491c-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:31:24 np0005481065 NetworkManager[44960]: <info>  [1760175084.5144] device (tapfaf6491c-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:31:24 np0005481065 systemd[1]: Started Virtual Machine qemu-160-instance-00000088.
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.546 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49ff8a6d-1ede-4a39-9392-68a632150592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.550 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c2dcc9-811a-4a61-89b8-04a6a195f3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.582 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[861855b2-c69c-48d6-80f8-b61a35d0a4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.609 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d270db61-3155-41be-af1e-d7f6f44b84dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410044, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a57096f4-11fb-439c-bd4e-9ef1eaf19f70]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689607, 'tstamp': 689607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410046, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689611, 'tstamp': 689611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410046, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.648 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.696 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c83697-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.697 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.697 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8c83697-e0, col_values=(('external_ids', {'iface-id': '3d1ec619-c133-47e3-8121-0a84b73ae16e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.698 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.790 2 DEBUG nova.network.neutron [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updated VIF entry in instance network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.791 2 DEBUG nova.network.neutron [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:24 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.809 2 DEBUG oslo_concurrency.lockutils [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:31:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.998 2 DEBUG nova.compute.manager [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.999 2 DEBUG oslo_concurrency.lockutils [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:24.999 2 DEBUG oslo_concurrency.lockutils [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.001 2 DEBUG oslo_concurrency.lockutils [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.001 2 DEBUG nova.compute.manager [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Processing event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:31:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:31:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9dfa91e0-16fd-452f-859d-0829f17462f2 does not exist
Oct 11 05:31:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2698604d-60e9-42a6-a321-6c3eb622df1c does not exist
Oct 11 05:31:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b7eb5832-8404-4f2f-8711-427b6b3573f8 does not exist
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:31:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.766 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.767 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175085.7671936, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.767 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Started (Lifecycle Event)#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.771 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.775 2 INFO nova.virt.libvirt.driver [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance spawned successfully.#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.776 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.808 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.813 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.814 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.814 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.814 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.815 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.815 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.819 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.856 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.857 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175085.769778, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.857 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.876 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.882 2 INFO nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 8.71 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.882 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.883 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175085.770087, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.884 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:31:25 np0005481065 nova_compute[260935]: 2025-10-11 09:31:25.972 2 INFO nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 9.80 seconds to build instance.#033[00m
Oct 11 05:31:26 np0005481065 nova_compute[260935]: 2025-10-11 09:31:26.018 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.181151817 +0000 UTC m=+0.058549614 container create 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:31:26 np0005481065 systemd[1]: Started libpod-conmon-3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b.scope.
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.153071704 +0000 UTC m=+0.030469591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:31:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.323122933 +0000 UTC m=+0.200520800 container init 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.336767858 +0000 UTC m=+0.214165695 container start 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:31:26 np0005481065 strange_mestorf[410279]: 167 167
Oct 11 05:31:26 np0005481065 systemd[1]: libpod-3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b.scope: Deactivated successfully.
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.414242825 +0000 UTC m=+0.291640722 container attach 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.415674795 +0000 UTC m=+0.293072612 container died 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 05:31:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cd7d6e83847a0ca50a029083374e8000c6cd4caeb51463faa001b13d2b821a60-merged.mount: Deactivated successfully.
Oct 11 05:31:26 np0005481065 podman[410262]: 2025-10-11 09:31:26.614428284 +0000 UTC m=+0.491826111 container remove 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:31:26 np0005481065 systemd[1]: libpod-conmon-3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b.scope: Deactivated successfully.
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2698628905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:31:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2698628905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:31:26 np0005481065 podman[410305]: 2025-10-11 09:31:26.986657079 +0000 UTC m=+0.120107111 container create ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:31:27 np0005481065 podman[410305]: 2025-10-11 09:31:26.935650079 +0000 UTC m=+0.069100191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:31:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:31:27 np0005481065 systemd[1]: Started libpod-conmon-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope.
Oct 11 05:31:27 np0005481065 nova_compute[260935]: 2025-10-11 09:31:27.129 2 DEBUG nova.compute.manager [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:27 np0005481065 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG oslo_concurrency.lockutils [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:27 np0005481065 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG oslo_concurrency.lockutils [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:27 np0005481065 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG oslo_concurrency.lockutils [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:27 np0005481065 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG nova.compute.manager [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] No waiting events found dispatching network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:31:27 np0005481065 nova_compute[260935]: 2025-10-11 09:31:27.131 2 WARNING nova.compute.manager [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received unexpected event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:31:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:31:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:27 np0005481065 podman[410305]: 2025-10-11 09:31:27.207183942 +0000 UTC m=+0.340634004 container init ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:31:27 np0005481065 podman[410305]: 2025-10-11 09:31:27.216448334 +0000 UTC m=+0.349898366 container start ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:31:27 np0005481065 podman[410305]: 2025-10-11 09:31:27.307251696 +0000 UTC m=+0.440701768 container attach ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:31:28 np0005481065 nova_compute[260935]: 2025-10-11 09:31:28.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:28 np0005481065 gifted_bouman[410322]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:31:28 np0005481065 gifted_bouman[410322]: --> relative data size: 1.0
Oct 11 05:31:28 np0005481065 gifted_bouman[410322]: --> All data devices are unavailable
Oct 11 05:31:28 np0005481065 systemd[1]: libpod-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope: Deactivated successfully.
Oct 11 05:31:28 np0005481065 systemd[1]: libpod-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope: Consumed 1.088s CPU time.
Oct 11 05:31:28 np0005481065 podman[410305]: 2025-10-11 09:31:28.353860452 +0000 UTC m=+1.487310534 container died ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:31:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20-merged.mount: Deactivated successfully.
Oct 11 05:31:28 np0005481065 nova_compute[260935]: 2025-10-11 09:31:28.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:28 np0005481065 nova_compute[260935]: 2025-10-11 09:31:28.859 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:28 np0005481065 podman[410305]: 2025-10-11 09:31:28.872942 +0000 UTC m=+2.006392062 container remove ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:31:28 np0005481065 podman[410351]: 2025-10-11 09:31:28.935940988 +0000 UTC m=+0.541260595 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:31:28 np0005481065 systemd[1]: libpod-conmon-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope: Deactivated successfully.
Oct 11 05:31:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:31:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:29 np0005481065 podman[410524]: 2025-10-11 09:31:29.757120273 +0000 UTC m=+0.076588863 container create 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:31:29 np0005481065 podman[410524]: 2025-10-11 09:31:29.710882548 +0000 UTC m=+0.030351208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:31:29 np0005481065 systemd[1]: Started libpod-conmon-8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6.scope.
Oct 11 05:31:29 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:31:30 np0005481065 podman[410524]: 2025-10-11 09:31:30.026295239 +0000 UTC m=+0.345763869 container init 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:31:30 np0005481065 podman[410524]: 2025-10-11 09:31:30.036837047 +0000 UTC m=+0.356305627 container start 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:31:30 np0005481065 silly_euler[410540]: 167 167
Oct 11 05:31:30 np0005481065 systemd[1]: libpod-8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6.scope: Deactivated successfully.
Oct 11 05:31:30 np0005481065 podman[410524]: 2025-10-11 09:31:30.083257677 +0000 UTC m=+0.402726267 container attach 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:31:30 np0005481065 podman[410524]: 2025-10-11 09:31:30.083757501 +0000 UTC m=+0.403226101 container died 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 05:31:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-916e653e41d4d725cb804cbf57b04c6a6722234b83058e3fc5ba1fa89aa0f031-merged.mount: Deactivated successfully.
Oct 11 05:31:30 np0005481065 podman[410524]: 2025-10-11 09:31:30.464427664 +0000 UTC m=+0.783896254 container remove 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 05:31:30 np0005481065 systemd[1]: libpod-conmon-8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6.scope: Deactivated successfully.
Oct 11 05:31:30 np0005481065 podman[410567]: 2025-10-11 09:31:30.810685385 +0000 UTC m=+0.115163801 container create e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:31:30 np0005481065 podman[410567]: 2025-10-11 09:31:30.743697195 +0000 UTC m=+0.048175661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:31:31 np0005481065 systemd[1]: Started libpod-conmon-e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4.scope.
Oct 11 05:31:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:31:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:31:31 np0005481065 podman[410567]: 2025-10-11 09:31:31.114682674 +0000 UTC m=+0.419161070 container init e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:31:31 np0005481065 podman[410567]: 2025-10-11 09:31:31.129660597 +0000 UTC m=+0.434138983 container start e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:31:31 np0005481065 podman[410567]: 2025-10-11 09:31:31.23675907 +0000 UTC m=+0.541237466 container attach e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]: {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:    "0": [
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:        {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "devices": [
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "/dev/loop3"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            ],
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_name": "ceph_lv0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_size": "21470642176",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "name": "ceph_lv0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "tags": {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cluster_name": "ceph",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.crush_device_class": "",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.encrypted": "0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osd_id": "0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.type": "block",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.vdo": "0"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            },
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "type": "block",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "vg_name": "ceph_vg0"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:        }
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:    ],
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:    "1": [
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:        {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "devices": [
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "/dev/loop4"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            ],
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_name": "ceph_lv1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_size": "21470642176",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "name": "ceph_lv1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "tags": {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cluster_name": "ceph",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.crush_device_class": "",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.encrypted": "0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osd_id": "1",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.type": "block",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.vdo": "0"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            },
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "type": "block",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "vg_name": "ceph_vg1"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:        }
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:    ],
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:    "2": [
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:        {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "devices": [
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "/dev/loop5"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            ],
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_name": "ceph_lv2",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_size": "21470642176",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "name": "ceph_lv2",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "tags": {
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.cluster_name": "ceph",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.crush_device_class": "",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.encrypted": "0",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osd_id": "2",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.type": "block",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:                "ceph.vdo": "0"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            },
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "type": "block",
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:            "vg_name": "ceph_vg2"
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:        }
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]:    ]
Oct 11 05:31:31 np0005481065 gifted_cerf[410584]: }
Oct 11 05:31:31 np0005481065 systemd[1]: libpod-e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4.scope: Deactivated successfully.
Oct 11 05:31:31 np0005481065 podman[410567]: 2025-10-11 09:31:31.910103412 +0000 UTC m=+1.214581798 container died e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:31:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4-merged.mount: Deactivated successfully.
Oct 11 05:31:32 np0005481065 podman[410567]: 2025-10-11 09:31:32.605867946 +0000 UTC m=+1.910346362 container remove e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:31:32 np0005481065 systemd[1]: libpod-conmon-e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4.scope: Deactivated successfully.
Oct 11 05:31:32 np0005481065 podman[410594]: 2025-10-11 09:31:32.749267383 +0000 UTC m=+0.809648299 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:31:32 np0005481065 podman[410598]: 2025-10-11 09:31:32.803440062 +0000 UTC m=+0.862671326 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:31:33 np0005481065 podman[410790]: 2025-10-11 09:31:33.519742276 +0000 UTC m=+0.098767768 container create aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:31:33 np0005481065 podman[410790]: 2025-10-11 09:31:33.442445195 +0000 UTC m=+0.021470667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.629 2 DEBUG nova.compute.manager [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.630 2 DEBUG nova.compute.manager [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing instance network info cache due to event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.630 2 DEBUG oslo_concurrency.lockutils [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.631 2 DEBUG oslo_concurrency.lockutils [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.631 2 DEBUG nova.network.neutron [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:31:33 np0005481065 nova_compute[260935]: 2025-10-11 09:31:33.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:33 np0005481065 systemd[1]: Started libpod-conmon-aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4.scope.
Oct 11 05:31:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:31:33 np0005481065 podman[410790]: 2025-10-11 09:31:33.921175955 +0000 UTC m=+0.500201477 container init aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 05:31:33 np0005481065 podman[410790]: 2025-10-11 09:31:33.935361286 +0000 UTC m=+0.514386778 container start aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:31:33 np0005481065 hopeful_dhawan[410806]: 167 167
Oct 11 05:31:33 np0005481065 systemd[1]: libpod-aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4.scope: Deactivated successfully.
Oct 11 05:31:34 np0005481065 podman[410790]: 2025-10-11 09:31:34.004711493 +0000 UTC m=+0.583737025 container attach aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:31:34 np0005481065 podman[410790]: 2025-10-11 09:31:34.006141793 +0000 UTC m=+0.585167275 container died aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:31:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c719f056635ced1df7e7a3aaae097783ade0368d5e14695227ffa12492c265b2-merged.mount: Deactivated successfully.
Oct 11 05:31:34 np0005481065 podman[410790]: 2025-10-11 09:31:34.159228263 +0000 UTC m=+0.738253755 container remove aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 05:31:34 np0005481065 systemd[1]: libpod-conmon-aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4.scope: Deactivated successfully.
Oct 11 05:31:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:34 np0005481065 podman[410832]: 2025-10-11 09:31:34.522459104 +0000 UTC m=+0.107958508 container create 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:31:34 np0005481065 podman[410832]: 2025-10-11 09:31:34.443568078 +0000 UTC m=+0.029067512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:31:34 np0005481065 systemd[1]: Started libpod-conmon-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope.
Oct 11 05:31:34 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:31:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:31:34 np0005481065 podman[410832]: 2025-10-11 09:31:34.897261431 +0000 UTC m=+0.482760885 container init 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:31:34 np0005481065 podman[410832]: 2025-10-11 09:31:34.905223326 +0000 UTC m=+0.490722730 container start 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:31:35 np0005481065 podman[410832]: 2025-10-11 09:31:35.013681987 +0000 UTC m=+0.599181401 container attach 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:31:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]: {
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "osd_id": 2,
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "type": "bluestore"
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:    },
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "osd_id": 0,
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "type": "bluestore"
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:    },
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "osd_id": 1,
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:        "type": "bluestore"
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]:    }
Oct 11 05:31:35 np0005481065 zen_wilbur[410850]: }
Oct 11 05:31:35 np0005481065 systemd[1]: libpod-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope: Deactivated successfully.
Oct 11 05:31:35 np0005481065 systemd[1]: libpod-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope: Consumed 1.077s CPU time.
Oct 11 05:31:36 np0005481065 podman[410883]: 2025-10-11 09:31:36.052948135 +0000 UTC m=+0.044370853 container died 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:31:36 np0005481065 nova_compute[260935]: 2025-10-11 09:31:36.191 2 DEBUG nova.network.neutron [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updated VIF entry in instance network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:31:36 np0005481065 nova_compute[260935]: 2025-10-11 09:31:36.192 2 DEBUG nova.network.neutron [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:36 np0005481065 nova_compute[260935]: 2025-10-11 09:31:36.235 2 DEBUG oslo_concurrency.lockutils [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:31:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e-merged.mount: Deactivated successfully.
Oct 11 05:31:37 np0005481065 podman[410883]: 2025-10-11 09:31:37.000714272 +0000 UTC m=+0.992136990 container remove 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:31:37 np0005481065 systemd[1]: libpod-conmon-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope: Deactivated successfully.
Oct 11 05:31:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:31:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:31:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:31:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:31:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:31:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3556e753-0b34-465a-86f8-e0a0d7ac1b36 does not exist
Oct 11 05:31:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev dd61f4b1-7c83-46b3-9bdd-015097ff0e7c does not exist
Oct 11 05:31:38 np0005481065 nova_compute[260935]: 2025-10-11 09:31:38.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:31:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:31:38 np0005481065 nova_compute[260935]: 2025-10-11 09:31:38.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 457 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 534 KiB/s wr, 89 op/s
Oct 11 05:31:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:40Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:79:c8 10.100.0.6
Oct 11 05:31:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:40Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:79:c8 10.100.0.6
Oct 11 05:31:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 457 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 522 KiB/s wr, 15 op/s
Oct 11 05:31:41 np0005481065 nova_compute[260935]: 2025-10-11 09:31:41.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:41 np0005481065 nova_compute[260935]: 2025-10-11 09:31:41.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:42 np0005481065 nova_compute[260935]: 2025-10-11 09:31:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:42 np0005481065 nova_compute[260935]: 2025-10-11 09:31:42.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:31:43 np0005481065 nova_compute[260935]: 2025-10-11 09:31:43.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:31:43 np0005481065 nova_compute[260935]: 2025-10-11 09:31:43.497 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:31:43 np0005481065 nova_compute[260935]: 2025-10-11 09:31:43.497 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:31:43 np0005481065 nova_compute[260935]: 2025-10-11 09:31:43.498 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:31:43 np0005481065 nova_compute[260935]: 2025-10-11 09:31:43.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.445358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104445404, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 486, "num_deletes": 257, "total_data_size": 412121, "memory_usage": 421544, "flush_reason": "Manual Compaction"}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104451229, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 408479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56399, "largest_seqno": 56884, "table_properties": {"data_size": 405733, "index_size": 781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6619, "raw_average_key_size": 18, "raw_value_size": 400109, "raw_average_value_size": 1120, "num_data_blocks": 34, "num_entries": 357, "num_filter_entries": 357, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175080, "oldest_key_time": 1760175080, "file_creation_time": 1760175104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 5921 microseconds, and 3073 cpu microseconds.
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.451281) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 408479 bytes OK
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.451305) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453114) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453128) EVENT_LOG_v1 {"time_micros": 1760175104453123, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453156) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 409206, prev total WAL file size 409206, number of live WAL files 2.
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453782) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323539' seq:72057594037927935, type:22 .. '6C6F676D0032353132' seq:0, type:0; will stop at (end)
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(398KB)], [131(9603KB)]
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104453935, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10242346, "oldest_snapshot_seqno": -1}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7495 keys, 10127045 bytes, temperature: kUnknown
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104560461, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10127045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10077477, "index_size": 29743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 196500, "raw_average_key_size": 26, "raw_value_size": 9943999, "raw_average_value_size": 1326, "num_data_blocks": 1158, "num_entries": 7495, "num_filter_entries": 7495, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.560788) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10127045 bytes
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.562312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.1 rd, 95.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(49.9) write-amplify(24.8) OK, records in: 8021, records dropped: 526 output_compression: NoCompression
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.562340) EVENT_LOG_v1 {"time_micros": 1760175104562327, "job": 80, "event": "compaction_finished", "compaction_time_micros": 106612, "compaction_time_cpu_micros": 50027, "output_level": 6, "num_output_files": 1, "total_output_size": 10127045, "num_input_records": 8021, "num_output_records": 7495, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104562618, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104565633, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:44 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:31:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:31:45 np0005481065 nova_compute[260935]: 2025-10-11 09:31:45.497 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:45 np0005481065 nova_compute[260935]: 2025-10-11 09:31:45.539 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:31:45 np0005481065 nova_compute[260935]: 2025-10-11 09:31:45.540 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:31:45 np0005481065 nova_compute[260935]: 2025-10-11 09:31:45.541 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:45 np0005481065 nova_compute[260935]: 2025-10-11 09:31:45.541 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:31:46 np0005481065 nova_compute[260935]: 2025-10-11 09:31:46.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:31:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:31:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877184816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.252 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.376 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.397 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.398 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.661 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.662 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2432MB free_disk=59.739715576171875GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.662 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.662 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 811aca81-b712-4b96-a66c-8108b7791b3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 30069bfa-2edc-4f4c-b685-233b65a11de1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.758 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.758 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:31:47 np0005481065 nova_compute[260935]: 2025-10-11 09:31:47.877 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:31:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368239932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.341 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.348 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.368 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.395 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.395 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:48 np0005481065 nova_compute[260935]: 2025-10-11 09:31:48.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:31:49 np0005481065 nova_compute[260935]: 2025-10-11 09:31:49.391 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:49 np0005481065 nova_compute[260935]: 2025-10-11 09:31:49.421 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:31:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Oct 11 05:31:51 np0005481065 nova_compute[260935]: 2025-10-11 09:31:51.910 2 DEBUG nova.compute.manager [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:51 np0005481065 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG nova.compute.manager [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing instance network info cache due to event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:31:51 np0005481065 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG oslo_concurrency.lockutils [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:31:51 np0005481065 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG oslo_concurrency.lockutils [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:31:51 np0005481065 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG nova.network.neutron [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.012 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.013 2 INFO nova.compute.manager [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Terminating instance#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.014 2 DEBUG nova.compute.manager [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:31:52 np0005481065 kernel: tapfaf6491c-2f (unregistering): left promiscuous mode
Oct 11 05:31:52 np0005481065 NetworkManager[44960]: <info>  [1760175112.0745] device (tapfaf6491c-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:31:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:52Z|01521|binding|INFO|Releasing lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 from this chassis (sb_readonly=0)
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:52Z|01522|binding|INFO|Setting lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 down in Southbound
Oct 11 05:31:52 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:52Z|01523|binding|INFO|Removing iface tapfaf6491c-2f ovn-installed in OVS
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.103 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], port_security=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe73:79c8/64 2001:db8::f816:3eff:fe73:79c8/64', 'neutron:device_id': '30069bfa-2edc-4f4c-b685-233b65a11de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=faf6491c-2fe9-4559-bb38-e0b4068de1b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.108 162815 INFO neutron.agent.ovn.metadata.agent [-] Port faf6491c-2fe9-4559-bb38-e0b4068de1b5 in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 unbound from our chassis#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.112 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 11 05:31:52 np0005481065 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000088.scope: Consumed 13.907s CPU time.
Oct 11 05:31:52 np0005481065 systemd-machined[215705]: Machine qemu-160-instance-00000088 terminated.
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.140 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5b6b31-2835-42a3-8c18-5704e04c9d84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.173 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[848b0093-d480-4573-8dc0-1ad370d8dc53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.177 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6f312a59-ad6a-4999-b481-a133c88bdeb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:52 np0005481065 podman[410998]: 2025-10-11 09:31:52.180205003 +0000 UTC m=+0.067753863 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.212 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6fa40e-8fae-43f2-9007-09d96c755ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb16f595-f7f5-43f2-a894-6c2b1795cef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411024, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.253 2 INFO nova.virt.libvirt.driver [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance destroyed successfully.#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.254 2 DEBUG nova.objects.instance [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 30069bfa-2edc-4f4c-b685-233b65a11de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78a73c-1cee-4141-9585-26537beefc44]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689607, 'tstamp': 689607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411028, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689611, 'tstamp': 689611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411028, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.264 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c83697-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8c83697-e0, col_values=(('external_ids', {'iface-id': '3d1ec619-c133-47e3-8121-0a84b73ae16e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.266 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.270 2 DEBUG nova.virt.libvirt.vif [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:31:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1320880782',display_name='tempest-TestGettingAddress-server-1320880782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1320880782',id=136,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:31:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-fmtjcp7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:31:25Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=30069bfa-2edc-4f4c-b685-233b65a11de1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.271 2 DEBUG nova.network.os_vif_util [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.271 2 DEBUG nova.network.os_vif_util [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.272 2 DEBUG os_vif [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf6491c-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.279 2 INFO os_vif [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f')#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.709 2 DEBUG nova.compute.manager [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-unplugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.710 2 DEBUG oslo_concurrency.lockutils [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.710 2 DEBUG oslo_concurrency.lockutils [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.711 2 DEBUG oslo_concurrency.lockutils [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.711 2 DEBUG nova.compute.manager [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] No waiting events found dispatching network-vif-unplugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.712 2 DEBUG nova.compute.manager [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-unplugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.749 2 INFO nova.virt.libvirt.driver [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deleting instance files /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1_del#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.750 2 INFO nova.virt.libvirt.driver [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deletion of /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1_del complete#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.815 2 INFO nova.compute.manager [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.815 2 DEBUG oslo.service.loopingcall [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.816 2 DEBUG nova.compute.manager [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:31:52 np0005481065 nova_compute[260935]: 2025-10-11 09:31:52.816 2 DEBUG nova.network.neutron [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:31:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Oct 11 05:31:53 np0005481065 nova_compute[260935]: 2025-10-11 09:31:53.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:53 np0005481065 nova_compute[260935]: 2025-10-11 09:31:53.800 2 DEBUG nova.network.neutron [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:53 np0005481065 nova_compute[260935]: 2025-10-11 09:31:53.835 2 INFO nova.compute.manager [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct 11 05:31:53 np0005481065 nova_compute[260935]: 2025-10-11 09:31:53.921 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:53 np0005481065 nova_compute[260935]: 2025-10-11 09:31:53.922 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.037 2 DEBUG nova.compute.manager [req-743cc7eb-7219-4ed2-a9cb-748eee1081c6 req-7f390076-d48a-4fe6-ab57-e979f76fa6fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-deleted-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.106 2 DEBUG oslo_concurrency.processutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.414 2 DEBUG nova.network.neutron [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updated VIF entry in instance network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.416 2 DEBUG nova.network.neutron [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.435 2 DEBUG oslo_concurrency.lockutils [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:31:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:31:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384131280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.593 2 DEBUG oslo_concurrency.processutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.602 2 DEBUG nova.compute.provider_tree [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.619 2 DEBUG nova.scheduler.client.report [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.646 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.673 2 INFO nova.scheduler.client.report [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 30069bfa-2edc-4f4c-b685-233b65a11de1#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.744 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.802 2 DEBUG nova.compute.manager [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.803 2 DEBUG oslo_concurrency.lockutils [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.803 2 DEBUG oslo_concurrency.lockutils [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.804 2 DEBUG oslo_concurrency.lockutils [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.804 2 DEBUG nova.compute.manager [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] No waiting events found dispatching network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:31:54 np0005481065 nova_compute[260935]: 2025-10-11 09:31:54.804 2 WARNING nova.compute.manager [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received unexpected event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:31:54
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root']
Oct 11 05:31:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:31:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:31:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 41K writes, 164K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 41K writes, 15K syncs, 2.75 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4946 writes, 21K keys, 4946 commit groups, 1.0 writes per commit group, ingest: 24.70 MB, 0.04 MB/s#012Interval WAL: 4946 writes, 1807 syncs, 2.74 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:31:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.193 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.194 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.194 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.195 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.195 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.197 2 INFO nova.compute.manager [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Terminating instance#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.199 2 DEBUG nova.compute.manager [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:31:56 np0005481065 kernel: tap41c97ff4-4a (unregistering): left promiscuous mode
Oct 11 05:31:56 np0005481065 NetworkManager[44960]: <info>  [1760175116.4736] device (tap41c97ff4-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:31:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:56Z|01524|binding|INFO|Releasing lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a from this chassis (sb_readonly=0)
Oct 11 05:31:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:56Z|01525|binding|INFO|Setting lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a down in Southbound
Oct 11 05:31:56 np0005481065 ovn_controller[152945]: 2025-10-11T09:31:56Z|01526|binding|INFO|Removing iface tap41c97ff4-4a ovn-installed in OVS
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.540 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], port_security=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fee6:b0e4/64 2001:db8::f816:3eff:fee6:b0e4/64', 'neutron:device_id': '811aca81-b712-4b96-a66c-8108b7791b3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=41c97ff4-4ad5-4d35-ac33-d083a904f55a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.543 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 41c97ff4-4ad5-4d35-ac33-d083a904f55a in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 unbound from our chassis#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.545 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bad010-04f0-4499-ae91-9642c42cd27c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.548 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 namespace which is not needed anymore#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:56 np0005481065 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 11 05:31:56 np0005481065 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000087.scope: Consumed 15.990s CPU time.
Oct 11 05:31:56 np0005481065 systemd-machined[215705]: Machine qemu-159-instance-00000087 terminated.
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.647 2 INFO nova.virt.libvirt.driver [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance destroyed successfully.#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.648 2 DEBUG nova.objects.instance [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 811aca81-b712-4b96-a66c-8108b7791b3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.676 2 DEBUG nova.virt.libvirt.vif [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:30:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-739529148',display_name='tempest-TestGettingAddress-server-739529148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-739529148',id=135,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:30:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-jlin4bys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:30:52Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=811aca81-b712-4b96-a66c-8108b7791b3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.677 2 DEBUG nova.network.os_vif_util [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.679 2 DEBUG nova.network.os_vif_util [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.679 2 DEBUG os_vif [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c97ff4-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.693 2 INFO os_vif [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a')#033[00m
Oct 11 05:31:56 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : haproxy version is 2.8.14-c23fe91
Oct 11 05:31:56 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : path to executable is /usr/sbin/haproxy
Oct 11 05:31:56 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [WARNING]  (409509) : Exiting Master process...
Oct 11 05:31:56 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [WARNING]  (409509) : Exiting Master process...
Oct 11 05:31:56 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [ALERT]    (409509) : Current worker (409511) exited with code 143 (Terminated)
Oct 11 05:31:56 np0005481065 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [WARNING]  (409509) : All workers exited. Exiting... (0)
Oct 11 05:31:56 np0005481065 systemd[1]: libpod-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9.scope: Deactivated successfully.
Oct 11 05:31:56 np0005481065 podman[411113]: 2025-10-11 09:31:56.75550535 +0000 UTC m=+0.083266001 container died cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:31:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9-userdata-shm.mount: Deactivated successfully.
Oct 11 05:31:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ac5db13615b0c34d4d58a71f6df0fd0ba5744113047ad9855ac3fe68117d7b25-merged.mount: Deactivated successfully.
Oct 11 05:31:56 np0005481065 podman[411113]: 2025-10-11 09:31:56.827006737 +0000 UTC m=+0.154767388 container cleanup cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:31:56 np0005481065 systemd[1]: libpod-conmon-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9.scope: Deactivated successfully.
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.877 2 DEBUG nova.compute.manager [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-unplugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.878 2 DEBUG oslo_concurrency.lockutils [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.878 2 DEBUG oslo_concurrency.lockutils [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.879 2 DEBUG oslo_concurrency.lockutils [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.879 2 DEBUG nova.compute.manager [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] No waiting events found dispatching network-vif-unplugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.880 2 DEBUG nova.compute.manager [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-unplugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.910 2 DEBUG nova.compute.manager [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.911 2 DEBUG nova.compute.manager [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing instance network info cache due to event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.912 2 DEBUG oslo_concurrency.lockutils [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.912 2 DEBUG oslo_concurrency.lockutils [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.912 2 DEBUG nova.network.neutron [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:31:56 np0005481065 podman[411163]: 2025-10-11 09:31:56.918277873 +0000 UTC m=+0.064002117 container remove cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.933 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca0e26-9d46-4b7c-b907-df5de5294f03]: (4, ('Sat Oct 11 09:31:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 (cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9)\ncef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9\nSat Oct 11 09:31:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 (cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9)\ncef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.936 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82deefca-79ce-4d76-994d-66fd6ec3cb5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.937 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:56 np0005481065 kernel: tapb8c83697-e0: left promiscuous mode
Oct 11 05:31:56 np0005481065 nova_compute[260935]: 2025-10-11 09:31:56.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.971 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef33fa3-fcd0-4379-8dfe-873ef101108c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:56 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.998 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[26742deb-8534-4a4a-977e-11adb2683c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.999 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4cdafb-b5e0-47f8-a4d1-4374e7cb52b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.020 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa322211-8df8-4b1a-af3d-6befb4bb4aec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689588, 'reachable_time': 33954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411179, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.024 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:31:57 np0005481065 systemd[1]: run-netns-ovnmeta\x2db8c83697\x2de12f\x2d4ca4\x2d8bd5\x2d07c36e7b45a8.mount: Deactivated successfully.
Oct 11 05:31:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.024 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9735fe78-a78f-4adb-8186-64f7cb6817e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:31:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.026 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:31:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.027 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.224 2 INFO nova.virt.libvirt.driver [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deleting instance files /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c_del#033[00m
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.225 2 INFO nova.virt.libvirt.driver [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deletion of /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c_del complete#033[00m
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.325 2 INFO nova.compute.manager [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.326 2 DEBUG oslo.service.loopingcall [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.327 2 DEBUG nova.compute.manager [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:31:57 np0005481065 nova_compute[260935]: 2025-10-11 09:31:57.327 2 DEBUG nova.network.neutron [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.112 2 DEBUG nova.network.neutron [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.135 2 INFO nova.compute.manager [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 0.81 seconds to deallocate network for instance.#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.189 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.190 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.297 2 DEBUG oslo_concurrency.processutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:31:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:31:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3955888215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.787 2 DEBUG oslo_concurrency.processutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.794 2 DEBUG nova.compute.provider_tree [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.815 2 DEBUG nova.scheduler.client.report [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.849 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.883 2 INFO nova.scheduler.client.report [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 811aca81-b712-4b96-a66c-8108b7791b3c#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.971 2 DEBUG nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.971 2 DEBUG oslo_concurrency.lockutils [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.972 2 DEBUG oslo_concurrency.lockutils [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.972 2 DEBUG oslo_concurrency.lockutils [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.972 2 DEBUG nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] No waiting events found dispatching network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.972 2 WARNING nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received unexpected event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.973 2 DEBUG nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-deleted-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:31:58 np0005481065 nova_compute[260935]: 2025-10-11 09:31:58.975 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:31:59 np0005481065 nova_compute[260935]: 2025-10-11 09:31:59.067 2 DEBUG nova.network.neutron [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updated VIF entry in instance network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:31:59 np0005481065 nova_compute[260935]: 2025-10-11 09:31:59.067 2 DEBUG nova.network.neutron [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:31:59 np0005481065 nova_compute[260935]: 2025-10-11 09:31:59.088 2 DEBUG oslo_concurrency.lockutils [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:31:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 25 KiB/s wr, 58 op/s
Oct 11 05:31:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:31:59 np0005481065 podman[411202]: 2025-10-11 09:31:59.805366958 +0000 UTC m=+0.089217369 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:32:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:32:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 43K writes, 162K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 43K writes, 15K syncs, 2.71 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4545 writes, 19K keys, 4545 commit groups, 1.0 writes per commit group, ingest: 21.41 MB, 0.04 MB/s#012Interval WAL: 4545 writes, 1749 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:32:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 11 05:32:01 np0005481065 nova_compute[260935]: 2025-10-11 09:32:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 11 05:32:03 np0005481065 nova_compute[260935]: 2025-10-11 09:32:03.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:03 np0005481065 podman[411223]: 2025-10-11 09:32:03.811400619 +0000 UTC m=+0.099997633 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 11 05:32:03 np0005481065 podman[411224]: 2025-10-11 09:32:03.850489312 +0000 UTC m=+0.135092523 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 11 05:32:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:32:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:32:06 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:06.029 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:06Z|01527|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:32:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:06Z|01528|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:32:06 np0005481065 nova_compute[260935]: 2025-10-11 09:32:06.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:32:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.3 total, 600.0 interval#012Cumulative writes: 33K writes, 131K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.78 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3725 writes, 14K keys, 3725 commit groups, 1.0 writes per commit group, ingest: 17.79 MB, 0.03 MB/s#012Interval WAL: 3725 writes, 1461 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:32:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:06Z|01529|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:32:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:06Z|01530|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:32:06 np0005481065 nova_compute[260935]: 2025-10-11 09:32:06.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:06 np0005481065 nova_compute[260935]: 2025-10-11 09:32:06.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 05:32:07 np0005481065 nova_compute[260935]: 2025-10-11 09:32:07.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175112.246012, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:32:07 np0005481065 nova_compute[260935]: 2025-10-11 09:32:07.248 2 INFO nova.compute.manager [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:32:07 np0005481065 nova_compute[260935]: 2025-10-11 09:32:07.303 2 DEBUG nova.compute.manager [None req-f3d1296f-b3f8-4421-914d-7e362d61e3ac - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:32:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 05:32:08 np0005481065 nova_compute[260935]: 2025-10-11 09:32:08.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 05:32:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:11 np0005481065 nova_compute[260935]: 2025-10-11 09:32:11.643 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175116.642373, 811aca81-b712-4b96-a66c-8108b7791b3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:32:11 np0005481065 nova_compute[260935]: 2025-10-11 09:32:11.644 2 INFO nova.compute.manager [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:32:11 np0005481065 nova_compute[260935]: 2025-10-11 09:32:11.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:11 np0005481065 nova_compute[260935]: 2025-10-11 09:32:11.928 2 DEBUG nova.compute.manager [None req-695f903c-32f1-4b6b-b32b-bd60e9f2cd2e - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:32:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:13 np0005481065 nova_compute[260935]: 2025-10-11 09:32:13.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:15.227 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:15.228 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:16 np0005481065 nova_compute[260935]: 2025-10-11 09:32:16.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:18 np0005481065 nova_compute[260935]: 2025-10-11 09:32:18.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:21 np0005481065 nova_compute[260935]: 2025-10-11 09:32:21.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:22 np0005481065 podman[411274]: 2025-10-11 09:32:22.78701125 +0000 UTC m=+0.085616398 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 11 05:32:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:23 np0005481065 nova_compute[260935]: 2025-10-11 09:32:23.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:24 np0005481065 nova_compute[260935]: 2025-10-11 09:32:24.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:32:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:32:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:32:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553364881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:32:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:32:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553364881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:32:26 np0005481065 nova_compute[260935]: 2025-10-11 09:32:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:28 np0005481065 nova_compute[260935]: 2025-10-11 09:32:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:30 np0005481065 podman[411294]: 2025-10-11 09:32:30.786090195 +0000 UTC m=+0.086449231 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:32:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:31 np0005481065 nova_compute[260935]: 2025-10-11 09:32:31.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:33 np0005481065 nova_compute[260935]: 2025-10-11 09:32:33.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:34 np0005481065 podman[411314]: 2025-10-11 09:32:34.800949665 +0000 UTC m=+0.096943757 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:32:34 np0005481065 podman[411315]: 2025-10-11 09:32:34.845104791 +0000 UTC m=+0.134944729 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:32:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.016 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4e:e8 10.100.0.2 2001:db8::f816:3eff:fe09:4ee8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe09:4ee8/64', 'neutron:device_id': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8bbc5aca-a0c9-493f-8847-60a74f11dbe6) old=Port_Binding(mac=['fa:16:3e:09:4e:e8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:32:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.019 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 updated#033[00m
Oct 11 05:32:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.023 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84203be9-d0b1-4633-bcd6-51523ee507a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:32:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a49569cc-d2d4-4e49-8e53-b11f3ca049a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:36 np0005481065 nova_compute[260935]: 2025-10-11 09:32:36.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:38 np0005481065 nova_compute[260935]: 2025-10-11 09:32:38.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:32:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:32:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:32:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:32:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:32:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:32:38 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3e99f6d0-bbc5-4923-a239-0e7fdca95289 does not exist
Oct 11 05:32:38 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 40ca2200-d19d-4ba5-846c-a7047bbf149e does not exist
Oct 11 05:32:38 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ab5fee12-e89c-4212-99a4-8c7a38cbb614 does not exist
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:32:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:39 np0005481065 podman[411630]: 2025-10-11 09:32:39.874093102 +0000 UTC m=+0.071868089 container create 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:32:39 np0005481065 systemd[1]: Started libpod-conmon-3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643.scope.
Oct 11 05:32:39 np0005481065 podman[411630]: 2025-10-11 09:32:39.845454224 +0000 UTC m=+0.043229291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:32:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:39 np0005481065 podman[411630]: 2025-10-11 09:32:39.979154957 +0000 UTC m=+0.176929974 container init 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:32:39 np0005481065 podman[411630]: 2025-10-11 09:32:39.991292658 +0000 UTC m=+0.189067655 container start 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:32:39 np0005481065 gracious_hermann[411647]: 167 167
Oct 11 05:32:40 np0005481065 systemd[1]: libpod-3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643.scope: Deactivated successfully.
Oct 11 05:32:40 np0005481065 podman[411630]: 2025-10-11 09:32:39.998997546 +0000 UTC m=+0.196772613 container attach 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:32:40 np0005481065 podman[411630]: 2025-10-11 09:32:39.999902251 +0000 UTC m=+0.197677288 container died 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:32:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5ce4818c9c590565f0439765da3b0361670fc31ef51e2b2aceccd483acaab9f1-merged.mount: Deactivated successfully.
Oct 11 05:32:40 np0005481065 podman[411630]: 2025-10-11 09:32:40.062134048 +0000 UTC m=+0.259909075 container remove 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:32:40 np0005481065 systemd[1]: libpod-conmon-3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643.scope: Deactivated successfully.
Oct 11 05:32:40 np0005481065 podman[411670]: 2025-10-11 09:32:40.291386357 +0000 UTC m=+0.062903996 container create 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:32:40 np0005481065 systemd[1]: Started libpod-conmon-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope.
Oct 11 05:32:40 np0005481065 podman[411670]: 2025-10-11 09:32:40.270523549 +0000 UTC m=+0.042041188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:32:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:40 np0005481065 podman[411670]: 2025-10-11 09:32:40.395195317 +0000 UTC m=+0.166713026 container init 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:32:40 np0005481065 podman[411670]: 2025-10-11 09:32:40.414208103 +0000 UTC m=+0.185725762 container start 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:32:40 np0005481065 podman[411670]: 2025-10-11 09:32:40.41904536 +0000 UTC m=+0.190563059 container attach 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:32:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:41 np0005481065 cranky_haslett[411687]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:32:41 np0005481065 cranky_haslett[411687]: --> relative data size: 1.0
Oct 11 05:32:41 np0005481065 cranky_haslett[411687]: --> All data devices are unavailable
Oct 11 05:32:41 np0005481065 systemd[1]: libpod-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope: Deactivated successfully.
Oct 11 05:32:41 np0005481065 podman[411670]: 2025-10-11 09:32:41.598783523 +0000 UTC m=+1.370301172 container died 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:32:41 np0005481065 systemd[1]: libpod-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope: Consumed 1.107s CPU time.
Oct 11 05:32:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7-merged.mount: Deactivated successfully.
Oct 11 05:32:41 np0005481065 podman[411670]: 2025-10-11 09:32:41.662481601 +0000 UTC m=+1.433999220 container remove 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:32:41 np0005481065 systemd[1]: libpod-conmon-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope: Deactivated successfully.
Oct 11 05:32:41 np0005481065 nova_compute[260935]: 2025-10-11 09:32:41.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.566182094 +0000 UTC m=+0.078433184 container create 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:32:42 np0005481065 systemd[1]: Started libpod-conmon-729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7.scope.
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.533021868 +0000 UTC m=+0.045273018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:32:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.667206695 +0000 UTC m=+0.179457815 container init 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.678313549 +0000 UTC m=+0.190564639 container start 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.68227391 +0000 UTC m=+0.194525050 container attach 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:32:42 np0005481065 bold_cohen[411885]: 167 167
Oct 11 05:32:42 np0005481065 systemd[1]: libpod-729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7.scope: Deactivated successfully.
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.685960664 +0000 UTC m=+0.198211754 container died 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:32:42 np0005481065 nova_compute[260935]: 2025-10-11 09:32:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:42 np0005481065 nova_compute[260935]: 2025-10-11 09:32:42.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:42 np0005481065 nova_compute[260935]: 2025-10-11 09:32:42.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-825862ed11853bdc7e158fa82f7bcc7e815dd22bd6f54d291bce73d78462c676-merged.mount: Deactivated successfully.
Oct 11 05:32:42 np0005481065 podman[411868]: 2025-10-11 09:32:42.743739965 +0000 UTC m=+0.255991055 container remove 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:32:42 np0005481065 systemd[1]: libpod-conmon-729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7.scope: Deactivated successfully.
Oct 11 05:32:43 np0005481065 podman[411908]: 2025-10-11 09:32:43.018039986 +0000 UTC m=+0.077047975 container create 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:32:43 np0005481065 systemd[1]: Started libpod-conmon-4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7.scope.
Oct 11 05:32:43 np0005481065 podman[411908]: 2025-10-11 09:32:42.984946642 +0000 UTC m=+0.043954701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:32:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:43 np0005481065 podman[411908]: 2025-10-11 09:32:43.128757211 +0000 UTC m=+0.187765270 container init 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:32:43 np0005481065 podman[411908]: 2025-10-11 09:32:43.143166397 +0000 UTC m=+0.202174396 container start 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:32:43 np0005481065 podman[411908]: 2025-10-11 09:32:43.148191999 +0000 UTC m=+0.207200008 container attach 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:32:43 np0005481065 nova_compute[260935]: 2025-10-11 09:32:43.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:43 np0005481065 nova_compute[260935]: 2025-10-11 09:32:43.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:43 np0005481065 nova_compute[260935]: 2025-10-11 09:32:43.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]: {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:    "0": [
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:        {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "devices": [
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "/dev/loop3"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            ],
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_name": "ceph_lv0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_size": "21470642176",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "name": "ceph_lv0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "tags": {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cluster_name": "ceph",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.crush_device_class": "",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.encrypted": "0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osd_id": "0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.type": "block",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.vdo": "0"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            },
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "type": "block",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "vg_name": "ceph_vg0"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:        }
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:    ],
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:    "1": [
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:        {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "devices": [
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "/dev/loop4"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            ],
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_name": "ceph_lv1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_size": "21470642176",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "name": "ceph_lv1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "tags": {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cluster_name": "ceph",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.crush_device_class": "",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.encrypted": "0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osd_id": "1",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.type": "block",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.vdo": "0"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            },
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "type": "block",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "vg_name": "ceph_vg1"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:        }
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:    ],
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:    "2": [
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:        {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "devices": [
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "/dev/loop5"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            ],
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_name": "ceph_lv2",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_size": "21470642176",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "name": "ceph_lv2",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "tags": {
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.cluster_name": "ceph",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.crush_device_class": "",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.encrypted": "0",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osd_id": "2",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.type": "block",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:                "ceph.vdo": "0"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            },
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "type": "block",
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:            "vg_name": "ceph_vg2"
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:        }
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]:    ]
Oct 11 05:32:43 np0005481065 amazing_zhukovsky[411925]: }
Oct 11 05:32:43 np0005481065 systemd[1]: libpod-4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7.scope: Deactivated successfully.
Oct 11 05:32:43 np0005481065 podman[411908]: 2025-10-11 09:32:43.949952624 +0000 UTC m=+1.008960633 container died 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:32:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022-merged.mount: Deactivated successfully.
Oct 11 05:32:44 np0005481065 podman[411908]: 2025-10-11 09:32:44.042361402 +0000 UTC m=+1.101369411 container remove 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:32:44 np0005481065 systemd[1]: libpod-conmon-4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7.scope: Deactivated successfully.
Oct 11 05:32:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:44 np0005481065 nova_compute[260935]: 2025-10-11 09:32:44.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:44 np0005481065 nova_compute[260935]: 2025-10-11 09:32:44.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:32:44 np0005481065 podman[412089]: 2025-10-11 09:32:44.814004659 +0000 UTC m=+0.063482933 container create 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:32:44 np0005481065 systemd[1]: Started libpod-conmon-76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f.scope.
Oct 11 05:32:44 np0005481065 podman[412089]: 2025-10-11 09:32:44.787253034 +0000 UTC m=+0.036731378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:32:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:44 np0005481065 podman[412089]: 2025-10-11 09:32:44.928740467 +0000 UTC m=+0.178218731 container init 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:32:44 np0005481065 podman[412089]: 2025-10-11 09:32:44.94125913 +0000 UTC m=+0.190737384 container start 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:32:44 np0005481065 podman[412089]: 2025-10-11 09:32:44.945534591 +0000 UTC m=+0.195012835 container attach 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:32:44 np0005481065 exciting_sanderson[412105]: 167 167
Oct 11 05:32:44 np0005481065 podman[412089]: 2025-10-11 09:32:44.949740429 +0000 UTC m=+0.199218713 container died 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:32:44 np0005481065 systemd[1]: libpod-76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f.scope: Deactivated successfully.
Oct 11 05:32:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b181b86b7689186e5b8179dec5d02332f745eb2a8d13446cdc837690be123970-merged.mount: Deactivated successfully.
Oct 11 05:32:45 np0005481065 podman[412089]: 2025-10-11 09:32:45.012928773 +0000 UTC m=+0.262407027 container remove 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:32:45 np0005481065 systemd[1]: libpod-conmon-76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f.scope: Deactivated successfully.
Oct 11 05:32:45 np0005481065 nova_compute[260935]: 2025-10-11 09:32:45.057 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:32:45 np0005481065 nova_compute[260935]: 2025-10-11 09:32:45.058 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:32:45 np0005481065 nova_compute[260935]: 2025-10-11 09:32:45.058 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:32:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:45 np0005481065 podman[412128]: 2025-10-11 09:32:45.301118846 +0000 UTC m=+0.065681035 container create 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:32:45 np0005481065 systemd[1]: Started libpod-conmon-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope.
Oct 11 05:32:45 np0005481065 podman[412128]: 2025-10-11 09:32:45.276583803 +0000 UTC m=+0.041146002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:32:45 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:45 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:45 np0005481065 podman[412128]: 2025-10-11 09:32:45.435234771 +0000 UTC m=+0.199796960 container init 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:32:45 np0005481065 podman[412128]: 2025-10-11 09:32:45.448294019 +0000 UTC m=+0.212856168 container start 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:32:45 np0005481065 podman[412128]: 2025-10-11 09:32:45.45079967 +0000 UTC m=+0.215361839 container attach 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 05:32:46 np0005481065 clever_herschel[412145]: {
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "osd_id": 2,
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "type": "bluestore"
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:    },
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "osd_id": 0,
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "type": "bluestore"
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:    },
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "osd_id": 1,
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:        "type": "bluestore"
Oct 11 05:32:46 np0005481065 clever_herschel[412145]:    }
Oct 11 05:32:46 np0005481065 clever_herschel[412145]: }
Oct 11 05:32:46 np0005481065 systemd[1]: libpod-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope: Deactivated successfully.
Oct 11 05:32:46 np0005481065 systemd[1]: libpod-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope: Consumed 1.104s CPU time.
Oct 11 05:32:46 np0005481065 podman[412178]: 2025-10-11 09:32:46.586347256 +0000 UTC m=+0.027826916 container died 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:32:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73-merged.mount: Deactivated successfully.
Oct 11 05:32:46 np0005481065 podman[412178]: 2025-10-11 09:32:46.645073673 +0000 UTC m=+0.086553293 container remove 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:32:46 np0005481065 systemd[1]: libpod-conmon-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope: Deactivated successfully.
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.698 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.699 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:32:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:32:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.717 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:32:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:32:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 3dfdb9ff-8081-48cb-87e2-3a3651a9a43f does not exist
Oct 11 05:32:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 845fd075-adb9-4533-b33f-02e95c0a82ee does not exist
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.819 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.820 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.830 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.831 2 INFO nova.compute.claims [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.937 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.953 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.954 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.954 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:46 np0005481065 nova_compute[260935]: 2025-10-11 09:32:46.954 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.004 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:32:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002942164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.577 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.587 2 DEBUG nova.compute.provider_tree [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.608 2 DEBUG nova.scheduler.client.report [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.641 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.643 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.701 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.702 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.707 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:47 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:32:47 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.729 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.741 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.799 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.913 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.915 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.916 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Creating image(s)#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.948 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:32:47 np0005481065 nova_compute[260935]: 2025-10-11 09:32:47.974 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.001 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.005 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.047 2 DEBUG nova.policy [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.093 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.094 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.094 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.094 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.118 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.121 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:48 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:32:48 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4225269718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.273 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.385 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.392 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.396 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.396 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.443 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.512 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.823 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.824 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2793MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.824 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.825 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.908 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.908 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.918 2 DEBUG nova.objects.instance [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.935 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.936 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Ensure instance console log exists: /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.936 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.937 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.937 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:48 np0005481065 nova_compute[260935]: 2025-10-11 09:32:48.997 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:49 np0005481065 nova_compute[260935]: 2025-10-11 09:32:49.344 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Successfully created port: fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:32:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:32:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230948506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:32:49 np0005481065 nova_compute[260935]: 2025-10-11 09:32:49.446 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:49 np0005481065 nova_compute[260935]: 2025-10-11 09:32:49.456 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:32:49 np0005481065 nova_compute[260935]: 2025-10-11 09:32:49.481 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:32:49 np0005481065 nova_compute[260935]: 2025-10-11 09:32:49.516 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:32:49 np0005481065 nova_compute[260935]: 2025-10-11 09:32:49.517 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.093 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Successfully updated port: fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.244 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.244 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.245 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.321 2 DEBUG nova.compute.manager [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.322 2 DEBUG nova.compute.manager [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing instance network info cache due to event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.322 2 DEBUG oslo_concurrency.lockutils [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.514 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:32:50 np0005481065 nova_compute[260935]: 2025-10-11 09:32:50.518 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:32:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:32:51 np0005481065 nova_compute[260935]: 2025-10-11 09:32:51.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:32:53 np0005481065 podman[412478]: 2025-10-11 09:32:53.79646788 +0000 UTC m=+0.088992392 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 05:32:53 np0005481065 nova_compute[260935]: 2025-10-11 09:32:53.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.594 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.616 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.616 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance network_info: |[{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.617 2 DEBUG oslo_concurrency.lockutils [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.617 2 DEBUG nova.network.neutron [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.623 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start _get_guest_xml network_info=[{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.630 2 WARNING nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.636 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.636 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.642 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.643 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.643 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.643 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.644 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.644 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.644 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.646 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.646 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:32:54 np0005481065 nova_compute[260935]: 2025-10-11 09:32:54.650 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:32:54
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 11 05:32:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:32:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:32:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764879442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.133 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.169 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.180 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:32:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:32:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:32:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525487199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.656 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.660 2 DEBUG nova.virt.libvirt.vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:32:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1457402262',display_name='tempest-TestGettingAddress-server-1457402262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1457402262',id=137,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-axv24jbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:32:47Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=e7751d7b-3ef7-4b84-a579-89d4b72f6cf4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.661 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.663 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.665 2 DEBUG nova.objects.instance [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.687 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <uuid>e7751d7b-3ef7-4b84-a579-89d4b72f6cf4</uuid>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <name>instance-00000089</name>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1457402262</nova:name>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:32:54</nova:creationTime>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <nova:port uuid="fca94e7c-1f55-4f5d-a268-7a4f0b86bae0">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9b:effb" ipVersion="6"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <entry name="serial">e7751d7b-3ef7-4b84-a579-89d4b72f6cf4</entry>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <entry name="uuid">e7751d7b-3ef7-4b84-a579-89d4b72f6cf4</entry>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:9b:ef:fb"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <target dev="tapfca94e7c-1f"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/console.log" append="off"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:32:55 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:32:55 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:32:55 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:32:55 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.689 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Preparing to wait for external event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.689 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.690 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.690 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.691 2 DEBUG nova.virt.libvirt.vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:32:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1457402262',display_name='tempest-TestGettingAddress-server-1457402262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1457402262',id=137,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-axv24jbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:32:47Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=e7751d7b-3ef7-4b84-a579-89d4b72f6cf4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.692 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.693 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.693 2 DEBUG os_vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfca94e7c-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfca94e7c-1f, col_values=(('external_ids', {'iface-id': 'fca94e7c-1f55-4f5d-a268-7a4f0b86bae0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:ef:fb', 'vm-uuid': 'e7751d7b-3ef7-4b84-a579-89d4b72f6cf4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:55 np0005481065 NetworkManager[44960]: <info>  [1760175175.7069] manager: (tapfca94e7c-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.718 2 INFO os_vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f')#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.830 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.830 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.831 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:9b:ef:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.832 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Using config drive#033[00m
Oct 11 05:32:55 np0005481065 nova_compute[260935]: 2025-10-11 09:32:55.874 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.483 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Creating config drive at /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.492 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xayg7dd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.663 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xayg7dd" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.703 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.709 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.767 2 DEBUG nova.network.neutron [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updated VIF entry in instance network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.768 2 DEBUG nova.network.neutron [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.787 2 DEBUG oslo_concurrency.lockutils [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.942 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:32:56 np0005481065 nova_compute[260935]: 2025-10-11 09:32:56.943 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deleting local config drive /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config because it was imported into RBD.#033[00m
Oct 11 05:32:57 np0005481065 kernel: tapfca94e7c-1f: entered promiscuous mode
Oct 11 05:32:57 np0005481065 NetworkManager[44960]: <info>  [1760175177.0300] manager: (tapfca94e7c-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/589)
Oct 11 05:32:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:57Z|01531|binding|INFO|Claiming lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for this chassis.
Oct 11 05:32:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:57Z|01532|binding|INFO|fca94e7c-1f55-4f5d-a268-7a4f0b86bae0: Claiming fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 systemd-udevd[412636]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:32:57 np0005481065 systemd-machined[215705]: New machine qemu-161-instance-00000089.
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.091 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], port_security=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe9b:effb/64', 'neutron:device_id': 'e7751d7b-3ef7-4b84-a579-89d4b72f6cf4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.093 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 bound to our chassis#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.095 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84203be9-d0b1-4633-bcd6-51523ee507a5#033[00m
Oct 11 05:32:57 np0005481065 NetworkManager[44960]: <info>  [1760175177.1131] device (tapfca94e7c-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:32:57 np0005481065 NetworkManager[44960]: <info>  [1760175177.1147] device (tapfca94e7c-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:32:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef939037-8c37-49ad-ba7a-316f112d07de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.116 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84203be9-d1 in ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.119 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84203be9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.119 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eab8c58b-e053-4f93-b65c-25309210068d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 systemd[1]: Started Virtual Machine qemu-161-instance-00000089.
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd50968-c87c-4b4b-857f-6a45e2ae74c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:57Z|01533|binding|INFO|Setting lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 ovn-installed in OVS
Oct 11 05:32:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:57Z|01534|binding|INFO|Setting lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 up in Southbound
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.137 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bb736d9d-b5b1-49c7-8c6e-07878547cfef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.168 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a034a43-dbae-426f-880c-61ff026f289b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.216 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[83858f03-84ea-4f97-8ce2-e8b5a867bd72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 systemd-udevd[412639]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:32:57 np0005481065 NetworkManager[44960]: <info>  [1760175177.2274] manager: (tap84203be9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/590)
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[57d22cd9-40c2-47ff-afc4-d019c9fb034f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.267 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7022cc63-8e81-4d85-99ef-3d764c8dba82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.271 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18c5a624-e93a-407d-bac1-0652a64f9212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 NetworkManager[44960]: <info>  [1760175177.3058] device (tap84203be9-d0): carrier: link connected
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.316 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ebc3b83-fb38-4a96-9f15-eeac288ab27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.344 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afebfc6c-85d2-4db1-b142-dfe95682415d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 38542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412669, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62a3aaa0-9eff-41c6-b43b-79a870a15948]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:4ee8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702200, 'tstamp': 702200}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412670, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.397 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bcac3f-70d8-414e-a28f-857a59f094bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 38542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412671, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.438 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1cb462-e326-4fa1-9d66-9d8cf189b8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a2ddf3-a6ed-4b7a-8dda-7f765cb22a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.537 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.537 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.538 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84203be9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:57 np0005481065 NetworkManager[44960]: <info>  [1760175177.5419] manager: (tap84203be9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Oct 11 05:32:57 np0005481065 kernel: tap84203be9-d0: entered promiscuous mode
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.545 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84203be9-d0, col_values=(('external_ids', {'iface-id': '8bbc5aca-a0c9-493f-8847-60a74f11dbe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 ovn_controller[152945]: 2025-10-11T09:32:57Z|01535|binding|INFO|Releasing lport 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 from this chassis (sb_readonly=0)
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.583 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84203be9-d0b1-4633-bcd6-51523ee507a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84203be9-d0b1-4633-bcd6-51523ee507a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.585 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05f1a0d1-daf7-416d-a1c3-f2b4d9b2e03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.586 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/84203be9-d0b1-4633-bcd6-51523ee507a5.pid.haproxy
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:32:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.590 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'env', 'PROCESS_TAG=haproxy-84203be9-d0b1-4633-bcd6-51523ee507a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84203be9-d0b1-4633-bcd6-51523ee507a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.807 2 DEBUG nova.compute.manager [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.808 2 DEBUG oslo_concurrency.lockutils [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.808 2 DEBUG oslo_concurrency.lockutils [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.809 2 DEBUG oslo_concurrency.lockutils [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:57 np0005481065 nova_compute[260935]: 2025-10-11 09:32:57.811 2 DEBUG nova.compute.manager [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Processing event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:32:58 np0005481065 podman[412745]: 2025-10-11 09:32:58.033960985 +0000 UTC m=+0.051473624 container create 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:32:58 np0005481065 systemd[1]: Started libpod-conmon-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793.scope.
Oct 11 05:32:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:32:58 np0005481065 podman[412745]: 2025-10-11 09:32:58.009421042 +0000 UTC m=+0.026933731 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:32:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b192159424a33d91dd7687b52effbbc7201b23f7d99b918f0ad704b8e1f4e2c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:32:58 np0005481065 podman[412745]: 2025-10-11 09:32:58.122061461 +0000 UTC m=+0.139574150 container init 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 05:32:58 np0005481065 podman[412745]: 2025-10-11 09:32:58.130839609 +0000 UTC m=+0.148352258 container start 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:32:58 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : New worker (412766) forked
Oct 11 05:32:58 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : Loading success.
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.369 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175178.368896, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.369 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Started (Lifecycle Event)#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.371 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.375 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.379 2 INFO nova.virt.libvirt.driver [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance spawned successfully.#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.380 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.400 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.407 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.408 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.408 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.408 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.409 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.409 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.412 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.461 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.462 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175178.3720112, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.462 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.491 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.496 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175178.3741326, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.496 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.508 2 INFO nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 10.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.508 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.517 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.520 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.546 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.577 2 INFO nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 11.80 seconds to build instance.#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.596 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:58 np0005481065 nova_compute[260935]: 2025-10-11 09:32:58.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:32:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 05:32:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:32:59 np0005481065 nova_compute[260935]: 2025-10-11 09:32:59.926 2 DEBUG nova.compute.manager [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:32:59 np0005481065 nova_compute[260935]: 2025-10-11 09:32:59.927 2 DEBUG oslo_concurrency.lockutils [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:32:59 np0005481065 nova_compute[260935]: 2025-10-11 09:32:59.928 2 DEBUG oslo_concurrency.lockutils [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:32:59 np0005481065 nova_compute[260935]: 2025-10-11 09:32:59.929 2 DEBUG oslo_concurrency.lockutils [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:32:59 np0005481065 nova_compute[260935]: 2025-10-11 09:32:59.929 2 DEBUG nova.compute.manager [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] No waiting events found dispatching network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:32:59 np0005481065 nova_compute[260935]: 2025-10-11 09:32:59.930 2 WARNING nova.compute.manager [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received unexpected event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:33:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:00.524 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:33:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:00.525 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:33:00 np0005481065 nova_compute[260935]: 2025-10-11 09:33:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:00 np0005481065 nova_compute[260935]: 2025-10-11 09:33:00.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:01 np0005481065 NetworkManager[44960]: <info>  [1760175181.0871] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Oct 11 05:33:01 np0005481065 NetworkManager[44960]: <info>  [1760175181.0882] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Oct 11 05:33:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:01Z|01536|binding|INFO|Releasing lport 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 from this chassis (sb_readonly=0)
Oct 11 05:33:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:01Z|01537|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:33:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:01Z|01538|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:33:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:01Z|01539|binding|INFO|Releasing lport 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 from this chassis (sb_readonly=0)
Oct 11 05:33:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:01Z|01540|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:33:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:01Z|01541|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:01 np0005481065 podman[412776]: 2025-10-11 09:33:01.349284016 +0000 UTC m=+0.096108573 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.431 2 DEBUG nova.compute.manager [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.431 2 DEBUG nova.compute.manager [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing instance network info cache due to event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.431 2 DEBUG oslo_concurrency.lockutils [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.432 2 DEBUG oslo_concurrency.lockutils [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:33:01 np0005481065 nova_compute[260935]: 2025-10-11 09:33:01.432 2 DEBUG nova.network.neutron [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:33:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:01.528 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:33:03 np0005481065 nova_compute[260935]: 2025-10-11 09:33:03.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:04 np0005481065 nova_compute[260935]: 2025-10-11 09:33:04.572 2 DEBUG nova.network.neutron [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updated VIF entry in instance network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:33:04 np0005481065 nova_compute[260935]: 2025-10-11 09:33:04.572 2 DEBUG nova.network.neutron [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:33:04 np0005481065 nova_compute[260935]: 2025-10-11 09:33:04.700 2 DEBUG oslo_concurrency.lockutils [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:33:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:33:05 np0005481065 nova_compute[260935]: 2025-10-11 09:33:05.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:05 np0005481065 podman[412796]: 2025-10-11 09:33:05.805299317 +0000 UTC m=+0.109793249 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 05:33:05 np0005481065 podman[412797]: 2025-10-11 09:33:05.807403916 +0000 UTC m=+0.107669639 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 05:33:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:33:08 np0005481065 nova_compute[260935]: 2025-10-11 09:33:08.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:33:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:10 np0005481065 nova_compute[260935]: 2025-10-11 09:33:10.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:10Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:ef:fb 10.100.0.12
Oct 11 05:33:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:10Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:ef:fb 10.100.0.12
Oct 11 05:33:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 05:33:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 11 05:33:13 np0005481065 nova_compute[260935]: 2025-10-11 09:33:13.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:33:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:15.228 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:15.229 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:15.230 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:15 np0005481065 nova_compute[260935]: 2025-10-11 09:33:15.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:33:18 np0005481065 nova_compute[260935]: 2025-10-11 09:33:18.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:33:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:20 np0005481065 nova_compute[260935]: 2025-10-11 09:33:20.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:33:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:33:23 np0005481065 nova_compute[260935]: 2025-10-11 09:33:23.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:24 np0005481065 nova_compute[260935]: 2025-10-11 09:33:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:24 np0005481065 podman[412840]: 2025-10-11 09:33:24.793197228 +0000 UTC m=+0.084244889 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:33:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:33:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 05:33:25 np0005481065 nova_compute[260935]: 2025-10-11 09:33:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:33:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3359709646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:33:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:33:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3359709646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:33:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 05:33:28 np0005481065 nova_compute[260935]: 2025-10-11 09:33:28.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 11 05:33:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:30 np0005481065 nova_compute[260935]: 2025-10-11 09:33:30.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:31 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:31Z|01542|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory
Oct 11 05:33:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s wr, 0 op/s
Oct 11 05:33:31 np0005481065 podman[412862]: 2025-10-11 09:33:31.808408693 +0000 UTC m=+0.092184833 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct 11 05:33:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Oct 11 05:33:33 np0005481065 nova_compute[260935]: 2025-10-11 09:33:33.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 05:33:35 np0005481065 nova_compute[260935]: 2025-10-11 09:33:35.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:36 np0005481065 podman[412882]: 2025-10-11 09:33:36.788718278 +0000 UTC m=+0.089274171 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:33:36 np0005481065 podman[412883]: 2025-10-11 09:33:36.853799014 +0000 UTC m=+0.151794474 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.013 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.013 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.163 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.501 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.502 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.510 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.511 2 INFO nova.compute.claims [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:33:37 np0005481065 nova_compute[260935]: 2025-10-11 09:33:37.852 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:33:38 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286754567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:33:38 np0005481065 nova_compute[260935]: 2025-10-11 09:33:38.376 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:38 np0005481065 nova_compute[260935]: 2025-10-11 09:33:38.384 2 DEBUG nova.compute.provider_tree [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:33:38 np0005481065 nova_compute[260935]: 2025-10-11 09:33:38.441 2 DEBUG nova.scheduler.client.report [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:33:38 np0005481065 nova_compute[260935]: 2025-10-11 09:33:38.557 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:38 np0005481065 nova_compute[260935]: 2025-10-11 09:33:38.558 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:33:38 np0005481065 nova_compute[260935]: 2025-10-11 09:33:38.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.013 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.014 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:33:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.184 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.265 2 DEBUG nova.policy [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.445 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:33:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.872 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.873 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.873 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Creating image(s)#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.896 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.917 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.940 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:39 np0005481065 nova_compute[260935]: 2025-10-11 09:33:39.944 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.046 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.048 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.048 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.049 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.083 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.087 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.429 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.499 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.599 2 DEBUG nova.objects.instance [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 5caa1430-1bd3-4b60-83f0-908b98b891cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.626 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.627 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Ensure instance console log exists: /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.628 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.629 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.629 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.764 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Successfully created port: ac79972f-498e-46be-b858-6c67d0f29f93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:33:40 np0005481065 nova_compute[260935]: 2025-10-11 09:33:40.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 05:33:41 np0005481065 nova_compute[260935]: 2025-10-11 09:33:41.961 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Successfully updated port: ac79972f-498e-46be-b858-6c67d0f29f93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.007 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.007 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.007 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.152 2 DEBUG nova.compute.manager [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.152 2 DEBUG nova.compute.manager [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing instance network info cache due to event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.152 2 DEBUG oslo_concurrency.lockutils [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:33:42 np0005481065 nova_compute[260935]: 2025-10-11 09:33:42.535 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:33:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.715 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.920 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.920 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance network_info: |[{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.921 2 DEBUG oslo_concurrency.lockutils [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.921 2 DEBUG nova.network.neutron [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.926 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start _get_guest_xml network_info=[{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.932 2 WARNING nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.941 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.942 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.949 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.950 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.951 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.951 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.952 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.952 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.953 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.953 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.953 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.954 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.954 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.955 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.955 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.956 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:33:43 np0005481065 nova_compute[260935]: 2025-10-11 09:33:43.961 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:33:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667727775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.402 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.427 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.432 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:33:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144209284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.861 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.863 2 DEBUG nova.virt.libvirt.vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:33:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1406255321',display_name='tempest-TestGettingAddress-server-1406255321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1406255321',id=138,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-p3ajjp08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:33:39Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=5caa1430-1bd3-4b60-83f0-908b98b891cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.863 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.864 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.866 2 DEBUG nova.objects.instance [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 5caa1430-1bd3-4b60-83f0-908b98b891cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.901 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <uuid>5caa1430-1bd3-4b60-83f0-908b98b891cf</uuid>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <name>instance-0000008a</name>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestGettingAddress-server-1406255321</nova:name>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:33:43</nova:creationTime>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <nova:port uuid="ac79972f-498e-46be-b858-6c67d0f29f93">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0d:1b5f" ipVersion="6"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <entry name="serial">5caa1430-1bd3-4b60-83f0-908b98b891cf</entry>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <entry name="uuid">5caa1430-1bd3-4b60-83f0-908b98b891cf</entry>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5caa1430-1bd3-4b60-83f0-908b98b891cf_disk">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:0d:1b:5f"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <target dev="tapac79972f-49"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/console.log" append="off"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:33:44 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:33:44 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:33:44 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:33:44 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.902 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Preparing to wait for external event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.902 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.903 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.903 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.904 2 DEBUG nova.virt.libvirt.vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:33:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1406255321',display_name='tempest-TestGettingAddress-server-1406255321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1406255321',id=138,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-p3ajjp08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:33:39Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=5caa1430-1bd3-4b60-83f0-908b98b891cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.905 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.906 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.906 2 DEBUG os_vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac79972f-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac79972f-49, col_values=(('external_ids', {'iface-id': 'ac79972f-498e-46be-b858-6c67d0f29f93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:1b:5f', 'vm-uuid': '5caa1430-1bd3-4b60-83f0-908b98b891cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:44 np0005481065 NetworkManager[44960]: <info>  [1760175224.9585] manager: (tapac79972f-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:44 np0005481065 nova_compute[260935]: 2025-10-11 09:33:44.969 2 INFO os_vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49')#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.134 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.135 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.136 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:0d:1b:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.137 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Using config drive#033[00m
Oct 11 05:33:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.170 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:33:45 np0005481065 nova_compute[260935]: 2025-10-11 09:33:45.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.277 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Creating config drive at /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.283 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8um0t0xa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.316 2 DEBUG nova.network.neutron [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updated VIF entry in instance network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.317 2 DEBUG nova.network.neutron [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.395 2 DEBUG oslo_concurrency.lockutils [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.422 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8um0t0xa" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.450 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.454 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.646 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.648 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deleting local config drive /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config because it was imported into RBD.#033[00m
Oct 11 05:33:46 np0005481065 kernel: tapac79972f-49: entered promiscuous mode
Oct 11 05:33:46 np0005481065 NetworkManager[44960]: <info>  [1760175226.7079] manager: (tapac79972f-49): new Tun device (/org/freedesktop/NetworkManager/Devices/595)
Oct 11 05:33:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:46Z|01543|binding|INFO|Claiming lport ac79972f-498e-46be-b858-6c67d0f29f93 for this chassis.
Oct 11 05:33:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:46Z|01544|binding|INFO|ac79972f-498e-46be-b858-6c67d0f29f93: Claiming fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:46Z|01545|binding|INFO|Setting lport ac79972f-498e-46be-b858-6c67d0f29f93 ovn-installed in OVS
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:33:46Z|01546|binding|INFO|Setting lport ac79972f-498e-46be-b858-6c67d0f29f93 up in Southbound
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.747 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], port_security=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe0d:1b5f/64', 'neutron:device_id': '5caa1430-1bd3-4b60-83f0-908b98b891cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac79972f-498e-46be-b858-6c67d0f29f93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.748 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac79972f-498e-46be-b858-6c67d0f29f93 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 bound to our chassis#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.751 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84203be9-d0b1-4633-bcd6-51523ee507a5#033[00m
Oct 11 05:33:46 np0005481065 systemd-machined[215705]: New machine qemu-162-instance-0000008a.
Oct 11 05:33:46 np0005481065 systemd-udevd[413255]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:33:46 np0005481065 systemd[1]: Started Virtual Machine qemu-162-instance-0000008a.
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.772 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa707165-e8ef-4627-8a2e-3390a5541169]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:33:46 np0005481065 NetworkManager[44960]: <info>  [1760175226.7927] device (tapac79972f-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:33:46 np0005481065 NetworkManager[44960]: <info>  [1760175226.7940] device (tapac79972f-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.817 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc1823d-08df-4631-91c8-a641ee94da4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.822 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dd88bc-65a8-480a-b2f1-dad1ea32c15a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.872 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ca17e8-5191-45e7-878e-1aa9d14e7e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bbba0ac3-5719-4e74-a1ee-0502b82c1fbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 31343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413269, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.915 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7407fb28-231d-43ce-848c-7ef16aa5bba1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702218, 'tstamp': 702218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413270, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702223, 'tstamp': 702223}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413270, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.918 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:46 np0005481065 nova_compute[260935]: 2025-10-11 09:33:46.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.923 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84203be9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.924 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.925 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84203be9-d0, col_values=(('external_ids', {'iface-id': '8bbc5aca-a0c9-493f-8847-60a74f11dbe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.925 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:33:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.606 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175227.6059654, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.607 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Started (Lifecycle Event)#033[00m
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.700 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.706 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175227.6061015, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.706 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:33:47 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e3b25e92-f8bf-44bd-8957-c5e84ab1974c does not exist
Oct 11 05:33:47 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2f880e37-d251-469d-9a27-c5bd863401a1 does not exist
Oct 11 05:33:47 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a655fc70-bd5d-4a3b-946f-8e331a10079f does not exist
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:33:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.944 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:33:47 np0005481065 nova_compute[260935]: 2025-10-11 09:33:47.949 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.052 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.116 2 DEBUG nova.compute.manager [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.116 2 DEBUG oslo_concurrency.lockutils [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.116 2 DEBUG oslo_concurrency.lockutils [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.117 2 DEBUG oslo_concurrency.lockutils [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.117 2 DEBUG nova.compute.manager [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Processing event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.118 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.122 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175228.121264, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.122 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.129 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.132 2 INFO nova.virt.libvirt.driver [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance spawned successfully.#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.133 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.184 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.190 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.194 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.194 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.195 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.195 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.196 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.196 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:33:48 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:33:48 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:33:48 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.325 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.410 2 INFO nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.411 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:48.462 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:33:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:48.463 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.547 2 INFO nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 11.08 seconds to build instance.#033[00m
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.651007456 +0000 UTC m=+0.057455572 container create f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.671 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.700 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:48 np0005481065 systemd[1]: Started libpod-conmon-f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975.scope.
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.629582841 +0000 UTC m=+0.036030957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.740 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.740 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.740 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.763521591 +0000 UTC m=+0.169969757 container init f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.771788335 +0000 UTC m=+0.178236411 container start f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:33:48 np0005481065 peaceful_diffie[413597]: 167 167
Oct 11 05:33:48 np0005481065 systemd[1]: libpod-f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975.scope: Deactivated successfully.
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.778836664 +0000 UTC m=+0.185284830 container attach f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.779194124 +0000 UTC m=+0.185642240 container died f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:33:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2fd0c637f32fd7ee53cd8e61b2db34d80c6a3fb061ccafa51067e560b839926b-merged.mount: Deactivated successfully.
Oct 11 05:33:48 np0005481065 podman[413581]: 2025-10-11 09:33:48.823460313 +0000 UTC m=+0.229908389 container remove f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 05:33:48 np0005481065 systemd[1]: libpod-conmon-f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975.scope: Deactivated successfully.
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.863 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:48 np0005481065 nova_compute[260935]: 2025-10-11 09:33:48.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:49 np0005481065 podman[413620]: 2025-10-11 09:33:49.062749706 +0000 UTC m=+0.057391811 container create da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 05:33:49 np0005481065 systemd[1]: Started libpod-conmon-da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853.scope.
Oct 11 05:33:49 np0005481065 podman[413620]: 2025-10-11 09:33:49.03102236 +0000 UTC m=+0.025664495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:33:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:33:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:33:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:49 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:49 np0005481065 podman[413620]: 2025-10-11 09:33:49.180752316 +0000 UTC m=+0.175394481 container init da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:33:49 np0005481065 podman[413620]: 2025-10-11 09:33:49.188931717 +0000 UTC m=+0.183573832 container start da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:33:49 np0005481065 podman[413620]: 2025-10-11 09:33:49.192744754 +0000 UTC m=+0.187386919 container attach da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:33:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:33:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635443784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.347 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.622 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.622 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.639 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.640 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.819 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.820 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2551MB free_disk=59.76419448852539GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.821 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:49 np0005481065 nova_compute[260935]: 2025-10-11 09:33:49.821 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 5caa1430-1bd3-4b60-83f0-908b98b891cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.173 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.263 2 DEBUG nova.compute.manager [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG oslo_concurrency.lockutils [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG oslo_concurrency.lockutils [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG oslo_concurrency.lockutils [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG nova.compute.manager [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] No waiting events found dispatching network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.264 2 WARNING nova.compute.manager [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received unexpected event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:33:50 np0005481065 jolly_swartz[413655]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:33:50 np0005481065 jolly_swartz[413655]: --> relative data size: 1.0
Oct 11 05:33:50 np0005481065 jolly_swartz[413655]: --> All data devices are unavailable
Oct 11 05:33:50 np0005481065 systemd[1]: libpod-da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853.scope: Deactivated successfully.
Oct 11 05:33:50 np0005481065 podman[413705]: 2025-10-11 09:33:50.356701492 +0000 UTC m=+0.025250793 container died da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:33:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d-merged.mount: Deactivated successfully.
Oct 11 05:33:50 np0005481065 podman[413705]: 2025-10-11 09:33:50.405232762 +0000 UTC m=+0.073782013 container remove da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:33:50 np0005481065 systemd[1]: libpod-conmon-da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853.scope: Deactivated successfully.
Oct 11 05:33:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:33:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647981956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.631 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.644 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.683 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.749 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:33:50 np0005481065 nova_compute[260935]: 2025-10-11 09:33:50.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:33:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.257586946 +0000 UTC m=+0.058635266 container create 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:33:51 np0005481065 systemd[1]: Started libpod-conmon-486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5.scope.
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.236017157 +0000 UTC m=+0.037065487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:33:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.354020688 +0000 UTC m=+0.155069088 container init 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.368579468 +0000 UTC m=+0.169627788 container start 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.372210351 +0000 UTC m=+0.173258671 container attach 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:33:51 np0005481065 zealous_heisenberg[413881]: 167 167
Oct 11 05:33:51 np0005481065 systemd[1]: libpod-486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5.scope: Deactivated successfully.
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.3767874 +0000 UTC m=+0.177835740 container died 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:33:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3f07093acff3f955691c73868d7ee0bbd38dbaf2f9fe9d9b5936fa6ec53e9f9d-merged.mount: Deactivated successfully.
Oct 11 05:33:51 np0005481065 podman[413865]: 2025-10-11 09:33:51.432237715 +0000 UTC m=+0.233286065 container remove 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:33:51 np0005481065 systemd[1]: libpod-conmon-486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5.scope: Deactivated successfully.
Oct 11 05:33:51 np0005481065 podman[413905]: 2025-10-11 09:33:51.658713155 +0000 UTC m=+0.039940648 container create a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:33:51 np0005481065 systemd[1]: Started libpod-conmon-a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d.scope.
Oct 11 05:33:51 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:33:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:51 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:51 np0005481065 podman[413905]: 2025-10-11 09:33:51.6429231 +0000 UTC m=+0.024150613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:33:51 np0005481065 podman[413905]: 2025-10-11 09:33:51.749032414 +0000 UTC m=+0.130259927 container init a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:33:51 np0005481065 podman[413905]: 2025-10-11 09:33:51.755755634 +0000 UTC m=+0.136983157 container start a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:33:51 np0005481065 podman[413905]: 2025-10-11 09:33:51.760113547 +0000 UTC m=+0.141341060 container attach a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:33:52 np0005481065 angry_wright[413922]: {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:    "0": [
Oct 11 05:33:52 np0005481065 angry_wright[413922]:        {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "devices": [
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "/dev/loop3"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            ],
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_name": "ceph_lv0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_size": "21470642176",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "name": "ceph_lv0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "tags": {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cluster_name": "ceph",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.crush_device_class": "",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.encrypted": "0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osd_id": "0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.type": "block",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.vdo": "0"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            },
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "type": "block",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "vg_name": "ceph_vg0"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:        }
Oct 11 05:33:52 np0005481065 angry_wright[413922]:    ],
Oct 11 05:33:52 np0005481065 angry_wright[413922]:    "1": [
Oct 11 05:33:52 np0005481065 angry_wright[413922]:        {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "devices": [
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "/dev/loop4"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            ],
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_name": "ceph_lv1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_size": "21470642176",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "name": "ceph_lv1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "tags": {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cluster_name": "ceph",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.crush_device_class": "",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.encrypted": "0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osd_id": "1",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.type": "block",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.vdo": "0"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            },
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "type": "block",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "vg_name": "ceph_vg1"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:        }
Oct 11 05:33:52 np0005481065 angry_wright[413922]:    ],
Oct 11 05:33:52 np0005481065 angry_wright[413922]:    "2": [
Oct 11 05:33:52 np0005481065 angry_wright[413922]:        {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "devices": [
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "/dev/loop5"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            ],
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_name": "ceph_lv2",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_size": "21470642176",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "name": "ceph_lv2",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "tags": {
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.cluster_name": "ceph",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.crush_device_class": "",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.encrypted": "0",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osd_id": "2",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.type": "block",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:                "ceph.vdo": "0"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            },
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "type": "block",
Oct 11 05:33:52 np0005481065 angry_wright[413922]:            "vg_name": "ceph_vg2"
Oct 11 05:33:52 np0005481065 angry_wright[413922]:        }
Oct 11 05:33:52 np0005481065 angry_wright[413922]:    ]
Oct 11 05:33:52 np0005481065 angry_wright[413922]: }
Oct 11 05:33:52 np0005481065 systemd[1]: libpod-a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d.scope: Deactivated successfully.
Oct 11 05:33:52 np0005481065 podman[413905]: 2025-10-11 09:33:52.674399339 +0000 UTC m=+1.055626872 container died a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:33:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587-merged.mount: Deactivated successfully.
Oct 11 05:33:52 np0005481065 nova_compute[260935]: 2025-10-11 09:33:52.718 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:52 np0005481065 podman[413905]: 2025-10-11 09:33:52.760237341 +0000 UTC m=+1.141464854 container remove a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:33:52 np0005481065 nova_compute[260935]: 2025-10-11 09:33:52.790 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:52 np0005481065 systemd[1]: libpod-conmon-a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d.scope: Deactivated successfully.
Oct 11 05:33:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.657661877 +0000 UTC m=+0.061564838 container create 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:33:53 np0005481065 systemd[1]: Started libpod-conmon-501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4.scope.
Oct 11 05:33:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.633769923 +0000 UTC m=+0.037672914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.736236875 +0000 UTC m=+0.140139836 container init 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.745716022 +0000 UTC m=+0.149618983 container start 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.748779029 +0000 UTC m=+0.152682020 container attach 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:33:53 np0005481065 zealous_brattain[414101]: 167 167
Oct 11 05:33:53 np0005481065 systemd[1]: libpod-501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4.scope: Deactivated successfully.
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.749861899 +0000 UTC m=+0.153764860 container died 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:33:53 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3f4551ee4790b923b65473b028a456d71a363ba5b947a39dcbfc3dfdc6a9e3f9-merged.mount: Deactivated successfully.
Oct 11 05:33:53 np0005481065 podman[414085]: 2025-10-11 09:33:53.803989057 +0000 UTC m=+0.207892018 container remove 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:33:53 np0005481065 systemd[1]: libpod-conmon-501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4.scope: Deactivated successfully.
Oct 11 05:33:53 np0005481065 nova_compute[260935]: 2025-10-11 09:33:53.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:54 np0005481065 podman[414125]: 2025-10-11 09:33:54.114425398 +0000 UTC m=+0.074418731 container create 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:33:54 np0005481065 systemd[1]: Started libpod-conmon-2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288.scope.
Oct 11 05:33:54 np0005481065 podman[414125]: 2025-10-11 09:33:54.085223464 +0000 UTC m=+0.045216857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:33:54 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:33:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:54 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:33:54 np0005481065 podman[414125]: 2025-10-11 09:33:54.22398982 +0000 UTC m=+0.183983213 container init 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:33:54 np0005481065 podman[414125]: 2025-10-11 09:33:54.233298593 +0000 UTC m=+0.193291906 container start 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:33:54 np0005481065 podman[414125]: 2025-10-11 09:33:54.23711367 +0000 UTC m=+0.197107083 container attach 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:33:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:33:54 np0005481065 nova_compute[260935]: 2025-10-11 09:33:54.804 2 DEBUG nova.compute.manager [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:33:54 np0005481065 nova_compute[260935]: 2025-10-11 09:33:54.804 2 DEBUG nova.compute.manager [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing instance network info cache due to event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:33:54 np0005481065 nova_compute[260935]: 2025-10-11 09:33:54.805 2 DEBUG oslo_concurrency.lockutils [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:33:54 np0005481065 nova_compute[260935]: 2025-10-11 09:33:54.805 2 DEBUG oslo_concurrency.lockutils [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:33:54 np0005481065 nova_compute[260935]: 2025-10-11 09:33:54.805 2 DEBUG nova.network.neutron [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:33:54
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', '.rgw.root', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Oct 11 05:33:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:33:55 np0005481065 nova_compute[260935]: 2025-10-11 09:33:55.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]: {
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "osd_id": 2,
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "type": "bluestore"
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:    },
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "osd_id": 0,
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "type": "bluestore"
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:    },
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "osd_id": 1,
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:        "type": "bluestore"
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]:    }
Oct 11 05:33:55 np0005481065 dazzling_yalow[414142]: }
Oct 11 05:33:55 np0005481065 systemd[1]: libpod-2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288.scope: Deactivated successfully.
Oct 11 05:33:55 np0005481065 podman[414125]: 2025-10-11 09:33:55.138605021 +0000 UTC m=+1.098598324 container died 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:33:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a-merged.mount: Deactivated successfully.
Oct 11 05:33:55 np0005481065 podman[414125]: 2025-10-11 09:33:55.19139789 +0000 UTC m=+1.151391183 container remove 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:33:55 np0005481065 systemd[1]: libpod-conmon-2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288.scope: Deactivated successfully.
Oct 11 05:33:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:33:55 np0005481065 podman[414176]: 2025-10-11 09:33:55.24206509 +0000 UTC m=+0.075886523 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:33:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:33:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:33:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fc3e59f8-3622-44bd-aa78-11a788fe0191 does not exist
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 23657c31-57e6-4ee6-9ece-c6d93a17b85f does not exist
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:33:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:33:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:33:56 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:33:56 np0005481065 nova_compute[260935]: 2025-10-11 09:33:56.605 2 DEBUG nova.network.neutron [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updated VIF entry in instance network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:33:56 np0005481065 nova_compute[260935]: 2025-10-11 09:33:56.606 2 DEBUG nova.network.neutron [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:33:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:33:57 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:33:57.465 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:33:57 np0005481065 nova_compute[260935]: 2025-10-11 09:33:57.901 2 DEBUG oslo_concurrency.lockutils [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:33:58 np0005481065 nova_compute[260935]: 2025-10-11 09:33:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:33:58 np0005481065 nova_compute[260935]: 2025-10-11 09:33:58.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:33:58 np0005481065 nova_compute[260935]: 2025-10-11 09:33:58.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:33:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 11 05:33:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:00 np0005481065 nova_compute[260935]: 2025-10-11 09:34:00.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:00Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:1b:5f 10.100.0.10
Oct 11 05:34:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:00Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:1b:5f 10.100.0.10
Oct 11 05:34:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 65 op/s
Oct 11 05:34:02 np0005481065 nova_compute[260935]: 2025-10-11 09:34:02.178 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:02 np0005481065 podman[414255]: 2025-10-11 09:34:02.791676474 +0000 UTC m=+0.084758723 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:34:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 11 05:34:03 np0005481065 nova_compute[260935]: 2025-10-11 09:34:03.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:05 np0005481065 nova_compute[260935]: 2025-10-11 09:34:05.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004143761293709709 of space, bias 1.0, pg target 1.2431283881129125 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:34:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:34:06 np0005481065 podman[414277]: 2025-10-11 09:34:06.975257358 +0000 UTC m=+0.090501885 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 11 05:34:07 np0005481065 podman[414278]: 2025-10-11 09:34:07.058789596 +0000 UTC m=+0.161365675 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Oct 11 05:34:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:34:08 np0005481065 nova_compute[260935]: 2025-10-11 09:34:08.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 05:34:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:09 np0005481065 nova_compute[260935]: 2025-10-11 09:34:09.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:10 np0005481065 nova_compute[260935]: 2025-10-11 09:34:10.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:34:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:34:13 np0005481065 nova_compute[260935]: 2025-10-11 09:34:13.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.729 2 DEBUG nova.compute.manager [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.730 2 DEBUG nova.compute.manager [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing instance network info cache due to event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.730 2 DEBUG oslo_concurrency.lockutils [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.731 2 DEBUG oslo_concurrency.lockutils [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.731 2 DEBUG nova.network.neutron [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.879 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.880 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.880 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.880 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.881 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.883 2 INFO nova.compute.manager [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Terminating instance#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.887 2 DEBUG nova.compute.manager [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:34:14 np0005481065 kernel: tapac79972f-49 (unregistering): left promiscuous mode
Oct 11 05:34:14 np0005481065 NetworkManager[44960]: <info>  [1760175254.9530] device (tapac79972f-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:34:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:14Z|01547|binding|INFO|Releasing lport ac79972f-498e-46be-b858-6c67d0f29f93 from this chassis (sb_readonly=0)
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:14Z|01548|binding|INFO|Setting lport ac79972f-498e-46be-b858-6c67d0f29f93 down in Southbound
Oct 11 05:34:14 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:14Z|01549|binding|INFO|Removing iface tapac79972f-49 ovn-installed in OVS
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:14 np0005481065 nova_compute[260935]: 2025-10-11 09:34:14.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.007 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], port_security=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe0d:1b5f/64', 'neutron:device_id': '5caa1430-1bd3-4b60-83f0-908b98b891cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac79972f-498e-46be-b858-6c67d0f29f93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.008 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac79972f-498e-46be-b858-6c67d0f29f93 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 unbound from our chassis#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.010 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84203be9-d0b1-4633-bcd6-51523ee507a5#033[00m
Oct 11 05:34:15 np0005481065 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 11 05:34:15 np0005481065 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d0000008a.scope: Consumed 13.405s CPU time.
Oct 11 05:34:15 np0005481065 systemd-machined[215705]: Machine qemu-162-instance-0000008a terminated.
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5733b19f-7c09-4722-9100-a2763df045e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.064 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[70d061a2-967a-48ca-bb52-e7fa2205bd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.069 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[582f876e-b66e-48e3-8c1e-8da2ed3b29d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.097 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2655f18b-ecb8-43f7-bee2-9273318eec58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[070111d1-444b-4f78-bb3e-b2e83d1c80dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 31343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414334, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.134 2 INFO nova.virt.libvirt.driver [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance destroyed successfully.#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.134 2 DEBUG nova.objects.instance [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 5caa1430-1bd3-4b60-83f0-908b98b891cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfd8b0c-8c98-45df-b5dd-4b6223add90e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702218, 'tstamp': 702218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414342, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702223, 'tstamp': 702223}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414342, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.143 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.151 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84203be9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.152 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84203be9-d0, col_values=(('external_ids', {'iface-id': '8bbc5aca-a0c9-493f-8847-60a74f11dbe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.204 2 DEBUG nova.virt.libvirt.vif [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:33:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1406255321',display_name='tempest-TestGettingAddress-server-1406255321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1406255321',id=138,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:33:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-p3ajjp08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:33:48Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=5caa1430-1bd3-4b60-83f0-908b98b891cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.204 2 DEBUG nova.network.os_vif_util [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.205 2 DEBUG nova.network.os_vif_util [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.205 2 DEBUG os_vif [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac79972f-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.214 2 INFO os_vif [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49')#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.229 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.230 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.231 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.337 2 DEBUG nova.compute.manager [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-unplugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.338 2 DEBUG oslo_concurrency.lockutils [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.338 2 DEBUG oslo_concurrency.lockutils [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.339 2 DEBUG oslo_concurrency.lockutils [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.339 2 DEBUG nova.compute.manager [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] No waiting events found dispatching network-vif-unplugged-ac79972f-498e-46be-b858-6c67d0f29f93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.340 2 DEBUG nova.compute.manager [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-unplugged-ac79972f-498e-46be-b858-6c67d0f29f93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.638 2 INFO nova.virt.libvirt.driver [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deleting instance files /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf_del#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.639 2 INFO nova.virt.libvirt.driver [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deletion of /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf_del complete#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.786 2 INFO nova.compute.manager [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.786 2 DEBUG oslo.service.loopingcall [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.787 2 DEBUG nova.compute.manager [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:34:15 np0005481065 nova_compute[260935]: 2025-10-11 09:34:15.787 2 DEBUG nova.network.neutron [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:34:16 np0005481065 nova_compute[260935]: 2025-10-11 09:34:16.716 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:16 np0005481065 nova_compute[260935]: 2025-10-11 09:34:16.716 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:34:16 np0005481065 nova_compute[260935]: 2025-10-11 09:34:16.753 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:34:16 np0005481065 nova_compute[260935]: 2025-10-11 09:34:16.963 2 DEBUG nova.network.neutron [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updated VIF entry in instance network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:34:16 np0005481065 nova_compute[260935]: 2025-10-11 09:34:16.964 2 DEBUG nova.network.neutron [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.134 2 DEBUG oslo_concurrency.lockutils [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:34:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.435 2 DEBUG nova.compute.manager [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.435 2 DEBUG oslo_concurrency.lockutils [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.436 2 DEBUG oslo_concurrency.lockutils [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.436 2 DEBUG oslo_concurrency.lockutils [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.436 2 DEBUG nova.compute.manager [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] No waiting events found dispatching network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.437 2 WARNING nova.compute.manager [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received unexpected event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.526 2 DEBUG nova.network.neutron [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.546 2 INFO nova.compute.manager [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 1.76 seconds to deallocate network for instance.#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.590 2 DEBUG nova.compute.manager [req-ada5ff7c-f96e-4a16-ab8e-48e27b12c810 req-c8a10e74-8c1f-4069-9431-9f8e1d60158f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-deleted-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.618 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.618 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:17 np0005481065 nova_compute[260935]: 2025-10-11 09:34:17.741 2 DEBUG oslo_concurrency.processutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:34:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794202127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.255 2 DEBUG oslo_concurrency.processutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.262 2 DEBUG nova.compute.provider_tree [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.332 2 DEBUG nova.scheduler.client.report [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.476 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.646 2 INFO nova.scheduler.client.report [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 5caa1430-1bd3-4b60-83f0-908b98b891cf#033[00m
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.727 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:18 np0005481065 nova_compute[260935]: 2025-10-11 09:34:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 05:34:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:20 np0005481065 nova_compute[260935]: 2025-10-11 09:34:20.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 29 op/s
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG nova.compute.manager [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG nova.compute.manager [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing instance network info cache due to event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG oslo_concurrency.lockutils [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG oslo_concurrency.lockutils [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG nova.network.neutron [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.439 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.439 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.439 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.440 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.440 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.441 2 INFO nova.compute.manager [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Terminating instance#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.442 2 DEBUG nova.compute.manager [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:34:21 np0005481065 kernel: tapfca94e7c-1f (unregistering): left promiscuous mode
Oct 11 05:34:21 np0005481065 NetworkManager[44960]: <info>  [1760175261.5099] device (tapfca94e7c-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:34:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:21Z|01550|binding|INFO|Releasing lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 from this chassis (sb_readonly=0)
Oct 11 05:34:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:21Z|01551|binding|INFO|Setting lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 down in Southbound
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:21Z|01552|binding|INFO|Removing iface tapfca94e7c-1f ovn-installed in OVS
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.537 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], port_security=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe9b:effb/64', 'neutron:device_id': 'e7751d7b-3ef7-4b84-a579-89d4b72f6cf4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.540 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 unbound from our chassis#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.544 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84203be9-d0b1-4633-bcd6-51523ee507a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.546 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfb7e36-4354-4b3a-a06e-64ef66ecb107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.547 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 namespace which is not needed anymore#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 11 05:34:21 np0005481065 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000089.scope: Consumed 16.625s CPU time.
Oct 11 05:34:21 np0005481065 systemd-machined[215705]: Machine qemu-161-instance-00000089 terminated.
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.697 2 INFO nova.virt.libvirt.driver [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance destroyed successfully.#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.698 2 DEBUG nova.objects.instance [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.715 2 DEBUG nova.virt.libvirt.vif [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:32:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1457402262',display_name='tempest-TestGettingAddress-server-1457402262',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1457402262',id=137,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:32:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-axv24jbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:32:58Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=e7751d7b-3ef7-4b84-a579-89d4b72f6cf4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.715 2 DEBUG nova.network.os_vif_util [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.717 2 DEBUG nova.network.os_vif_util [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.717 2 DEBUG os_vif [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfca94e7c-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.736 2 INFO os_vif [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f')#033[00m
Oct 11 05:34:21 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : haproxy version is 2.8.14-c23fe91
Oct 11 05:34:21 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : path to executable is /usr/sbin/haproxy
Oct 11 05:34:21 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [WARNING]  (412764) : Exiting Master process...
Oct 11 05:34:21 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [ALERT]    (412764) : Current worker (412766) exited with code 143 (Terminated)
Oct 11 05:34:21 np0005481065 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [WARNING]  (412764) : All workers exited. Exiting... (0)
Oct 11 05:34:21 np0005481065 systemd[1]: libpod-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793.scope: Deactivated successfully.
Oct 11 05:34:21 np0005481065 podman[414419]: 2025-10-11 09:34:21.787123995 +0000 UTC m=+0.077515508 container died 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:34:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793-userdata-shm.mount: Deactivated successfully.
Oct 11 05:34:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b192159424a33d91dd7687b52effbbc7201b23f7d99b918f0ad704b8e1f4e2c9-merged.mount: Deactivated successfully.
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.839 2 DEBUG nova.compute.manager [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-unplugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.840 2 DEBUG oslo_concurrency.lockutils [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.840 2 DEBUG oslo_concurrency.lockutils [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.840 2 DEBUG oslo_concurrency.lockutils [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.841 2 DEBUG nova.compute.manager [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] No waiting events found dispatching network-vif-unplugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.841 2 DEBUG nova.compute.manager [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-unplugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:34:21 np0005481065 podman[414419]: 2025-10-11 09:34:21.848354813 +0000 UTC m=+0.138746366 container cleanup 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:34:21 np0005481065 systemd[1]: libpod-conmon-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793.scope: Deactivated successfully.
Oct 11 05:34:21 np0005481065 podman[414472]: 2025-10-11 09:34:21.951741341 +0000 UTC m=+0.072717913 container remove 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.958 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f52183e-69ea-4e2f-93fa-9bf010caa6f0]: (4, ('Sat Oct 11 09:34:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 (328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793)\n328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793\nSat Oct 11 09:34:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 (328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793)\n328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[201ed882-171c-4cac-af88-634b4849beb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.961 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:21 np0005481065 kernel: tap84203be9-d0: left promiscuous mode
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 nova_compute[260935]: 2025-10-11 09:34:21.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:21 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.984 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6a9dba-f783-450a-bb43-1c5277eb0a8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.007 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0bab8b-8111-40a2-8753-3f8aef8ff011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.007 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cff2442d-3200-4e56-a488-3b3c2c042021]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfc2a0b-4479-456e-987a-68c336de24bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702190, 'reachable_time': 27474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414488, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.034 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:34:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.034 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c13fef-96fd-4b29-b6ba-96f3b368f829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:22 np0005481065 systemd[1]: run-netns-ovnmeta\x2d84203be9\x2dd0b1\x2d4633\x2dbcd6\x2d51523ee507a5.mount: Deactivated successfully.
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.138 2 INFO nova.virt.libvirt.driver [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deleting instance files /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_del#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.139 2 INFO nova.virt.libvirt.driver [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deletion of /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_del complete#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.189 2 INFO nova.compute.manager [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.190 2 DEBUG oslo.service.loopingcall [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.190 2 DEBUG nova.compute.manager [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.191 2 DEBUG nova.network.neutron [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.766 2 DEBUG nova.network.neutron [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updated VIF entry in instance network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.767 2 DEBUG nova.network.neutron [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.788 2 DEBUG oslo_concurrency.lockutils [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.834 2 DEBUG nova.network.neutron [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.849 2 INFO nova.compute.manager [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 0.66 seconds to deallocate network for instance.#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.888 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.889 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:22 np0005481065 nova_compute[260935]: 2025-10-11 09:34:22.999 2 DEBUG oslo_concurrency.processutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.423 2 DEBUG nova.compute.manager [req-f0b8aef2-8f21-4b60-9fec-1a68a55c1afb req-1a8d7cec-ad79-43d6-bf27-5e20c940b515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-deleted-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:34:23 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395003438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.462 2 DEBUG oslo_concurrency.processutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.467 2 DEBUG nova.compute.provider_tree [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.483 2 DEBUG nova.scheduler.client.report [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.502 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.524 2 INFO nova.scheduler.client.report [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance e7751d7b-3ef7-4b84-a579-89d4b72f6cf4#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.578 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.966 2 DEBUG nova.compute.manager [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.967 2 DEBUG oslo_concurrency.lockutils [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.967 2 DEBUG oslo_concurrency.lockutils [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.968 2 DEBUG oslo_concurrency.lockutils [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.968 2 DEBUG nova.compute.manager [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] No waiting events found dispatching network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:34:23 np0005481065 nova_compute[260935]: 2025-10-11 09:34:23.969 2 WARNING nova.compute.manager [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received unexpected event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:34:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:34:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:34:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 05:34:25 np0005481065 nova_compute[260935]: 2025-10-11 09:34:25.739 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:25 np0005481065 podman[414513]: 2025-10-11 09:34:25.789551795 +0000 UTC m=+0.073272199 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:34:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:34:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3479799431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:34:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:34:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3479799431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:34:26 np0005481065 nova_compute[260935]: 2025-10-11 09:34:26.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:27Z|01553|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:34:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:27Z|01554|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:34:27 np0005481065 nova_compute[260935]: 2025-10-11 09:34:27.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 05:34:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:27Z|01555|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:34:27 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:27Z|01556|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:34:27 np0005481065 nova_compute[260935]: 2025-10-11 09:34:27.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:28 np0005481065 nova_compute[260935]: 2025-10-11 09:34:28.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 05:34:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:30 np0005481065 nova_compute[260935]: 2025-10-11 09:34:30.131 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175255.1291778, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:34:30 np0005481065 nova_compute[260935]: 2025-10-11 09:34:30.132 2 INFO nova.compute.manager [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:34:30 np0005481065 nova_compute[260935]: 2025-10-11 09:34:30.152 2 DEBUG nova.compute.manager [None req-df7224fd-343e-40a1-9727-812397c2ff45 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:34:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:34:31 np0005481065 nova_compute[260935]: 2025-10-11 09:34:31.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:34:33 np0005481065 podman[414534]: 2025-10-11 09:34:33.793067948 +0000 UTC m=+0.080002338 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:34:33 np0005481065 nova_compute[260935]: 2025-10-11 09:34:33.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:36 np0005481065 nova_compute[260935]: 2025-10-11 09:34:36.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175261.6879365, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:34:36 np0005481065 nova_compute[260935]: 2025-10-11 09:34:36.692 2 INFO nova.compute.manager [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:34:36 np0005481065 nova_compute[260935]: 2025-10-11 09:34:36.724 2 DEBUG nova.compute.manager [None req-a6ef93fe-ac82-46a1-a32a-4f7bdf3011d5 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:34:36 np0005481065 nova_compute[260935]: 2025-10-11 09:34:36.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:37 np0005481065 podman[414554]: 2025-10-11 09:34:37.790683244 +0000 UTC m=+0.102889724 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:34:37 np0005481065 podman[414555]: 2025-10-11 09:34:37.812113849 +0000 UTC m=+0.118749962 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller)
Oct 11 05:34:38 np0005481065 nova_compute[260935]: 2025-10-11 09:34:38.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:41 np0005481065 nova_compute[260935]: 2025-10-11 09:34:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:43 np0005481065 nova_compute[260935]: 2025-10-11 09:34:43.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:45 np0005481065 nova_compute[260935]: 2025-10-11 09:34:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:45 np0005481065 nova_compute[260935]: 2025-10-11 09:34:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:46 np0005481065 nova_compute[260935]: 2025-10-11 09:34:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:46 np0005481065 nova_compute[260935]: 2025-10-11 09:34:46.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:34:46 np0005481065 nova_compute[260935]: 2025-10-11 09:34:46.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:47 np0005481065 nova_compute[260935]: 2025-10-11 09:34:47.559 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:34:47 np0005481065 nova_compute[260935]: 2025-10-11 09:34:47.559 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:34:47 np0005481065 nova_compute[260935]: 2025-10-11 09:34:47.560 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.722 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.743 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.743 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.743 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.744 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.744 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.779 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.780 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.780 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.781 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.824 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.825 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.937 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:34:48 np0005481065 nova_compute[260935]: 2025-10-11 09:34:48.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.062 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.062 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.070 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.071 2 INFO nova.compute.claims [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:34:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:34:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575484801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.210 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.289 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.306 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.307 2 DEBUG nova.compute.provider_tree [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.323 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.337 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.343 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.352 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.352 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.429 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.701 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2898MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.703 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:34:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719642367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.937 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:49 np0005481065 nova_compute[260935]: 2025-10-11 09:34:49.944 2 DEBUG nova.compute.provider_tree [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.025 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.092 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.093 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.099 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.159 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.160 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.235 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.273 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.273 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.274 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.274 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a77fa566-1ab4-484c-b6ef-53471c9f91f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.274 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.275 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.304 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.374 2 DEBUG nova.policy [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.406 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.562 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.566 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.566 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Creating image(s)#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.624 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.664 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.698 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.703 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.801 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.802 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.803 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.803 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.830 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.836 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:34:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718831139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.919 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.927 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:34:50 np0005481065 nova_compute[260935]: 2025-10-11 09:34:50.977 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:34:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.212 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.213 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.215 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.265 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Successfully created port: 75ae5f39-4616-4581-8e22-411bee7c1747 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.324 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.454 2 DEBUG nova.objects.instance [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.530 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.531 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Ensure instance console log exists: /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.532 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.535 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.536 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:51 np0005481065 nova_compute[260935]: 2025-10-11 09:34:51.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:52 np0005481065 nova_compute[260935]: 2025-10-11 09:34:52.210 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:52 np0005481065 nova_compute[260935]: 2025-10-11 09:34:52.211 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:34:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:34:53 np0005481065 nova_compute[260935]: 2025-10-11 09:34:53.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.480302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294480389, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1768, "num_deletes": 250, "total_data_size": 2803516, "memory_usage": 2845480, "flush_reason": "Manual Compaction"}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294562274, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1594857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56885, "largest_seqno": 58652, "table_properties": {"data_size": 1589131, "index_size": 2800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15224, "raw_average_key_size": 20, "raw_value_size": 1576344, "raw_average_value_size": 2141, "num_data_blocks": 128, "num_entries": 736, "num_filter_entries": 736, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175105, "oldest_key_time": 1760175105, "file_creation_time": 1760175294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 82047 microseconds, and 9383 cpu microseconds.
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.562354) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1594857 bytes OK
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.562388) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.564056) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.564087) EVENT_LOG_v1 {"time_micros": 1760175294564077, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.564116) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2796002, prev total WAL file size 2796002, number of live WAL files 2.
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.565580) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1557KB)], [134(9889KB)]
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294565621, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11721902, "oldest_snapshot_seqno": -1}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7808 keys, 9590003 bytes, temperature: kUnknown
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294765507, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9590003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9540705, "index_size": 28705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 203111, "raw_average_key_size": 26, "raw_value_size": 9404124, "raw_average_value_size": 1204, "num_data_blocks": 1122, "num_entries": 7808, "num_filter_entries": 7808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.767152) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9590003 bytes
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.794901) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.6 rd, 47.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(13.4) write-amplify(6.0) OK, records in: 8231, records dropped: 423 output_compression: NoCompression
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.795124) EVENT_LOG_v1 {"time_micros": 1760175294795107, "job": 82, "event": "compaction_finished", "compaction_time_micros": 200007, "compaction_time_cpu_micros": 33782, "output_level": 6, "num_output_files": 1, "total_output_size": 9590003, "num_input_records": 8231, "num_output_records": 7808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294795792, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294799000, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.565527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:34:54
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Oct 11 05:34:54 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:34:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.726 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Successfully updated port: 75ae5f39-4616-4581-8e22-411bee7c1747 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.915 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.915 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.916 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.920 2 DEBUG nova.compute.manager [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.921 2 DEBUG nova.compute.manager [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing instance network info cache due to event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:34:55 np0005481065 nova_compute[260935]: 2025-10-11 09:34:55.921 2 DEBUG oslo_concurrency.lockutils [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:34:56 np0005481065 nova_compute[260935]: 2025-10-11 09:34:56.066 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:34:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:34:56 np0005481065 podman[414952]: 2025-10-11 09:34:56.121385175 +0000 UTC m=+0.082273923 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 05:34:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:34:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:34:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:34:56 np0005481065 nova_compute[260935]: 2025-10-11 09:34:56.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:34:57 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 28ec1567-f3a2-480f-be52-d55beb6f92e1 does not exist
Oct 11 05:34:57 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 96d67f8d-b9fa-4636-ba1c-2c9f0d443ff2 does not exist
Oct 11 05:34:57 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev da124377-ee81-465b-a69b-15090d2845ba does not exist
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.159124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297159161, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 299, "num_deletes": 251, "total_data_size": 89249, "memory_usage": 95072, "flush_reason": "Manual Compaction"}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297161667, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 88708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58653, "largest_seqno": 58951, "table_properties": {"data_size": 86732, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5190, "raw_average_key_size": 18, "raw_value_size": 82748, "raw_average_value_size": 297, "num_data_blocks": 8, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175295, "oldest_key_time": 1760175295, "file_creation_time": 1760175297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 2579 microseconds, and 788 cpu microseconds.
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.161701) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 88708 bytes OK
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.161718) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163017) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163034) EVENT_LOG_v1 {"time_micros": 1760175297163028, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 87073, prev total WAL file size 87073, number of live WAL files 2.
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(86KB)], [137(9365KB)]
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297163401, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 9678711, "oldest_snapshot_seqno": -1}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:34:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.188 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7573 keys, 7985086 bytes, temperature: kUnknown
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297213979, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 7985086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7938870, "index_size": 26199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 198898, "raw_average_key_size": 26, "raw_value_size": 7807777, "raw_average_value_size": 1031, "num_data_blocks": 1008, "num_entries": 7573, "num_filter_entries": 7573, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.214361) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 7985086 bytes
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.216086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.9 rd, 157.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.1 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(199.1) write-amplify(90.0) OK, records in: 8086, records dropped: 513 output_compression: NoCompression
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.216119) EVENT_LOG_v1 {"time_micros": 1760175297216101, "job": 84, "event": "compaction_finished", "compaction_time_micros": 50694, "compaction_time_cpu_micros": 25311, "output_level": 6, "num_output_files": 1, "total_output_size": 7985086, "num_input_records": 8086, "num_output_records": 7573, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297216335, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297218662, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.349 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.350 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance network_info: |[{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.351 2 DEBUG oslo_concurrency.lockutils [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.352 2 DEBUG nova.network.neutron [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.363 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start _get_guest_xml network_info=[{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.370 2 WARNING nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.377 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.379 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.384 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.384 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.385 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.386 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.386 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.387 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.387 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.388 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.388 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.388 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.389 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.389 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.390 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.390 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.395 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:34:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877447729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.902 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.942 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:57 np0005481065 nova_compute[260935]: 2025-10-11 09:34:57.953 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:58 np0005481065 podman[415290]: 2025-10-11 09:34:58.051536996 +0000 UTC m=+0.062014312 container create 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:34:58 np0005481065 systemd[1]: Started libpod-conmon-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope.
Oct 11 05:34:58 np0005481065 podman[415290]: 2025-10-11 09:34:58.021133818 +0000 UTC m=+0.031611214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:34:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:34:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:34:58 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:34:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:34:58 np0005481065 podman[415290]: 2025-10-11 09:34:58.169051562 +0000 UTC m=+0.179528968 container init 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:34:58 np0005481065 podman[415290]: 2025-10-11 09:34:58.185736743 +0000 UTC m=+0.196214049 container start 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:34:58 np0005481065 podman[415290]: 2025-10-11 09:34:58.188686256 +0000 UTC m=+0.199163642 container attach 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:34:58 np0005481065 systemd[1]: libpod-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope: Deactivated successfully.
Oct 11 05:34:58 np0005481065 funny_cray[415315]: 167 167
Oct 11 05:34:58 np0005481065 conmon[415315]: conmon 587f4fd5b29045fe9dae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope/container/memory.events
Oct 11 05:34:58 np0005481065 podman[415329]: 2025-10-11 09:34:58.266489472 +0000 UTC m=+0.044626521 container died 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:34:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d609ff9df10e8670b6a0eed585fc4f51579f664abed24a56f176789fc25a0918-merged.mount: Deactivated successfully.
Oct 11 05:34:58 np0005481065 podman[415329]: 2025-10-11 09:34:58.306531542 +0000 UTC m=+0.084668541 container remove 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:34:58 np0005481065 systemd[1]: libpod-conmon-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope: Deactivated successfully.
Oct 11 05:34:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:34:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232724278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.426 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.431 2 DEBUG nova.virt.libvirt.vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:34:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=139,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdfdb9R+zsDw6jenCtVzR++AIgDN5ehA4oPbzl8x/m8tfimOhF7LhQ9X4A3F+nSw3/eKnrfnA/yKanSmbwPXrSaCs1ORHouK4S19eKbXKNTi17aP7Q4amYlBjFlUm9aIA==',key_name='tempest-TestSecurityGroupsBasicOps-1403644569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8gi3qbv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:34:50Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=a77fa566-1ab4-484c-b6ef-53471c9f91f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.432 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.433 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.435 2 DEBUG nova.objects.instance [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.513 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <uuid>a77fa566-1ab4-484c-b6ef-53471c9f91f8</uuid>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <name>instance-0000008b</name>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521</nova:name>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:34:57</nova:creationTime>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <nova:port uuid="75ae5f39-4616-4581-8e22-411bee7c1747">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <entry name="serial">a77fa566-1ab4-484c-b6ef-53471c9f91f8</entry>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <entry name="uuid">a77fa566-1ab4-484c-b6ef-53471c9f91f8</entry>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:db:c5:0a"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <target dev="tap75ae5f39-46"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/console.log" append="off"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:34:58 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:34:58 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:34:58 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:34:58 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.513 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Preparing to wait for external event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.513 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.514 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.514 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.514 2 DEBUG nova.virt.libvirt.vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:34:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=139,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdfdb9R+zsDw6jenCtVzR++AIgDN5ehA4oPbzl8x/m8tfimOhF7LhQ9X4A3F+nSw3/eKnrfnA/yKanSmbwPXrSaCs1ORHouK4S19eKbXKNTi17aP7Q4amYlBjFlUm9aIA==',key_name='tempest-TestSecurityGroupsBasicOps-1403644569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8gi3qbv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:34:50Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=a77fa566-1ab4-484c-b6ef-53471c9f91f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.515 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.515 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.515 2 DEBUG os_vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75ae5f39-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75ae5f39-46, col_values=(('external_ids', {'iface-id': '75ae5f39-4616-4581-8e22-411bee7c1747', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:c5:0a', 'vm-uuid': 'a77fa566-1ab4-484c-b6ef-53471c9f91f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:58 np0005481065 NetworkManager[44960]: <info>  [1760175298.5244] manager: (tap75ae5f39-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.535 2 INFO os_vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46')#033[00m
Oct 11 05:34:58 np0005481065 podman[415352]: 2025-10-11 09:34:58.560929241 +0000 UTC m=+0.057524064 container create 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:34:58 np0005481065 systemd[1]: Started libpod-conmon-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope.
Oct 11 05:34:58 np0005481065 podman[415352]: 2025-10-11 09:34:58.53927574 +0000 UTC m=+0.035870603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:34:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:34:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:34:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:34:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:34:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:34:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:34:58 np0005481065 podman[415352]: 2025-10-11 09:34:58.664422682 +0000 UTC m=+0.161017535 container init 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.668 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.668 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.668 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:db:c5:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.669 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Using config drive#033[00m
Oct 11 05:34:58 np0005481065 podman[415352]: 2025-10-11 09:34:58.679565539 +0000 UTC m=+0.176160402 container start 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:34:58 np0005481065 podman[415352]: 2025-10-11 09:34:58.684118038 +0000 UTC m=+0.180712901 container attach 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.706 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.769 2 DEBUG nova.network.neutron [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updated VIF entry in instance network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.770 2 DEBUG nova.network.neutron [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.792 2 DEBUG oslo_concurrency.lockutils [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:34:58 np0005481065 nova_compute[260935]: 2025-10-11 09:34:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.103 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Creating config drive at /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config#033[00m
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.114 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2xt1qv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.284 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2xt1qv" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.315 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.320 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:34:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.507 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.509 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deleting local config drive /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config because it was imported into RBD.#033[00m
Oct 11 05:34:59 np0005481065 kernel: tap75ae5f39-46: entered promiscuous mode
Oct 11 05:34:59 np0005481065 NetworkManager[44960]: <info>  [1760175299.5749] manager: (tap75ae5f39-46): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:59Z|01557|binding|INFO|Claiming lport 75ae5f39-4616-4581-8e22-411bee7c1747 for this chassis.
Oct 11 05:34:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:59Z|01558|binding|INFO|75ae5f39-4616-4581-8e22-411bee7c1747: Claiming fa:16:3e:db:c5:0a 10.100.0.14
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.593 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:c5:0a 10.100.0.14'], port_security=['fa:16:3e:db:c5:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a77fa566-1ab4-484c-b6ef-53471c9f91f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c64e7e9-762c-40c5-8997-daeebe19175c c4774add-49ce-4194-9683-2bce3da69dec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2e43ada-251a-46ad-bf0a-b57c9e02b647, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=75ae5f39-4616-4581-8e22-411bee7c1747) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.594 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 75ae5f39-4616-4581-8e22-411bee7c1747 in datapath f0d9de45-f93f-45ef-aa2e-7cd54a90600b bound to our chassis#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.596 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0d9de45-f93f-45ef-aa2e-7cd54a90600b#033[00m
Oct 11 05:34:59 np0005481065 systemd-udevd[415465]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.613 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5a907f-5007-4918-adea-dd2c71f12e50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.614 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0d9de45-f1 in ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:34:59 np0005481065 systemd-machined[215705]: New machine qemu-163-instance-0000008b.
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.616 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0d9de45-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.616 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da27d94f-73ba-4572-8dcf-bc05d6c3298f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5ae0b6-7d9e-4639-9f99-8f23a28eecca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 NetworkManager[44960]: <info>  [1760175299.6222] device (tap75ae5f39-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:34:59 np0005481065 NetworkManager[44960]: <info>  [1760175299.6251] device (tap75ae5f39-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.632 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e26ff3d3-d449-4a95-9283-17cb30863f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.656 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c83aecc5-5c19-4de0-b0e5-c0aecef7af3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 systemd[1]: Started Virtual Machine qemu-163-instance-0000008b.
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:59Z|01559|binding|INFO|Setting lport 75ae5f39-4616-4581-8e22-411bee7c1747 ovn-installed in OVS
Oct 11 05:34:59 np0005481065 ovn_controller[152945]: 2025-10-11T09:34:59Z|01560|binding|INFO|Setting lport 75ae5f39-4616-4581-8e22-411bee7c1747 up in Southbound
Oct 11 05:34:59 np0005481065 nova_compute[260935]: 2025-10-11 09:34:59.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.693 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcba2b1-615c-4ac4-b3c7-a5f0f298b4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.698 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7240c16b-1cde-4864-8577-075abba58bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 systemd-udevd[415469]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:34:59 np0005481065 NetworkManager[44960]: <info>  [1760175299.7006] manager: (tapf0d9de45-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/598)
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.742 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e26833-d233-4806-bdb4-283d76841348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ef021c-a44d-486e-93ec-9b3703c866b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 intelligent_lewin[415372]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:34:59 np0005481065 intelligent_lewin[415372]: --> relative data size: 1.0
Oct 11 05:34:59 np0005481065 intelligent_lewin[415372]: --> All data devices are unavailable
Oct 11 05:34:59 np0005481065 NetworkManager[44960]: <info>  [1760175299.7824] device (tapf0d9de45-f0): carrier: link connected
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.791 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b985b2-6699-47a4-bbd8-5260626353cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 systemd[1]: libpod-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope: Deactivated successfully.
Oct 11 05:34:59 np0005481065 systemd[1]: libpod-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope: Consumed 1.055s CPU time.
Oct 11 05:34:59 np0005481065 podman[415352]: 2025-10-11 09:34:59.798600338 +0000 UTC m=+1.295195181 container died 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.821 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3cf028-5789-46dd-b591-16491b89d8e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0d9de45-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714448, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415505, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843-merged.mount: Deactivated successfully.
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a26042-a0ae-45ea-b8e9-8a053daa983d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:9a8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714448, 'tstamp': 714448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415513, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.869 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd13b368-eefb-45b6-bc27-3d595dd4ce7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0d9de45-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714448, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415519, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:34:59 np0005481065 podman[415352]: 2025-10-11 09:34:59.885710117 +0000 UTC m=+1.382304970 container remove 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:34:59 np0005481065 systemd[1]: libpod-conmon-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope: Deactivated successfully.
Oct 11 05:34:59 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b13623bf-1dc4-4752-8ed0-67046e138f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b404011b-5373-4773-a432-25c5f3fae272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.015 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0d9de45-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.016 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.018 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0d9de45-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:00 np0005481065 kernel: tapf0d9de45-f0: entered promiscuous mode
Oct 11 05:35:00 np0005481065 NetworkManager[44960]: <info>  [1760175300.0213] manager: (tapf0d9de45-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0d9de45-f0, col_values=(('external_ids', {'iface-id': '675f5b9f-9cb7-4132-af37-955cc69eba82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:00Z|01561|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.045 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.046 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[57516713-5a1b-4237-b214-5ef84a91c144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.047 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-f0d9de45-f93f-45ef-aa2e-7cd54a90600b
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.pid.haproxy
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID f0d9de45-f93f-45ef-aa2e-7cd54a90600b
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:35:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.049 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'env', 'PROCESS_TAG=haproxy-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.491 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175300.4897466, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.491 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Started (Lifecycle Event)#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.536 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:00 np0005481065 podman[415702]: 2025-10-11 09:35:00.445480354 +0000 UTC m=+0.040058032 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.542 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175300.4905643, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.543 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.584 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.589 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.644 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:35:00 np0005481065 podman[415702]: 2025-10-11 09:35:00.648667988 +0000 UTC m=+0.243245676 container create c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:35:00 np0005481065 systemd[1]: Started libpod-conmon-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37.scope.
Oct 11 05:35:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:35:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9779a9262745b18ce2d1c4f4694138a6bb395f088c211d26fdb2e153f47b00b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:00 np0005481065 podman[415702]: 2025-10-11 09:35:00.760103183 +0000 UTC m=+0.354680861 container init c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:35:00 np0005481065 podman[415702]: 2025-10-11 09:35:00.773613604 +0000 UTC m=+0.368191262 container start c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.802 2 DEBUG nova.compute.manager [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.803 2 DEBUG oslo_concurrency.lockutils [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.803 2 DEBUG oslo_concurrency.lockutils [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.803 2 DEBUG oslo_concurrency.lockutils [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.804 2 DEBUG nova.compute.manager [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Processing event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:35:00 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : New worker (415752) forked
Oct 11 05:35:00 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : Loading success.
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.805 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.811 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.812 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175300.812208, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.812 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.817 2 INFO nova.virt.libvirt.driver [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance spawned successfully.#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.817 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:35:00 np0005481065 podman[415760]: 2025-10-11 09:35:00.872250548 +0000 UTC m=+0.046696209 container create eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.892 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.901 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:35:00 np0005481065 systemd[1]: Started libpod-conmon-eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e.scope.
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.936 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.937 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.937 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.938 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.939 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.940 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:00 np0005481065 podman[415760]: 2025-10-11 09:35:00.851841102 +0000 UTC m=+0.026286773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:35:00 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:35:00 np0005481065 nova_compute[260935]: 2025-10-11 09:35:00.968 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:35:00 np0005481065 podman[415760]: 2025-10-11 09:35:00.982763346 +0000 UTC m=+0.157209017 container init eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:35:00 np0005481065 podman[415760]: 2025-10-11 09:35:00.991900144 +0000 UTC m=+0.166345805 container start eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:35:00 np0005481065 podman[415760]: 2025-10-11 09:35:00.99566606 +0000 UTC m=+0.170111741 container attach eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:35:00 np0005481065 strange_bose[415774]: 167 167
Oct 11 05:35:00 np0005481065 systemd[1]: libpod-eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e.scope: Deactivated successfully.
Oct 11 05:35:01 np0005481065 podman[415760]: 2025-10-11 09:35:01.000028844 +0000 UTC m=+0.174474515 container died eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:35:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6482c6a029adf5b69ec7e53358f2731c232ca04d8091d470ffcc292db3d134b8-merged.mount: Deactivated successfully.
Oct 11 05:35:01 np0005481065 podman[415760]: 2025-10-11 09:35:01.054444079 +0000 UTC m=+0.228889760 container remove eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:35:01 np0005481065 nova_compute[260935]: 2025-10-11 09:35:01.059 2 INFO nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 10.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:35:01 np0005481065 nova_compute[260935]: 2025-10-11 09:35:01.061 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:01 np0005481065 systemd[1]: libpod-conmon-eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e.scope: Deactivated successfully.
Oct 11 05:35:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:35:01 np0005481065 nova_compute[260935]: 2025-10-11 09:35:01.300 2 INFO nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 12.27 seconds to build instance.#033[00m
Oct 11 05:35:01 np0005481065 podman[415802]: 2025-10-11 09:35:01.303155868 +0000 UTC m=+0.060284442 container create 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:35:01 np0005481065 podman[415802]: 2025-10-11 09:35:01.272207045 +0000 UTC m=+0.029335709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:35:01 np0005481065 systemd[1]: Started libpod-conmon-0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60.scope.
Oct 11 05:35:01 np0005481065 nova_compute[260935]: 2025-10-11 09:35:01.388 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:35:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:01 np0005481065 podman[415802]: 2025-10-11 09:35:01.441595485 +0000 UTC m=+0.198724149 container init 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:35:01 np0005481065 podman[415802]: 2025-10-11 09:35:01.455975821 +0000 UTC m=+0.213104405 container start 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:35:01 np0005481065 podman[415802]: 2025-10-11 09:35:01.459794519 +0000 UTC m=+0.216923183 container attach 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]: {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:    "0": [
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:        {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "devices": [
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "/dev/loop3"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            ],
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_name": "ceph_lv0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_size": "21470642176",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "name": "ceph_lv0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "tags": {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cluster_name": "ceph",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.crush_device_class": "",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.encrypted": "0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osd_id": "0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.type": "block",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.vdo": "0"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            },
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "type": "block",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "vg_name": "ceph_vg0"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:        }
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:    ],
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:    "1": [
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:        {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "devices": [
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "/dev/loop4"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            ],
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_name": "ceph_lv1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_size": "21470642176",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "name": "ceph_lv1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "tags": {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cluster_name": "ceph",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.crush_device_class": "",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.encrypted": "0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osd_id": "1",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.type": "block",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.vdo": "0"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            },
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "type": "block",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "vg_name": "ceph_vg1"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:        }
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:    ],
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:    "2": [
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:        {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "devices": [
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "/dev/loop5"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            ],
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_name": "ceph_lv2",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_size": "21470642176",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "name": "ceph_lv2",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "tags": {
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.cluster_name": "ceph",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.crush_device_class": "",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.encrypted": "0",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osd_id": "2",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.type": "block",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:                "ceph.vdo": "0"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            },
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "type": "block",
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:            "vg_name": "ceph_vg2"
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:        }
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]:    ]
Oct 11 05:35:02 np0005481065 thirsty_shirley[415819]: }
Oct 11 05:35:02 np0005481065 systemd[1]: libpod-0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60.scope: Deactivated successfully.
Oct 11 05:35:02 np0005481065 podman[415802]: 2025-10-11 09:35:02.369848391 +0000 UTC m=+1.126976975 container died 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:35:02 np0005481065 systemd[1]: var-lib-containers-storage-overlay-00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c-merged.mount: Deactivated successfully.
Oct 11 05:35:02 np0005481065 podman[415802]: 2025-10-11 09:35:02.426419728 +0000 UTC m=+1.183548312 container remove 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:35:02 np0005481065 systemd[1]: libpod-conmon-0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60.scope: Deactivated successfully.
Oct 11 05:35:02 np0005481065 nova_compute[260935]: 2025-10-11 09:35:02.875 2 DEBUG nova.compute.manager [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:02 np0005481065 nova_compute[260935]: 2025-10-11 09:35:02.876 2 DEBUG oslo_concurrency.lockutils [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:02 np0005481065 nova_compute[260935]: 2025-10-11 09:35:02.876 2 DEBUG oslo_concurrency.lockutils [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:02 np0005481065 nova_compute[260935]: 2025-10-11 09:35:02.877 2 DEBUG oslo_concurrency.lockutils [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:02 np0005481065 nova_compute[260935]: 2025-10-11 09:35:02.877 2 DEBUG nova.compute.manager [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] No waiting events found dispatching network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:35:02 np0005481065 nova_compute[260935]: 2025-10-11 09:35:02.877 2 WARNING nova.compute.manager [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received unexpected event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.161204763 +0000 UTC m=+0.052638806 container create 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:35:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:35:03 np0005481065 systemd[1]: Started libpod-conmon-44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794.scope.
Oct 11 05:35:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.137249488 +0000 UTC m=+0.028683611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.249115014 +0000 UTC m=+0.140549097 container init 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.260860105 +0000 UTC m=+0.152294138 container start 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.264050695 +0000 UTC m=+0.155484778 container attach 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:35:03 np0005481065 sweet_pasteur[415998]: 167 167
Oct 11 05:35:03 np0005481065 systemd[1]: libpod-44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794.scope: Deactivated successfully.
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.270526748 +0000 UTC m=+0.161960811 container died 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:35:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3747d1b85103fb8f66ad5d8a3e00746311387dfea762220c3b2c678bf3a4c3db-merged.mount: Deactivated successfully.
Oct 11 05:35:03 np0005481065 podman[415982]: 2025-10-11 09:35:03.317860944 +0000 UTC m=+0.209295007 container remove 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:35:03 np0005481065 systemd[1]: libpod-conmon-44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794.scope: Deactivated successfully.
Oct 11 05:35:03 np0005481065 nova_compute[260935]: 2025-10-11 09:35:03.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:03 np0005481065 podman[416023]: 2025-10-11 09:35:03.580580478 +0000 UTC m=+0.068834673 container create 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:35:03 np0005481065 systemd[1]: Started libpod-conmon-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope.
Oct 11 05:35:03 np0005481065 podman[416023]: 2025-10-11 09:35:03.554222384 +0000 UTC m=+0.042476629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:35:03 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:35:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:03 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:03 np0005481065 podman[416023]: 2025-10-11 09:35:03.707117519 +0000 UTC m=+0.195371704 container init 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:35:03 np0005481065 podman[416023]: 2025-10-11 09:35:03.719742406 +0000 UTC m=+0.207996561 container start 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 05:35:03 np0005481065 podman[416023]: 2025-10-11 09:35:03.723924474 +0000 UTC m=+0.212178699 container attach 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:35:03 np0005481065 nova_compute[260935]: 2025-10-11 09:35:03.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]: {
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "osd_id": 2,
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "type": "bluestore"
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:    },
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "osd_id": 0,
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "type": "bluestore"
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:    },
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "osd_id": 1,
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:        "type": "bluestore"
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]:    }
Oct 11 05:35:04 np0005481065 hardcore_mendel[416039]: }
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.735 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.737 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.737 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:04 np0005481065 systemd[1]: libpod-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope: Deactivated successfully.
Oct 11 05:35:04 np0005481065 systemd[1]: libpod-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope: Consumed 1.043s CPU time.
Oct 11 05:35:04 np0005481065 conmon[416039]: conmon 3529affaadc4c16e0d9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope/container/memory.events
Oct 11 05:35:04 np0005481065 podman[416023]: 2025-10-11 09:35:04.774966135 +0000 UTC m=+1.263220320 container died 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:04 np0005481065 NetworkManager[44960]: <info>  [1760175304.7769] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Oct 11 05:35:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:04Z|01562|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 05:35:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:04Z|01563|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:35:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:04Z|01564|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:35:04 np0005481065 NetworkManager[44960]: <info>  [1760175304.7787] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Oct 11 05:35:04 np0005481065 podman[416069]: 2025-10-11 09:35:04.788424505 +0000 UTC m=+0.087761278 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.809 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142-merged.mount: Deactivated successfully.
Oct 11 05:35:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:04Z|01565|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 05:35:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:04Z|01566|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:35:04 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:04Z|01567|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.843 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.844 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:04 np0005481065 nova_compute[260935]: 2025-10-11 09:35:04.844 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:04 np0005481065 podman[416023]: 2025-10-11 09:35:04.857153794 +0000 UTC m=+1.345407959 container remove 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:35:04 np0005481065 systemd[1]: libpod-conmon-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope: Deactivated successfully.
Oct 11 05:35:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:35:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:35:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:35:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:35:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 91940d04-9b03-48b7-aca5-5d115dc358bb does not exist
Oct 11 05:35:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a4385df5-6be6-49af-a551-faef9c3549b9 does not exist
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:35:05 np0005481065 nova_compute[260935]: 2025-10-11 09:35:05.219 2 DEBUG nova.compute.manager [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:05 np0005481065 nova_compute[260935]: 2025-10-11 09:35:05.221 2 DEBUG nova.compute.manager [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing instance network info cache due to event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:35:05 np0005481065 nova_compute[260935]: 2025-10-11 09:35:05.221 2 DEBUG oslo_concurrency.lockutils [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:35:05 np0005481065 nova_compute[260935]: 2025-10-11 09:35:05.222 2 DEBUG oslo_concurrency.lockutils [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:35:05 np0005481065 nova_compute[260935]: 2025-10-11 09:35:05.222 2 DEBUG nova.network.neutron [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:35:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:35:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:35:05 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:35:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:35:08 np0005481065 nova_compute[260935]: 2025-10-11 09:35:08.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:08 np0005481065 nova_compute[260935]: 2025-10-11 09:35:08.613 2 DEBUG nova.network.neutron [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updated VIF entry in instance network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:35:08 np0005481065 nova_compute[260935]: 2025-10-11 09:35:08.613 2 DEBUG nova.network.neutron [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:35:08 np0005481065 nova_compute[260935]: 2025-10-11 09:35:08.638 2 DEBUG oslo_concurrency.lockutils [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:35:08 np0005481065 podman[416156]: 2025-10-11 09:35:08.782138938 +0000 UTC m=+0.075457943 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 11 05:35:08 np0005481065 podman[416157]: 2025-10-11 09:35:08.818139121 +0000 UTC m=+0.111968631 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:35:08 np0005481065 nova_compute[260935]: 2025-10-11 09:35:08.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:35:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:35:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:12Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:c5:0a 10.100.0.14
Oct 11 05:35:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:12Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:c5:0a 10.100.0.14
Oct 11 05:35:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 05:35:13 np0005481065 nova_compute[260935]: 2025-10-11 09:35:13.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:13 np0005481065 nova_compute[260935]: 2025-10-11 09:35:13.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Oct 11 05:35:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:15.231 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:15.232 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Oct 11 05:35:18 np0005481065 nova_compute[260935]: 2025-10-11 09:35:18.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:18 np0005481065 nova_compute[260935]: 2025-10-11 09:35:18.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:35:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:35:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:35:23 np0005481065 nova_compute[260935]: 2025-10-11 09:35:23.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:23 np0005481065 nova_compute[260935]: 2025-10-11 09:35:23.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:24.529 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:35:24 np0005481065 nova_compute[260935]: 2025-10-11 09:35:24.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:24.531 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:35:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:24.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:35:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:35:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 05:35:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:35:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50139011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:35:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:35:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50139011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:35:26 np0005481065 podman[416202]: 2025-10-11 09:35:26.769332653 +0000 UTC m=+0.067323843 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct 11 05:35:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 05:35:27 np0005481065 nova_compute[260935]: 2025-10-11 09:35:27.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:27 np0005481065 nova_compute[260935]: 2025-10-11 09:35:27.785 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:28 np0005481065 nova_compute[260935]: 2025-10-11 09:35:28.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:28 np0005481065 nova_compute[260935]: 2025-10-11 09:35:28.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 05:35:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:31 np0005481065 nova_compute[260935]: 2025-10-11 09:35:31.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 11 05:35:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 11 05:35:33 np0005481065 nova_compute[260935]: 2025-10-11 09:35:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:33 np0005481065 nova_compute[260935]: 2025-10-11 09:35:33.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:35:35 np0005481065 podman[416222]: 2025-10-11 09:35:35.796385883 +0000 UTC m=+0.093421734 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:35:36 np0005481065 nova_compute[260935]: 2025-10-11 09:35:36.889 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:36 np0005481065 nova_compute[260935]: 2025-10-11 09:35:36.890 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:35:37 np0005481065 nova_compute[260935]: 2025-10-11 09:35:37.244 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:35:37 np0005481065 nova_compute[260935]: 2025-10-11 09:35:37.618 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:37 np0005481065 nova_compute[260935]: 2025-10-11 09:35:37.618 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:37 np0005481065 nova_compute[260935]: 2025-10-11 09:35:37.628 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:35:37 np0005481065 nova_compute[260935]: 2025-10-11 09:35:37.628 2 INFO nova.compute.claims [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:35:38 np0005481065 nova_compute[260935]: 2025-10-11 09:35:38.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:38 np0005481065 nova_compute[260935]: 2025-10-11 09:35:38.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:35:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:39 np0005481065 podman[416243]: 2025-10-11 09:35:39.815459307 +0000 UTC m=+0.105807436 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 05:35:39 np0005481065 podman[416244]: 2025-10-11 09:35:39.89621396 +0000 UTC m=+0.183229474 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:35:40 np0005481065 nova_compute[260935]: 2025-10-11 09:35:40.810 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:35:41 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:35:41 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238120510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:35:41 np0005481065 nova_compute[260935]: 2025-10-11 09:35:41.273 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:41 np0005481065 nova_compute[260935]: 2025-10-11 09:35:41.281 2 DEBUG nova.compute.provider_tree [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:35:41 np0005481065 nova_compute[260935]: 2025-10-11 09:35:41.305 2 DEBUG nova.scheduler.client.report [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:35:42 np0005481065 nova_compute[260935]: 2025-10-11 09:35:42.206 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:42 np0005481065 nova_compute[260935]: 2025-10-11 09:35:42.207 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:35:42 np0005481065 nova_compute[260935]: 2025-10-11 09:35:42.487 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:35:42 np0005481065 nova_compute[260935]: 2025-10-11 09:35:42.487 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:35:42 np0005481065 nova_compute[260935]: 2025-10-11 09:35:42.685 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.138 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:35:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.581 2 DEBUG nova.policy [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14455fee241a4032a1491acddf675e59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae66871ba5094dd49475fed96fb24be2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.644 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.646 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.647 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Creating image(s)#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.687 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.725 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.765 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.770 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.882 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.884 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.885 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.885 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.922 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.928 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8d35e49e-efca-4621-a6f7-de650e5272fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:43 np0005481065 nova_compute[260935]: 2025-10-11 09:35:43.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.238 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8d35e49e-efca-4621-a6f7-de650e5272fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.337 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] resizing rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.461 2 DEBUG nova.objects.instance [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d35e49e-efca-4621-a6f7-de650e5272fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:35:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.636 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.637 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Ensure instance console log exists: /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.638 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.638 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:44 np0005481065 nova_compute[260935]: 2025-10-11 09:35:44.639 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:35:45 np0005481065 nova_compute[260935]: 2025-10-11 09:35:45.454 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Successfully created port: 28067a9e-43c9-4f82-aeb3-002926889b4b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.555 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Successfully updated port: 28067a9e-43c9-4f82-aeb3-002926889b4b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.677 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.677 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.678 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.789 2 DEBUG nova.compute.manager [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.790 2 DEBUG nova.compute.manager [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing instance network info cache due to event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.790 2 DEBUG oslo_concurrency.lockutils [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.959 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.976 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.976 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:35:46 np0005481065 nova_compute[260935]: 2025-10-11 09:35:46.976 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:35:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.560 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.754 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.754 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance network_info: |[{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.755 2 DEBUG oslo_concurrency.lockutils [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.755 2 DEBUG nova.network.neutron [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.758 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start _get_guest_xml network_info=[{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.762 2 WARNING nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.774 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.775 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.778 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.779 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.779 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.780 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.780 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.780 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.782 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.782 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.782 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.785 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:48 np0005481065 nova_compute[260935]: 2025-10-11 09:35:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.034 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.103 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.103 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.104 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.104 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.105 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.105 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.106 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:35:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:35:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:35:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021734379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.217 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.247 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.262 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:35:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239988064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.705 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.707 2 DEBUG nova.virt.libvirt.vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-88042792-acce',id=140,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTq8OGvKaz+LMA/PcV/K58nrpAVi7kg5IPqRwY1XCZgpgNS8omJcJt4bFk7y+YWXHh9knbjh0T2FJN0u/vwytRbPYrGFElF+LT6MJEgQPvTD/i1PmkAieU+uwu1S/cEGA==',key_name='tempest-TestSecurityGroupsBasicOps-1768444573',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae66871ba5094dd49475fed96fb24be2',ramdisk_id='',reservation_id='r-zleps1oi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-88042792',owner_user_name='tempest-TestSecurityGroupsBasicOps-88042792-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:35:43Z,user_data=None,user_id='14455fee241a4032a1491acddf675e59',uuid=8d35e49e-efca-4621-a6f7-de650e5272fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.708 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converting VIF {"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.709 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.711 2 DEBUG nova.objects.instance [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d35e49e-efca-4621-a6f7-de650e5272fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.800 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <uuid>8d35e49e-efca-4621-a6f7-de650e5272fd</uuid>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <name>instance-0000008c</name>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867</nova:name>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:35:48</nova:creationTime>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:user uuid="14455fee241a4032a1491acddf675e59">tempest-TestSecurityGroupsBasicOps-88042792-project-member</nova:user>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:project uuid="ae66871ba5094dd49475fed96fb24be2">tempest-TestSecurityGroupsBasicOps-88042792</nova:project>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <nova:port uuid="28067a9e-43c9-4f82-aeb3-002926889b4b">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <entry name="serial">8d35e49e-efca-4621-a6f7-de650e5272fd</entry>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <entry name="uuid">8d35e49e-efca-4621-a6f7-de650e5272fd</entry>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8d35e49e-efca-4621-a6f7-de650e5272fd_disk">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:83:96:3a"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <target dev="tap28067a9e-43"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/console.log" append="off"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:35:49 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:35:49 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:35:49 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:35:49 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.801 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Preparing to wait for external event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.802 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.802 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.802 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.803 2 DEBUG nova.virt.libvirt.vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-88042792-acce',id=140,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTq8OGvKaz+LMA/PcV/K58nrpAVi7kg5IPqRwY1XCZgpgNS8omJcJt4bFk7y+YWXHh9knbjh0T2FJN0u/vwytRbPYrGFElF+LT6MJEgQPvTD/i1PmkAieU+uwu1S/cEGA==',key_name='tempest-TestSecurityGroupsBasicOps-1768444573',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae66871ba5094dd49475fed96fb24be2',ramdisk_id='',reservation_id='r-zleps1oi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-88042792',owner_user_name='tempest-TestSecurityGroupsBasicOps-88042792-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:35:43Z,user_data=None,user_id='14455fee241a4032a1491acddf675e59',uuid=8d35e49e-efca-4621-a6f7-de650e5272fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.803 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converting VIF {"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.804 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.804 2 DEBUG os_vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28067a9e-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28067a9e-43, col_values=(('external_ids', {'iface-id': '28067a9e-43c9-4f82-aeb3-002926889b4b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:96:3a', 'vm-uuid': '8d35e49e-efca-4621-a6f7-de650e5272fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:49 np0005481065 NetworkManager[44960]: <info>  [1760175349.8626] manager: (tap28067a9e-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.868 2 INFO os_vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43')#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.907 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.908 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.909 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.910 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.948 2 DEBUG nova.network.neutron [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updated VIF entry in instance network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.949 2 DEBUG nova.network.neutron [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.967 2 DEBUG oslo_concurrency.lockutils [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.985 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.986 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.986 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] No VIF found with MAC fa:16:3e:83:96:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:35:49 np0005481065 nova_compute[260935]: 2025-10-11 09:35:49.986 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Using config drive#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.008 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:35:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388559360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.343 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.451 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.452 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.458 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.459 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.459 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.465 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.465 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.470 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.470 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.477 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.477 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.660 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Creating config drive at /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.670 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmqz6i04_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.782 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.784 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2621MB free_disk=59.764366149902344GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.784 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.785 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.818 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmqz6i04_" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.843 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:35:50 np0005481065 nova_compute[260935]: 2025-10-11 09:35:50.848 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.003 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.004 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deleting local config drive /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config because it was imported into RBD.#033[00m
Oct 11 05:35:51 np0005481065 NetworkManager[44960]: <info>  [1760175351.0474] manager: (tap28067a9e-43): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Oct 11 05:35:51 np0005481065 kernel: tap28067a9e-43: entered promiscuous mode
Oct 11 05:35:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:51Z|01568|binding|INFO|Claiming lport 28067a9e-43c9-4f82-aeb3-002926889b4b for this chassis.
Oct 11 05:35:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:51Z|01569|binding|INFO|28067a9e-43c9-4f82-aeb3-002926889b4b: Claiming fa:16:3e:83:96:3a 10.100.0.8
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:51Z|01570|binding|INFO|Setting lport 28067a9e-43c9-4f82-aeb3-002926889b4b ovn-installed in OVS
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:51 np0005481065 systemd-udevd[416635]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:35:51 np0005481065 systemd-machined[215705]: New machine qemu-164-instance-0000008c.
Oct 11 05:35:51 np0005481065 systemd[1]: Started Virtual Machine qemu-164-instance-0000008c.
Oct 11 05:35:51 np0005481065 NetworkManager[44960]: <info>  [1760175351.1010] device (tap28067a9e-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:35:51 np0005481065 NetworkManager[44960]: <info>  [1760175351.1016] device (tap28067a9e-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:35:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:35:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:51Z|01571|binding|INFO|Setting lport 28067a9e-43c9-4f82-aeb3-002926889b4b up in Southbound
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.238 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:96:3a 10.100.0.8'], port_security=['fa:16:3e:83:96:3a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8d35e49e-efca-4621-a6f7-de650e5272fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617f2647-1180-40d5-ae2e-3b28298acf26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae66871ba5094dd49475fed96fb24be2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31517e64-56cf-4001-97b4-ec41047395ae 9f4770d8-f722-49dc-9929-f78ea8b6a7c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=321d388f-0e1e-4939-a64b-86d4cae0051f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=28067a9e-43c9-4f82-aeb3-002926889b4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.240 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 28067a9e-43c9-4f82-aeb3-002926889b4b in datapath 617f2647-1180-40d5-ae2e-3b28298acf26 bound to our chassis#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 617f2647-1180-40d5-ae2e-3b28298acf26#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a74f1e8d-6560-45e2-a5c2-544132147795]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.264 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap617f2647-11 in ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.267 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap617f2647-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.267 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25cb3ef9-e1d3-480b-98f2-9306e2a2855a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcaa0e7-f01a-4b4e-bc97-6b0c4dd30cd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.295 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[734fe6ee-6829-4020-a005-80e9b3dc6e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.326 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1631f050-d8bc-47d2-b02a-2b03cee3373e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.355 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0a29b415-e586-46c3-8f3b-8d78d9520639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 NetworkManager[44960]: <info>  [1760175351.3677] manager: (tap617f2647-10): new Veth device (/org/freedesktop/NetworkManager/Devices/604)
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[541e311e-67c2-4f88-9a7d-0b2621ce1bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 systemd-udevd[416637]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.400 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.400 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a77fa566-1ab4-484c-b6ef-53471c9f91f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 8d35e49e-efca-4621-a6f7-de650e5272fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.416 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[58f30fc8-0b5a-47f3-8b3e-7fbdd5d97d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.419 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[66cf776c-e692-4cb7-bbf7-f6eeac5d797e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 NetworkManager[44960]: <info>  [1760175351.4585] device (tap617f2647-10): carrier: link connected
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.466 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b978a2-3d29-4036-9102-d3ec76bb7d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1adf72-2aa9-4aaf-9cd6-a283cddd523f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617f2647-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:a9:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719616, 'reachable_time': 18424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416677, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.512 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[127cc807-e160-40ba-abc1-5a7a57522a85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:a910'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719616, 'tstamp': 719616}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416689, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed3c329-6e03-4f61-a515-66ac52bfa819]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617f2647-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:a9:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719616, 'reachable_time': 18424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416697, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.588 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[739e02cc-fc62-4397-9cc8-7a3661453167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.600 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.677 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[98373f13-3a4e-47f5-84ba-e17949db52a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.679 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617f2647-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.679 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.680 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap617f2647-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:51 np0005481065 NetworkManager[44960]: <info>  [1760175351.6832] manager: (tap617f2647-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Oct 11 05:35:51 np0005481065 kernel: tap617f2647-10: entered promiscuous mode
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.686 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap617f2647-10, col_values=(('external_ids', {'iface-id': 'a74f93a1-5d2a-45dc-853f-254c71fe1565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:35:51 np0005481065 ovn_controller[152945]: 2025-10-11T09:35:51Z|01572|binding|INFO|Releasing lport a74f93a1-5d2a-45dc-853f-254c71fe1565 from this chassis (sb_readonly=0)
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.706 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/617f2647-1180-40d5-ae2e-3b28298acf26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/617f2647-1180-40d5-ae2e-3b28298acf26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02d10c94-81fb-4d86-9041-b54e6766ef12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.709 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-617f2647-1180-40d5-ae2e-3b28298acf26
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/617f2647-1180-40d5-ae2e-3b28298acf26.pid.haproxy
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 617f2647-1180-40d5-ae2e-3b28298acf26
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:35:51 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.713 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'env', 'PROCESS_TAG=haproxy-617f2647-1180-40d5-ae2e-3b28298acf26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/617f2647-1180-40d5-ae2e-3b28298acf26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.751 2 DEBUG nova.compute.manager [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.752 2 DEBUG oslo_concurrency.lockutils [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.752 2 DEBUG oslo_concurrency.lockutils [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.753 2 DEBUG oslo_concurrency.lockutils [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:51 np0005481065 nova_compute[260935]: 2025-10-11 09:35:51.753 2 DEBUG nova.compute.manager [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Processing event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:35:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:35:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108910696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.055 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.060 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.076 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.125 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.125 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:52 np0005481065 podman[416767]: 2025-10-11 09:35:52.129505292 +0000 UTC m=+0.054982843 container create d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.146 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.147 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175352.1472325, 8d35e49e-efca-4621-a6f7-de650e5272fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.147 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Started (Lifecycle Event)#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.151 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.155 2 INFO nova.virt.libvirt.driver [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance spawned successfully.#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.156 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:35:52 np0005481065 podman[416767]: 2025-10-11 09:35:52.099285413 +0000 UTC m=+0.024763045 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:35:52 np0005481065 systemd[1]: Started libpod-conmon-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210.scope.
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.193 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.198 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:35:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:35:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8ebf8528fbbe46433a517ccbe8faadfb46f7e1ba5f57651637bad728b7361/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:35:52 np0005481065 podman[416767]: 2025-10-11 09:35:52.247391199 +0000 UTC m=+0.172868810 container init d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:35:52 np0005481065 podman[416767]: 2025-10-11 09:35:52.25797601 +0000 UTC m=+0.183453581 container start d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.294 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.295 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.296 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.296 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.297 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.297 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:35:52 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : New worker (416788) forked
Oct 11 05:35:52 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : Loading success.
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.310 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.311 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175352.1494644, 8d35e49e-efca-4621-a6f7-de650e5272fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.311 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.343 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.347 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175352.1510408, 8d35e49e-efca-4621-a6f7-de650e5272fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.348 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.378 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.381 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.389 2 INFO nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 8.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.389 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.426 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.465 2 INFO nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 14.87 seconds to build instance.#033[00m
Oct 11 05:35:52 np0005481065 nova_compute[260935]: 2025-10-11 09:35:52.484 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:35:53 np0005481065 nova_compute[260935]: 2025-10-11 09:35:53.851 2 DEBUG nova.compute.manager [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:53 np0005481065 nova_compute[260935]: 2025-10-11 09:35:53.852 2 DEBUG oslo_concurrency.lockutils [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:35:53 np0005481065 nova_compute[260935]: 2025-10-11 09:35:53.852 2 DEBUG oslo_concurrency.lockutils [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:35:53 np0005481065 nova_compute[260935]: 2025-10-11 09:35:53.853 2 DEBUG oslo_concurrency.lockutils [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:35:53 np0005481065 nova_compute[260935]: 2025-10-11 09:35:53.853 2 DEBUG nova.compute.manager [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] No waiting events found dispatching network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:35:53 np0005481065 nova_compute[260935]: 2025-10-11 09:35:53.853 2 WARNING nova.compute.manager [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received unexpected event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b for instance with vm_state active and task_state None.#033[00m
Oct 11 05:35:54 np0005481065 nova_compute[260935]: 2025-10-11 09:35:54.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:35:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:35:54 np0005481065 nova_compute[260935]: 2025-10-11 09:35:54.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:35:54
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'volumes']
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.122 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.145 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:35:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.931 2 DEBUG nova.compute.manager [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.933 2 DEBUG nova.compute.manager [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing instance network info cache due to event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.934 2 DEBUG oslo_concurrency.lockutils [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.935 2 DEBUG oslo_concurrency.lockutils [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:35:55 np0005481065 nova_compute[260935]: 2025-10-11 09:35:55.935 2 DEBUG nova.network.neutron [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:35:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:35:57 np0005481065 podman[416799]: 2025-10-11 09:35:57.806386044 +0000 UTC m=+0.089891654 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:35:58 np0005481065 nova_compute[260935]: 2025-10-11 09:35:58.007 2 DEBUG nova.network.neutron [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updated VIF entry in instance network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:35:58 np0005481065 nova_compute[260935]: 2025-10-11 09:35:58.008 2 DEBUG nova.network.neutron [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:35:58 np0005481065 nova_compute[260935]: 2025-10-11 09:35:58.038 2 DEBUG oslo_concurrency.lockutils [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:35:59 np0005481065 nova_compute[260935]: 2025-10-11 09:35:59.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:35:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:35:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:35:59 np0005481065 nova_compute[260935]: 2025-10-11 09:35:59.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:36:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 05:36:04 np0005481065 nova_compute[260935]: 2025-10-11 09:36:04.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:04 np0005481065 nova_compute[260935]: 2025-10-11 09:36:04.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 05:36:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:05Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:96:3a 10.100.0.8
Oct 11 05:36:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:05Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:96:3a 10.100.0.8
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003735063814547112 of space, bias 1.0, pg target 1.1205191443641336 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:36:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:36:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9f388bcc-84b2-4365-8f76-9327c5a09d07 does not exist
Oct 11 05:36:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b13cf9af-73e5-420a-9426-4547c6f096b5 does not exist
Oct 11 05:36:06 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8b8f358d-986b-4e67-b45b-8d6cb294dc68 does not exist
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:36:06 np0005481065 podman[416974]: 2025-10-11 09:36:06.279989508 +0000 UTC m=+0.106660831 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid)
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:36:06 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.825849049 +0000 UTC m=+0.071448310 container create 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:36:06 np0005481065 systemd[1]: Started libpod-conmon-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope.
Oct 11 05:36:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.799259234 +0000 UTC m=+0.044858525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.90722984 +0000 UTC m=+0.152829111 container init 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.915609618 +0000 UTC m=+0.161208919 container start 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.918968213 +0000 UTC m=+0.164567484 container attach 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 05:36:06 np0005481065 systemd[1]: libpod-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope: Deactivated successfully.
Oct 11 05:36:06 np0005481065 boring_goldberg[417124]: 167 167
Oct 11 05:36:06 np0005481065 conmon[417124]: conmon 15279f96ee3bc1fda5eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope/container/memory.events
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.924870741 +0000 UTC m=+0.170469992 container died 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:36:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f6b5e9efcbdfe9b4b51782cc8b44dd61ec407d90b4a464ff4dcfb8d175982185-merged.mount: Deactivated successfully.
Oct 11 05:36:06 np0005481065 podman[417107]: 2025-10-11 09:36:06.973503612 +0000 UTC m=+0.219102863 container remove 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:36:06 np0005481065 systemd[1]: libpod-conmon-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope: Deactivated successfully.
Oct 11 05:36:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 05:36:07 np0005481065 podman[417147]: 2025-10-11 09:36:07.256101047 +0000 UTC m=+0.066784247 container create c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 05:36:07 np0005481065 systemd[1]: Started libpod-conmon-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope.
Oct 11 05:36:07 np0005481065 podman[417147]: 2025-10-11 09:36:07.22624396 +0000 UTC m=+0.036927210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:36:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:07 np0005481065 podman[417147]: 2025-10-11 09:36:07.363519918 +0000 UTC m=+0.174203098 container init c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:36:07 np0005481065 podman[417147]: 2025-10-11 09:36:07.383154995 +0000 UTC m=+0.193838195 container start c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 05:36:07 np0005481065 podman[417147]: 2025-10-11 09:36:07.387784727 +0000 UTC m=+0.198467897 container attach c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:36:08 np0005481065 sharp_lederberg[417164]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:36:08 np0005481065 sharp_lederberg[417164]: --> relative data size: 1.0
Oct 11 05:36:08 np0005481065 sharp_lederberg[417164]: --> All data devices are unavailable
Oct 11 05:36:08 np0005481065 systemd[1]: libpod-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope: Deactivated successfully.
Oct 11 05:36:08 np0005481065 podman[417147]: 2025-10-11 09:36:08.468832906 +0000 UTC m=+1.279516076 container died c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:36:08 np0005481065 systemd[1]: libpod-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope: Consumed 1.024s CPU time.
Oct 11 05:36:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6-merged.mount: Deactivated successfully.
Oct 11 05:36:08 np0005481065 podman[417147]: 2025-10-11 09:36:08.535971052 +0000 UTC m=+1.346654222 container remove c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:36:08 np0005481065 systemd[1]: libpod-conmon-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope: Deactivated successfully.
Oct 11 05:36:09 np0005481065 nova_compute[260935]: 2025-10-11 09:36:09.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.376622015 +0000 UTC m=+0.069315559 container create 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:36:09 np0005481065 systemd[1]: Started libpod-conmon-9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70.scope.
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.347109657 +0000 UTC m=+0.039803281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:36:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.491192839 +0000 UTC m=+0.183886413 container init 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.499763222 +0000 UTC m=+0.192456756 container start 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.503660783 +0000 UTC m=+0.196354337 container attach 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:36:09 np0005481065 elated_kepler[417361]: 167 167
Oct 11 05:36:09 np0005481065 systemd[1]: libpod-9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70.scope: Deactivated successfully.
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.50638083 +0000 UTC m=+0.199074364 container died 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:36:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-89c1c6d40041cc5a6c51c8f28969c43121fa6083ae11a29036438a036959b72f-merged.mount: Deactivated successfully.
Oct 11 05:36:09 np0005481065 podman[417345]: 2025-10-11 09:36:09.539754888 +0000 UTC m=+0.232448422 container remove 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:36:09 np0005481065 systemd[1]: libpod-conmon-9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70.scope: Deactivated successfully.
Oct 11 05:36:09 np0005481065 podman[417384]: 2025-10-11 09:36:09.810017823 +0000 UTC m=+0.082771552 container create 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:36:09 np0005481065 systemd[1]: Started libpod-conmon-13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa.scope.
Oct 11 05:36:09 np0005481065 nova_compute[260935]: 2025-10-11 09:36:09.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:09 np0005481065 podman[417384]: 2025-10-11 09:36:09.780701171 +0000 UTC m=+0.053454920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:36:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:09 np0005481065 podman[417384]: 2025-10-11 09:36:09.925076511 +0000 UTC m=+0.197830270 container init 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:36:09 np0005481065 podman[417384]: 2025-10-11 09:36:09.938580584 +0000 UTC m=+0.211334293 container start 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:36:09 np0005481065 podman[417384]: 2025-10-11 09:36:09.942546987 +0000 UTC m=+0.215300696 container attach 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:36:09 np0005481065 podman[417401]: 2025-10-11 09:36:09.972349673 +0000 UTC m=+0.086085406 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 05:36:10 np0005481065 podman[417425]: 2025-10-11 09:36:10.133291063 +0000 UTC m=+0.133249635 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:36:10 np0005481065 focused_knuth[417400]: {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:    "0": [
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:        {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "devices": [
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "/dev/loop3"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            ],
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_name": "ceph_lv0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_size": "21470642176",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "name": "ceph_lv0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "tags": {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cluster_name": "ceph",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.crush_device_class": "",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.encrypted": "0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osd_id": "0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.type": "block",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.vdo": "0"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            },
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "type": "block",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "vg_name": "ceph_vg0"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:        }
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:    ],
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:    "1": [
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:        {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "devices": [
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "/dev/loop4"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            ],
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_name": "ceph_lv1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_size": "21470642176",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "name": "ceph_lv1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "tags": {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cluster_name": "ceph",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.crush_device_class": "",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.encrypted": "0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osd_id": "1",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.type": "block",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.vdo": "0"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            },
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "type": "block",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "vg_name": "ceph_vg1"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:        }
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:    ],
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:    "2": [
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:        {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "devices": [
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "/dev/loop5"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            ],
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_name": "ceph_lv2",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_size": "21470642176",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "name": "ceph_lv2",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "tags": {
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.cluster_name": "ceph",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.crush_device_class": "",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.encrypted": "0",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osd_id": "2",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.type": "block",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:                "ceph.vdo": "0"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            },
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "type": "block",
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:            "vg_name": "ceph_vg2"
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:        }
Oct 11 05:36:10 np0005481065 focused_knuth[417400]:    ]
Oct 11 05:36:10 np0005481065 focused_knuth[417400]: }
Oct 11 05:36:10 np0005481065 systemd[1]: libpod-13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa.scope: Deactivated successfully.
Oct 11 05:36:10 np0005481065 podman[417384]: 2025-10-11 09:36:10.760511805 +0000 UTC m=+1.033265494 container died 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:36:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d-merged.mount: Deactivated successfully.
Oct 11 05:36:10 np0005481065 podman[417384]: 2025-10-11 09:36:10.816263009 +0000 UTC m=+1.089016688 container remove 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:36:10 np0005481065 systemd[1]: libpod-conmon-13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa.scope: Deactivated successfully.
Oct 11 05:36:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.502865316 +0000 UTC m=+0.044226687 container create 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:36:11 np0005481065 systemd[1]: Started libpod-conmon-60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c.scope.
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.482969611 +0000 UTC m=+0.024331012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:36:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.60618869 +0000 UTC m=+0.147550091 container init 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.613843088 +0000 UTC m=+0.155204469 container start 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.617596204 +0000 UTC m=+0.158957605 container attach 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:36:11 np0005481065 frosty_chebyshev[417621]: 167 167
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.620382763 +0000 UTC m=+0.161744154 container died 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:36:11 np0005481065 systemd[1]: libpod-60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c.scope: Deactivated successfully.
Oct 11 05:36:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-474a4cafc2c047478e227aa156a5f01b01aff76536401ec0c4ac0ea5801050bb-merged.mount: Deactivated successfully.
Oct 11 05:36:11 np0005481065 podman[417605]: 2025-10-11 09:36:11.673701857 +0000 UTC m=+0.215063278 container remove 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:36:11 np0005481065 systemd[1]: libpod-conmon-60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c.scope: Deactivated successfully.
Oct 11 05:36:11 np0005481065 podman[417644]: 2025-10-11 09:36:11.958360311 +0000 UTC m=+0.074284330 container create f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:36:12 np0005481065 systemd[1]: Started libpod-conmon-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope.
Oct 11 05:36:12 np0005481065 podman[417644]: 2025-10-11 09:36:11.926665631 +0000 UTC m=+0.042589700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:36:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:12 np0005481065 podman[417644]: 2025-10-11 09:36:12.073736398 +0000 UTC m=+0.189660467 container init f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:36:12 np0005481065 podman[417644]: 2025-10-11 09:36:12.08615015 +0000 UTC m=+0.202074179 container start f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:36:12 np0005481065 podman[417644]: 2025-10-11 09:36:12.090645388 +0000 UTC m=+0.206569497 container attach f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]: {
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "osd_id": 2,
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "type": "bluestore"
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:    },
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "osd_id": 0,
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "type": "bluestore"
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:    },
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "osd_id": 1,
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:        "type": "bluestore"
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]:    }
Oct 11 05:36:13 np0005481065 vigilant_poitras[417660]: }
Oct 11 05:36:13 np0005481065 systemd[1]: libpod-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope: Deactivated successfully.
Oct 11 05:36:13 np0005481065 systemd[1]: libpod-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope: Consumed 1.078s CPU time.
Oct 11 05:36:13 np0005481065 podman[417644]: 2025-10-11 09:36:13.167327874 +0000 UTC m=+1.283251903 container died f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:36:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:36:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc-merged.mount: Deactivated successfully.
Oct 11 05:36:13 np0005481065 podman[417644]: 2025-10-11 09:36:13.255100186 +0000 UTC m=+1.371024215 container remove f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 05:36:13 np0005481065 systemd[1]: libpod-conmon-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope: Deactivated successfully.
Oct 11 05:36:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:36:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:36:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:36:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:36:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e2862908-e4fd-46f5-8690-5e7297c8ca6f does not exist
Oct 11 05:36:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 116d4332-cd19-4d44-8213-ab9c46705087 does not exist
Oct 11 05:36:14 np0005481065 nova_compute[260935]: 2025-10-11 09:36:14.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:36:14 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:36:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:14 np0005481065 nova_compute[260935]: 2025-10-11 09:36:14.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:36:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:15.232 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:15.235 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.672 2 DEBUG nova.compute.manager [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.672 2 DEBUG nova.compute.manager [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing instance network info cache due to event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.673 2 DEBUG oslo_concurrency.lockutils [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.673 2 DEBUG oslo_concurrency.lockutils [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.673 2 DEBUG nova.network.neutron [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.771 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.772 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.772 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.773 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.774 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.776 2 INFO nova.compute.manager [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Terminating instance#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.778 2 DEBUG nova.compute.manager [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:36:16 np0005481065 kernel: tap28067a9e-43 (unregistering): left promiscuous mode
Oct 11 05:36:16 np0005481065 NetworkManager[44960]: <info>  [1760175376.8451] device (tap28067a9e-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:16Z|01573|binding|INFO|Releasing lport 28067a9e-43c9-4f82-aeb3-002926889b4b from this chassis (sb_readonly=0)
Oct 11 05:36:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:16Z|01574|binding|INFO|Setting lport 28067a9e-43c9-4f82-aeb3-002926889b4b down in Southbound
Oct 11 05:36:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:16Z|01575|binding|INFO|Removing iface tap28067a9e-43 ovn-installed in OVS
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.867 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:96:3a 10.100.0.8'], port_security=['fa:16:3e:83:96:3a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8d35e49e-efca-4621-a6f7-de650e5272fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617f2647-1180-40d5-ae2e-3b28298acf26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae66871ba5094dd49475fed96fb24be2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31517e64-56cf-4001-97b4-ec41047395ae 9f4770d8-f722-49dc-9929-f78ea8b6a7c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=321d388f-0e1e-4939-a64b-86d4cae0051f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=28067a9e-43c9-4f82-aeb3-002926889b4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:36:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.870 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 28067a9e-43c9-4f82-aeb3-002926889b4b in datapath 617f2647-1180-40d5-ae2e-3b28298acf26 unbound from our chassis#033[00m
Oct 11 05:36:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.873 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 617f2647-1180-40d5-ae2e-3b28298acf26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:36:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.875 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aef743b2-c14d-481d-9244-5c263c660f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:16 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.876 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 namespace which is not needed anymore#033[00m
Oct 11 05:36:16 np0005481065 nova_compute[260935]: 2025-10-11 09:36:16.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:16 np0005481065 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 11 05:36:16 np0005481065 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d0000008c.scope: Consumed 13.539s CPU time.
Oct 11 05:36:16 np0005481065 systemd-machined[215705]: Machine qemu-164-instance-0000008c terminated.
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.025 2 INFO nova.virt.libvirt.driver [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance destroyed successfully.#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.025 2 DEBUG nova.objects.instance [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lazy-loading 'resources' on Instance uuid 8d35e49e-efca-4621-a6f7-de650e5272fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.043 2 DEBUG nova.virt.libvirt.vif [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-88042792-acce',id=140,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTq8OGvKaz+LMA/PcV/K58nrpAVi7kg5IPqRwY1XCZgpgNS8omJcJt4bFk7y+YWXHh9knbjh0T2FJN0u/vwytRbPYrGFElF+LT6MJEgQPvTD/i1PmkAieU+uwu1S/cEGA==',key_name='tempest-TestSecurityGroupsBasicOps-1768444573',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:35:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae66871ba5094dd49475fed96fb24be2',ramdisk_id='',reservation_id='r-zleps1oi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-88042792',owner_user_name='tempest-TestSecurityGroupsBasicOps-88042792-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:35:52Z,user_data=None,user_id='14455fee241a4032a1491acddf675e59',uuid=8d35e49e-efca-4621-a6f7-de650e5272fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.045 2 DEBUG nova.network.os_vif_util [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converting VIF {"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.046 2 DEBUG nova.network.os_vif_util [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.047 2 DEBUG os_vif [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28067a9e-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.057 2 INFO os_vif [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43')#033[00m
Oct 11 05:36:17 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : haproxy version is 2.8.14-c23fe91
Oct 11 05:36:17 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : path to executable is /usr/sbin/haproxy
Oct 11 05:36:17 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [WARNING]  (416786) : Exiting Master process...
Oct 11 05:36:17 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [WARNING]  (416786) : Exiting Master process...
Oct 11 05:36:17 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [ALERT]    (416786) : Current worker (416788) exited with code 143 (Terminated)
Oct 11 05:36:17 np0005481065 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [WARNING]  (416786) : All workers exited. Exiting... (0)
Oct 11 05:36:17 np0005481065 systemd[1]: libpod-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210.scope: Deactivated successfully.
Oct 11 05:36:17 np0005481065 podman[417783]: 2025-10-11 09:36:17.089242139 +0000 UTC m=+0.070179974 container died d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.095 2 DEBUG nova.compute.manager [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-unplugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.095 2 DEBUG oslo_concurrency.lockutils [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.095 2 DEBUG oslo_concurrency.lockutils [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.096 2 DEBUG oslo_concurrency.lockutils [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.096 2 DEBUG nova.compute.manager [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] No waiting events found dispatching network-vif-unplugged-28067a9e-43c9-4f82-aeb3-002926889b4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.096 2 DEBUG nova.compute.manager [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-unplugged-28067a9e-43c9-4f82-aeb3-002926889b4b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
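The acquire/pop/release triplet above is nova's per-instance external-event bookkeeping: the `"<uuid>-events"` lock guards a waiter table, and "No waiting events found" simply means the pop returned nothing, so the event is processed directly. A simplified sketch of that pattern (the real code is in `nova.compute.manager.InstanceEvents`; this only models the lock/pop shape visible in the log):

```python
import threading

# Simplified model of nova's per-instance external-event dispatch: a lock
# protects a waiter table keyed by instance uuid and event name.
class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # instance uuid -> {event name: waiter}

    def pop_instance_event(self, instance_uuid, event_name):
        # The Lock "<uuid>-events" lines in the log bracket this section.
        with self._lock:
            waiters = self._events.get(instance_uuid, {})
            return waiters.pop(event_name, None)  # None -> "No waiting events"
```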
Oct 11 05:36:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210-userdata-shm.mount: Deactivated successfully.
Oct 11 05:36:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bce8ebf8528fbbe46433a517ccbe8faadfb46f7e1ba5f57651637bad728b7361-merged.mount: Deactivated successfully.
Oct 11 05:36:17 np0005481065 podman[417783]: 2025-10-11 09:36:17.131039846 +0000 UTC m=+0.111977721 container cleanup d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:36:17 np0005481065 systemd[1]: libpod-conmon-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210.scope: Deactivated successfully.
Oct 11 05:36:17 np0005481065 podman[417841]: 2025-10-11 09:36:17.208052943 +0000 UTC m=+0.052663657 container remove d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:36:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.217 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[488577c7-5743-48b1-8f39-7a90f10dab59]: (4, ('Sat Oct 11 09:36:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 (d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210)\nd864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210\nSat Oct 11 09:36:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 (d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210)\nd864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbdc02f-e503-4b2f-9025-5dafa5666a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617f2647-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 kernel: tap617f2647-10: left promiscuous mode
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa445a50-d570-4022-bdf7-49c9dbfbb0bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec15aa4-6e08-4a6b-8431-27b9fe24b91b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed4afec-1225-4709-88dd-5f00a211e61b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.296 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e206c842-85d2-4594-8862-ccb74f83447a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719605, 'reachable_time': 22709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417857, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.299 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:36:17 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.299 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc8e7d4-8a25-45f5-9eb5-085c5a9c9c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:17 np0005481065 systemd[1]: run-netns-ovnmeta\x2d617f2647\x2d1180\x2d40d5\x2dae2e\x2d3b28298acf26.mount: Deactivated successfully.
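The mount-unit names above (`run-netns-ovnmeta\x2d617f2647…`) come from systemd's unit-name escaping, in which `/` maps to `-` and `-` to `\x2d`. A rough reimplementation that reproduces this unit name (real `systemd-escape` has extra rules, e.g. for a leading dot, which this sketch ignores):

```python
# Approximate systemd path escaping: strip slashes at the ends, turn '/'
# into '-', keep [A-Za-z0-9_.], and hex-escape everything else as \xNN.
def systemd_escape(path):
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)
```

For example, `systemd_escape("/run/netns/ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26")` yields the unit basename logged above.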
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.429 2 INFO nova.virt.libvirt.driver [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deleting instance files /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd_del#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.431 2 INFO nova.virt.libvirt.driver [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deletion of /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd_del complete#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.490 2 INFO nova.compute.manager [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.491 2 DEBUG oslo.service.loopingcall [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.492 2 DEBUG nova.compute.manager [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:36:17 np0005481065 nova_compute[260935]: 2025-10-11 09:36:17.492 2 DEBUG nova.network.neutron [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.039 2 DEBUG nova.network.neutron [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updated VIF entry in instance network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.040 2 DEBUG nova.network.neutron [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
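The `instance_info_cache` payload above is plain JSON: a list of VIFs, each with a network carrying subnets and typed IPs. Extracting the fixed addresses from such a structure takes only a few lines (the helper name is ours, not nova's):

```python
# Walk a Nova network_info list (as cached above) and collect fixed IPs.
def fixed_ips(network_info):
    ips = []
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    ips.append(ip["address"])
    return ips
```

Applied to the cache entry for port 28067a9e-43c9-4f82-aeb3-002926889b4b, this returns `["10.100.0.8"]`.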
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.050 2 DEBUG nova.network.neutron [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.057 2 DEBUG oslo_concurrency.lockutils [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.066 2 INFO nova.compute.manager [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 0.57 seconds to deallocate network for instance.#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.106 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.107 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.255 2 DEBUG oslo_concurrency.processutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:36:18 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623494835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.715 2 DEBUG oslo_concurrency.processutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.723 2 DEBUG nova.compute.provider_tree [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.755 2 DEBUG nova.scheduler.client.report [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
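The inventory record above is what placement uses to size the host: schedulable capacity per resource class is `(total - reserved) * allocation_ratio`, so this node exposes 32 VCPU, 7168 MB of memory, and about 52.2 GB of disk. A one-line helper (illustrative):

```python
# Capacity the placement service derives from one resource-class inventory.
def schedulable(inv):
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
```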
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.790 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.822 2 DEBUG nova.compute.manager [req-51c13e98-4e08-45e8-9ced-ec5d0d6ae073 req-da2df8f6-385e-4f51-bb00-3cbfc6eb6bba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-deleted-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.825 2 INFO nova.scheduler.client.report [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Deleted allocations for instance 8d35e49e-efca-4621-a6f7-de650e5272fd#033[00m
Oct 11 05:36:18 np0005481065 nova_compute[260935]: 2025-10-11 09:36:18.910 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.214 2 DEBUG nova.compute.manager [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.215 2 DEBUG oslo_concurrency.lockutils [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.216 2 DEBUG oslo_concurrency.lockutils [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.216 2 DEBUG oslo_concurrency.lockutils [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.217 2 DEBUG nova.compute.manager [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] No waiting events found dispatching network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:36:19 np0005481065 nova_compute[260935]: 2025-10-11 09:36:19.217 2 WARNING nova.compute.manager [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received unexpected event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b for instance with vm_state deleted and task_state None.#033[00m
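The warning above is the tail of a benign race: a `network-vif-plugged` notification for port 28067a9e arrives after the instance has already reached `vm_state deleted`, so nova logs it as unexpected and drops it. The dispatch decision reduces to a state check of roughly this shape (a simplification, not nova's actual code):

```python
# Simplified sketch of how a late external event is classified: events for
# an already-deleted instance are logged as unexpected and then dropped.
def classify_external_event(vm_state, event_name):
    if vm_state == "deleted":
        return "warn-unexpected"
    return "dispatch"
```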
Oct 11 05:36:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 27 KiB/s wr, 30 op/s
Oct 11 05:36:22 np0005481065 nova_compute[260935]: 2025-10-11 09:36:22.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 27 KiB/s wr, 30 op/s
Oct 11 05:36:24 np0005481065 nova_compute[260935]: 2025-10-11 09:36:24.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:24.575 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:36:24 np0005481065 nova_compute[260935]: 2025-10-11 09:36:24.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:24.577 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:36:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:36:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:25Z|01576|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 05:36:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:25Z|01577|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:36:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:25Z|01578|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:36:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 05:36:25 np0005481065 nova_compute[260935]: 2025-10-11 09:36:25.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.672 2 DEBUG nova.compute.manager [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.672 2 DEBUG nova.compute.manager [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing instance network info cache due to event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.673 2 DEBUG oslo_concurrency.lockutils [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.673 2 DEBUG oslo_concurrency.lockutils [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.674 2 DEBUG nova.network.neutron [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:36:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:36:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259768337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:36:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:36:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259768337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.738 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.738 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.739 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.740 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.740 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.742 2 INFO nova.compute.manager [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Terminating instance#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.744 2 DEBUG nova.compute.manager [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:36:26 np0005481065 kernel: tap75ae5f39-46 (unregistering): left promiscuous mode
Oct 11 05:36:26 np0005481065 NetworkManager[44960]: <info>  [1760175386.8175] device (tap75ae5f39-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:36:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:26Z|01579|binding|INFO|Releasing lport 75ae5f39-4616-4581-8e22-411bee7c1747 from this chassis (sb_readonly=0)
Oct 11 05:36:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:26Z|01580|binding|INFO|Setting lport 75ae5f39-4616-4581-8e22-411bee7c1747 down in Southbound
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:26Z|01581|binding|INFO|Removing iface tap75ae5f39-46 ovn-installed in OVS
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.845 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:c5:0a 10.100.0.14'], port_security=['fa:16:3e:db:c5:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a77fa566-1ab4-484c-b6ef-53471c9f91f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c64e7e9-762c-40c5-8997-daeebe19175c c4774add-49ce-4194-9683-2bce3da69dec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2e43ada-251a-46ad-bf0a-b57c9e02b647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=75ae5f39-4616-4581-8e22-411bee7c1747) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:36:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.848 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 75ae5f39-4616-4581-8e22-411bee7c1747 in datapath f0d9de45-f93f-45ef-aa2e-7cd54a90600b unbound from our chassis#033[00m
Oct 11 05:36:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.851 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:36:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.853 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9785fcc9-aa35-44b4-accf-9a5accab25bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.854 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b namespace which is not needed anymore#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:26 np0005481065 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 11 05:36:26 np0005481065 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d0000008b.scope: Consumed 16.038s CPU time.
Oct 11 05:36:26 np0005481065 systemd-machined[215705]: Machine qemu-163-instance-0000008b terminated.
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.987 2 INFO nova.virt.libvirt.driver [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance destroyed successfully.#033[00m
Oct 11 05:36:26 np0005481065 nova_compute[260935]: 2025-10-11 09:36:26.988 2 DEBUG nova.objects.instance [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.004 2 DEBUG nova.virt.libvirt.vif [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:34:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=139,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdfdb9R+zsDw6jenCtVzR++AIgDN5ehA4oPbzl8x/m8tfimOhF7LhQ9X4A3F+nSw3/eKnrfnA/yKanSmbwPXrSaCs1ORHouK4S19eKbXKNTi17aP7Q4amYlBjFlUm9aIA==',key_name='tempest-TestSecurityGroupsBasicOps-1403644569',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:35:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8gi3qbv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:35:01Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=a77fa566-1ab4-484c-b6ef-53471c9f91f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.004 2 DEBUG nova.network.os_vif_util [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.005 2 DEBUG nova.network.os_vif_util [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.006 2 DEBUG os_vif [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75ae5f39-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.012 2 INFO os_vif [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46')#033[00m
Oct 11 05:36:27 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : haproxy version is 2.8.14-c23fe91
Oct 11 05:36:27 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : path to executable is /usr/sbin/haproxy
Oct 11 05:36:27 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [WARNING]  (415750) : Exiting Master process...
Oct 11 05:36:27 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [ALERT]    (415750) : Current worker (415752) exited with code 143 (Terminated)
Oct 11 05:36:27 np0005481065 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [WARNING]  (415750) : All workers exited. Exiting... (0)
Oct 11 05:36:27 np0005481065 systemd[1]: libpod-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37.scope: Deactivated successfully.
Oct 11 05:36:27 np0005481065 podman[417908]: 2025-10-11 09:36:27.03304115 +0000 UTC m=+0.051719739 container died c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 05:36:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37-userdata-shm.mount: Deactivated successfully.
Oct 11 05:36:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f9779a9262745b18ce2d1c4f4694138a6bb395f088c211d26fdb2e153f47b00b-merged.mount: Deactivated successfully.
Oct 11 05:36:27 np0005481065 podman[417908]: 2025-10-11 09:36:27.07739337 +0000 UTC m=+0.096071979 container cleanup c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:36:27 np0005481065 systemd[1]: libpod-conmon-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37.scope: Deactivated successfully.
Oct 11 05:36:27 np0005481065 podman[417962]: 2025-10-11 09:36:27.160003196 +0000 UTC m=+0.057036111 container remove c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.168 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1288e3b3-36aa-48a6-bb80-86d432ec09fc]: (4, ('Sat Oct 11 09:36:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b (c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37)\nc8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37\nSat Oct 11 09:36:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b (c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37)\nc8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.171 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[800d42b1-6dc5-43c1-82bb-e881046872d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.174 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0d9de45-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:27 np0005481065 kernel: tapf0d9de45-f0: left promiscuous mode
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.185 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[992eb05c-cb90-4112-88ef-f47af4115b42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecfd9dc-6383-4041-abc6-e8bbb14990ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.216 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5ffca3-ad04-4e17-9c94-e5a94acd969c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.234 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[93a82c1c-0978-40b3-a011-143d39503cde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714438, 'reachable_time': 41792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417977, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 systemd[1]: run-netns-ovnmeta\x2df0d9de45\x2df93f\x2d45ef\x2daa2e\x2d7cd54a90600b.mount: Deactivated successfully.
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.236 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:36:27 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.236 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ece6194a-37c5-472d-8d8d-55feecdf8066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.436 2 INFO nova.virt.libvirt.driver [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deleting instance files /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8_del#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.438 2 INFO nova.virt.libvirt.driver [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deletion of /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8_del complete#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.496 2 INFO nova.compute.manager [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.497 2 DEBUG oslo.service.loopingcall [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.497 2 DEBUG nova.compute.manager [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.498 2 DEBUG nova.network.neutron [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:36:27 np0005481065 nova_compute[260935]: 2025-10-11 09:36:27.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.113 2 DEBUG nova.network.neutron [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.129 2 INFO nova.compute.manager [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 0.63 seconds to deallocate network for instance.#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.174 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.175 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.188 2 DEBUG nova.network.neutron [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updated VIF entry in instance network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.189 2 DEBUG nova.network.neutron [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.205 2 DEBUG oslo_concurrency.lockutils [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.265 2 DEBUG oslo_concurrency.processutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:36:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525950482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:36:28 np0005481065 podman[417999]: 2025-10-11 09:36:28.766669582 +0000 UTC m=+0.076949746 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.771 2 DEBUG oslo_concurrency.processutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.779 2 DEBUG nova.compute.provider_tree [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.809 2 DEBUG nova.scheduler.client.report [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.847 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-unplugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.847 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.848 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.848 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.849 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] No waiting events found dispatching network-vif-unplugged-75ae5f39-4616-4581-8e22-411bee7c1747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.849 2 WARNING nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received unexpected event network-vif-unplugged-75ae5f39-4616-4581-8e22-411bee7c1747 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.849 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] No waiting events found dispatching network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.851 2 WARNING nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received unexpected event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.851 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-deleted-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.851 2 INFO nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Neutron deleted interface 75ae5f39-4616-4581-8e22-411bee7c1747; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.852 2 DEBUG nova.network.neutron [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.891 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.945 2 INFO nova.scheduler.client.report [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance a77fa566-1ab4-484c-b6ef-53471c9f91f8#033[00m
Oct 11 05:36:28 np0005481065 nova_compute[260935]: 2025-10-11 09:36:28.951 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Detach interface failed, port_id=75ae5f39-4616-4581-8e22-411bee7c1747, reason: Instance a77fa566-1ab4-484c-b6ef-53471c9f91f8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 11 05:36:29 np0005481065 nova_compute[260935]: 2025-10-11 09:36:29.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:29 np0005481065 nova_compute[260935]: 2025-10-11 09:36:29.118 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 57 op/s
Oct 11 05:36:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:36:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:31.580 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:32 np0005481065 nova_compute[260935]: 2025-10-11 09:36:32.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:32 np0005481065 nova_compute[260935]: 2025-10-11 09:36:32.022 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175377.021556, 8d35e49e-efca-4621-a6f7-de650e5272fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:36:32 np0005481065 nova_compute[260935]: 2025-10-11 09:36:32.022 2 INFO nova.compute.manager [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:36:32 np0005481065 nova_compute[260935]: 2025-10-11 09:36:32.046 2 DEBUG nova.compute.manager [None req-2698aad6-082a-4cd8-8540-a52cf04d5d67 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:36:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:36:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:33Z|01582|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:36:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:33Z|01583|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:36:33 np0005481065 nova_compute[260935]: 2025-10-11 09:36:33.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:33Z|01584|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:36:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:33Z|01585|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:36:33 np0005481065 nova_compute[260935]: 2025-10-11 09:36:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:34 np0005481065 nova_compute[260935]: 2025-10-11 09:36:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:36:36 np0005481065 podman[418023]: 2025-10-11 09:36:36.802197357 +0000 UTC m=+0.098509069 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:36:37 np0005481065 nova_compute[260935]: 2025-10-11 09:36:37.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:36:39 np0005481065 nova_compute[260935]: 2025-10-11 09:36:39.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:36:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:40 np0005481065 podman[418044]: 2025-10-11 09:36:40.803261971 +0000 UTC m=+0.094733372 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:36:40 np0005481065 podman[418045]: 2025-10-11 09:36:40.837509863 +0000 UTC m=+0.128337595 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:36:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 05:36:41 np0005481065 nova_compute[260935]: 2025-10-11 09:36:41.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175386.9852498, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:36:41 np0005481065 nova_compute[260935]: 2025-10-11 09:36:41.987 2 INFO nova.compute.manager [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Stopped (Lifecycle Event)
Oct 11 05:36:42 np0005481065 nova_compute[260935]: 2025-10-11 09:36:42.011 2 DEBUG nova.compute.manager [None req-5d03d261-a293-4717-bcc7-45d0e9e8f085 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:36:42 np0005481065 nova_compute[260935]: 2025-10-11 09:36:42.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:36:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 05:36:44 np0005481065 nova_compute[260935]: 2025-10-11 09:36:44.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:36:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:36:45 np0005481065 nova_compute[260935]: 2025-10-11 09:36:45.986 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:36:45 np0005481065 nova_compute[260935]: 2025-10-11 09:36:45.988 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.004 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.096 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.097 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.109 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.110 2 INFO nova.compute.claims [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.300 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.740 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 05:36:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:36:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001938525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.764 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.773 2 DEBUG nova.compute.provider_tree [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.792 2 DEBUG nova.scheduler.client.report [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.818 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.819 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.859 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.860 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.876 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.895 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.982 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.984 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:36:46 np0005481065 nova_compute[260935]: 2025-10-11 09:36:46.985 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Creating image(s)
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.014 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.045 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.072 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.077 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.164 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.166 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.166 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.167 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.194 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.199 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:36:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.499 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.584 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.711 2 DEBUG nova.objects.instance [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f73fb12-6b6e-4491-81a4-51fb66ffb310 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.770 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.771 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Ensure instance console log exists: /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.772 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.772 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.773 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:36:47 np0005481065 nova_compute[260935]: 2025-10-11 09:36:47.778 2 DEBUG nova.policy [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:36:48 np0005481065 nova_compute[260935]: 2025-10-11 09:36:48.610 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Successfully created port: 25d7271a-bce8-4388-991e-e7069da0eff1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:36:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 360 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 11 05:36:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.710 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Successfully updated port: 25d7271a-bce8-4388-991e-e7069da0eff1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.725 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.725 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.725 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.834 2 DEBUG nova.compute.manager [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.834 2 DEBUG nova.compute.manager [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing instance network info cache due to event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:36:49 np0005481065 nova_compute[260935]: 2025-10-11 09:36:49.835 2 DEBUG oslo_concurrency.lockutils [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.565 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 05:36:50 np0005481065 nova_compute[260935]: 2025-10-11 09:36:50.738 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:36:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:36:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472120719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:36:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 360 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.228 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.402 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.698 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.699 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2845MB free_disk=59.81090545654297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.700 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.700 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1f73fb12-6b6e-4491-81a4-51fb66ffb310 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:36:51 np0005481065 nova_compute[260935]: 2025-10-11 09:36:51.903 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.048 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.048 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance network_info: |[{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.050 2 DEBUG oslo_concurrency.lockutils [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.050 2 DEBUG nova.network.neutron [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.056 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start _get_guest_xml network_info=[{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.058 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.119 2 WARNING nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.136 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.137 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.141 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.142 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.143 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.144 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.145 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.145 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.146 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.147 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.147 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.148 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.148 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.149 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.149 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.150 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.158 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:36:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658467122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.540 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.545 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.564 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.591 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.591 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:36:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025239124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.622 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.650 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:36:52 np0005481065 nova_compute[260935]: 2025-10-11 09:36:52.655 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:36:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080274345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.098 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.100 2 DEBUG nova.virt.libvirt.vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:36:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=141,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-1r2upbl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:36:46Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=1f73fb12-6b6e-4491-81a4-51fb66ffb310,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.100 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.101 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.102 2 DEBUG nova.objects.instance [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f73fb12-6b6e-4491-81a4-51fb66ffb310 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.119 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <uuid>1f73fb12-6b6e-4491-81a4-51fb66ffb310</uuid>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <name>instance-0000008d</name>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113</nova:name>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:36:52</nova:creationTime>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <nova:port uuid="25d7271a-bce8-4388-991e-e7069da0eff1">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <entry name="serial">1f73fb12-6b6e-4491-81a4-51fb66ffb310</entry>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <entry name="uuid">1f73fb12-6b6e-4491-81a4-51fb66ffb310</entry>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:a1:d6:78"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <target dev="tap25d7271a-bc"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/console.log" append="off"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:36:53 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:36:53 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:36:53 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:36:53 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.120 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Preparing to wait for external event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.120 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.120 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.121 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.121 2 DEBUG nova.virt.libvirt.vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:36:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=141,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-1r2upbl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:36:46Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=1f73fb12-6b6e-4491-81a4-51fb66ffb310,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.121 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.122 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.122 2 DEBUG os_vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25d7271a-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25d7271a-bc, col_values=(('external_ids', {'iface-id': '25d7271a-bce8-4388-991e-e7069da0eff1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:d6:78', 'vm-uuid': '1f73fb12-6b6e-4491-81a4-51fb66ffb310'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:36:53 np0005481065 NetworkManager[44960]: <info>  [1760175413.1309] manager: (tap25d7271a-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.142 2 INFO os_vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc')#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.211 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.211 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.212 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:a1:d6:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.212 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Using config drive#033[00m
Oct 11 05:36:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.253 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.586 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.587 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.763 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Creating config drive at /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.769 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparptuho6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.918 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparptuho6" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.951 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.955 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.996 2 DEBUG nova.network.neutron [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updated VIF entry in instance network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:36:53 np0005481065 nova_compute[260935]: 2025-10-11 09:36:53.997 2 DEBUG nova.network.neutron [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.021 2 DEBUG oslo_concurrency.lockutils [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.141 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.141 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deleting local config drive /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config because it was imported into RBD.#033[00m
Oct 11 05:36:54 np0005481065 kernel: tap25d7271a-bc: entered promiscuous mode
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:54Z|01586|binding|INFO|Claiming lport 25d7271a-bce8-4388-991e-e7069da0eff1 for this chassis.
Oct 11 05:36:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:54Z|01587|binding|INFO|25d7271a-bce8-4388-991e-e7069da0eff1: Claiming fa:16:3e:a1:d6:78 10.100.0.13
Oct 11 05:36:54 np0005481065 NetworkManager[44960]: <info>  [1760175414.2262] manager: (tap25d7271a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.244 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:d6:78 10.100.0.13'], port_security=['fa:16:3e:a1:d6:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1f73fb12-6b6e-4491-81a4-51fb66ffb310', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fdfee67-b394-4b27-bc8b-a70f0c2dfabe aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25d7271a-bce8-4388-991e-e7069da0eff1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.247 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25d7271a-bce8-4388-991e-e7069da0eff1 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 bound to our chassis#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.251 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03aa96fb-1646-4924-8eea-e7c8a5518a31#033[00m
Oct 11 05:36:54 np0005481065 systemd-machined[215705]: New machine qemu-165-instance-0000008d.
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.269 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c98d54ad-1b3e-4652-9472-b977550df518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.270 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03aa96fb-11 in ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.273 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03aa96fb-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2a6246-3c19-4afd-8a79-16ea4c7e4abb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.275 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[68936c12-a02e-4421-baf6-57cc75ecf630]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 systemd[1]: Started Virtual Machine qemu-165-instance-0000008d.
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.291 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[51e84e09-fbf3-473d-8559-1555782f27c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 systemd-udevd[418461]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.335 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f73bad5-b603-443b-ba1f-3a23ae185912]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 NetworkManager[44960]: <info>  [1760175414.3459] device (tap25d7271a-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:36:54 np0005481065 NetworkManager[44960]: <info>  [1760175414.3486] device (tap25d7271a-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:54Z|01588|binding|INFO|Setting lport 25d7271a-bce8-4388-991e-e7069da0eff1 ovn-installed in OVS
Oct 11 05:36:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:54Z|01589|binding|INFO|Setting lport 25d7271a-bce8-4388-991e-e7069da0eff1 up in Southbound
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.386 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd61a3-856a-44a7-99e2-14bfae627f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.397 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f2d522-f739-435f-bb96-cbd4bb978f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 NetworkManager[44960]: <info>  [1760175414.3992] manager: (tap03aa96fb-10): new Veth device (/org/freedesktop/NetworkManager/Devices/608)
Oct 11 05:36:54 np0005481065 systemd-udevd[418465]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.445 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3245e512-b4a5-410d-ae38-b0ffda12dc6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.450 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f365ff-d8c0-4f87-b9da-3ce8dbd522c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 NetworkManager[44960]: <info>  [1760175414.4828] device (tap03aa96fb-10): carrier: link connected
Oct 11 05:36:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.497 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c98c33cc-3d36-4e3d-bb8c-c44b5361bbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.519 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b160923c-35f1-403e-8554-b2257baa070a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418493, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.538 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca984f6-2ad3-418e-b939-d9e175759f70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:6032'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725918, 'tstamp': 725918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418494, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f3b399-9450-43b3-819c-f42ba4535cb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 418495, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.564 2 DEBUG nova.compute.manager [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.565 2 DEBUG oslo_concurrency.lockutils [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.565 2 DEBUG oslo_concurrency.lockutils [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.565 2 DEBUG oslo_concurrency.lockutils [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.566 2 DEBUG nova.compute.manager [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Processing event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.601 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ea4167-4a46-42f8-9a2f-2844ddbd1b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.686 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67cd4389-2c1f-44cd-965b-7a31ab5eaa5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.688 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.689 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.690 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03aa96fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:54 np0005481065 NetworkManager[44960]: <info>  [1760175414.6935] manager: (tap03aa96fb-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct 11 05:36:54 np0005481065 kernel: tap03aa96fb-10: entered promiscuous mode
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.698 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03aa96fb-10, col_values=(('external_ids', {'iface-id': '33c7af53-54c4-4168-8c26-88465029f36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:36:54 np0005481065 ovn_controller[152945]: 2025-10-11T09:36:54Z|01590|binding|INFO|Releasing lport 33c7af53-54c4-4168-8c26-88465029f36a from this chassis (sb_readonly=0)
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.702 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03aa96fb-1646-4924-8eea-e7c8a5518a31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03aa96fb-1646-4924-8eea-e7c8a5518a31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.704 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecb418e-2966-4611-9432-3c38706a4825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.705 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/03aa96fb-1646-4924-8eea-e7c8a5518a31.pid.haproxy
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:36:54 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.707 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'env', 'PROCESS_TAG=haproxy-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03aa96fb-1646-4924-8eea-e7c8a5518a31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:36:54 np0005481065 nova_compute[260935]: 2025-10-11 09:36:54.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:36:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:36:55
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'volumes']
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.207 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175415.207217, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.208 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Started (Lifecycle Event)#033[00m
Oct 11 05:36:55 np0005481065 podman[418567]: 2025-10-11 09:36:55.211349798 +0000 UTC m=+0.081826026 container create f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.211 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.221 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.225 2 INFO nova.virt.libvirt.driver [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance spawned successfully.#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.225 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.231 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.235 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.248 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.249 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.250 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.251 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.251 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.252 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:36:55 np0005481065 systemd[1]: Started libpod-conmon-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5.scope.
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.260 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.260 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175415.2114127, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.260 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:36:55 np0005481065 podman[418567]: 2025-10-11 09:36:55.174246657 +0000 UTC m=+0.044722935 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.291 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:36:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:36:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1249ac7e806de3a44795ea0929a99531493013ad6047c5dc235ff82a29920ead/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.301 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175415.2223327, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.301 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:36:55 np0005481065 podman[418567]: 2025-10-11 09:36:55.323316108 +0000 UTC m=+0.193792336 container init f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.329 2 INFO nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 8.35 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.330 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:36:55 np0005481065 podman[418567]: 2025-10-11 09:36:55.332714792 +0000 UTC m=+0.203190990 container start f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.332 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.345 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:36:55 np0005481065 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : New worker (418588) forked
Oct 11 05:36:55 np0005481065 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : Loading success.
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.373 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.399 2 INFO nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 9.35 seconds to build instance.#033[00m
Oct 11 05:36:55 np0005481065 nova_compute[260935]: 2025-10-11 09:36:55.414 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:36:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:36:56 np0005481065 nova_compute[260935]: 2025-10-11 09:36:56.761 2 DEBUG nova.compute.manager [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:36:56 np0005481065 nova_compute[260935]: 2025-10-11 09:36:56.761 2 DEBUG oslo_concurrency.lockutils [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:36:56 np0005481065 nova_compute[260935]: 2025-10-11 09:36:56.762 2 DEBUG oslo_concurrency.lockutils [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:36:56 np0005481065 nova_compute[260935]: 2025-10-11 09:36:56.762 2 DEBUG oslo_concurrency.lockutils [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:36:56 np0005481065 nova_compute[260935]: 2025-10-11 09:36:56.762 2 DEBUG nova.compute.manager [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] No waiting events found dispatching network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:36:56 np0005481065 nova_compute[260935]: 2025-10-11 09:36:56.762 2 WARNING nova.compute.manager [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received unexpected event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:36:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:36:58 np0005481065 nova_compute[260935]: 2025-10-11 09:36:58.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:59 np0005481065 nova_compute[260935]: 2025-10-11 09:36:59.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:36:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:36:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:36:59 np0005481065 podman[418597]: 2025-10-11 09:36:59.767763104 +0000 UTC m=+0.073587375 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:37:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:00Z|01591|binding|INFO|Releasing lport 33c7af53-54c4-4168-8c26-88465029f36a from this chassis (sb_readonly=0)
Oct 11 05:37:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:00Z|01592|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:37:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:00Z|01593|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:37:00 np0005481065 NetworkManager[44960]: <info>  [1760175420.2281] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Oct 11 05:37:00 np0005481065 NetworkManager[44960]: <info>  [1760175420.2299] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Oct 11 05:37:00 np0005481065 nova_compute[260935]: 2025-10-11 09:37:00.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:00Z|01594|binding|INFO|Releasing lport 33c7af53-54c4-4168-8c26-88465029f36a from this chassis (sb_readonly=0)
Oct 11 05:37:00 np0005481065 nova_compute[260935]: 2025-10-11 09:37:00.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:00Z|01595|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:37:00 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:00Z|01596|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:37:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 571 KiB/s wr, 82 op/s
Oct 11 05:37:01 np0005481065 nova_compute[260935]: 2025-10-11 09:37:01.374 2 DEBUG nova.compute.manager [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:01 np0005481065 nova_compute[260935]: 2025-10-11 09:37:01.374 2 DEBUG nova.compute.manager [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing instance network info cache due to event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:37:01 np0005481065 nova_compute[260935]: 2025-10-11 09:37:01.374 2 DEBUG oslo_concurrency.lockutils [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:37:01 np0005481065 nova_compute[260935]: 2025-10-11 09:37:01.375 2 DEBUG oslo_concurrency.lockutils [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:37:01 np0005481065 nova_compute[260935]: 2025-10-11 09:37:01.375 2 DEBUG nova.network.neutron [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:37:03 np0005481065 nova_compute[260935]: 2025-10-11 09:37:03.088 2 DEBUG nova.network.neutron [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updated VIF entry in instance network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:37:03 np0005481065 nova_compute[260935]: 2025-10-11 09:37:03.088 2 DEBUG nova.network.neutron [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:37:03 np0005481065 nova_compute[260935]: 2025-10-11 09:37:03.110 2 DEBUG oslo_concurrency.lockutils [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:37:03 np0005481065 nova_compute[260935]: 2025-10-11 09:37:03.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 571 KiB/s wr, 82 op/s
Oct 11 05:37:04 np0005481065 nova_compute[260935]: 2025-10-11 09:37:04.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:37:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:37:05 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 05:37:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:06Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:d6:78 10.100.0.13
Oct 11 05:37:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:06Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:d6:78 10.100.0.13
Oct 11 05:37:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:37:07 np0005481065 podman[418617]: 2025-10-11 09:37:07.755571853 +0000 UTC m=+0.056465255 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 05:37:08 np0005481065 nova_compute[260935]: 2025-10-11 09:37:08.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:09 np0005481065 nova_compute[260935]: 2025-10-11 09:37:09.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 11 05:37:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:37:11 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:37:11 np0005481065 podman[418641]: 2025-10-11 09:37:11.772243212 +0000 UTC m=+0.066842956 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:37:11 np0005481065 podman[418642]: 2025-10-11 09:37:11.828379396 +0000 UTC m=+0.109712758 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:37:13 np0005481065 nova_compute[260935]: 2025-10-11 09:37:13.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:37:14 np0005481065 nova_compute[260935]: 2025-10-11 09:37:14.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:14 np0005481065 podman[418860]: 2025-10-11 09:37:14.88768024 +0000 UTC m=+0.407159781 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:37:15 np0005481065 podman[418880]: 2025-10-11 09:37:15.084152971 +0000 UTC m=+0.069305485 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:37:15 np0005481065 podman[418860]: 2025-10-11 09:37:15.163226729 +0000 UTC m=+0.682706250 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:37:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:15.234 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:15.235 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:37:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:37:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:37:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:37:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ee4fd2c5-39b4-47a2-966f-16699a37a662 does not exist
Oct 11 05:37:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev db883301-b3a9-44d7-bc92-92e19c4bb7fd does not exist
Oct 11 05:37:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fcc630c8-6f9a-4baa-b3ad-149a3f957c02 does not exist
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:37:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:37:18 np0005481065 nova_compute[260935]: 2025-10-11 09:37:18.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:18 np0005481065 podman[419293]: 2025-10-11 09:37:18.171291506 +0000 UTC m=+0.026749631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:37:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:37:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:18 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:37:18 np0005481065 podman[419293]: 2025-10-11 09:37:18.37108195 +0000 UTC m=+0.226539985 container create e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:18 np0005481065 systemd[1]: Started libpod-conmon-e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9.scope.
Oct 11 05:37:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:37:18 np0005481065 podman[419293]: 2025-10-11 09:37:18.804196248 +0000 UTC m=+0.659654363 container init e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:37:18 np0005481065 podman[419293]: 2025-10-11 09:37:18.815580707 +0000 UTC m=+0.671038762 container start e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:18 np0005481065 interesting_nobel[419309]: 167 167
Oct 11 05:37:18 np0005481065 systemd[1]: libpod-e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9.scope: Deactivated successfully.
Oct 11 05:37:19 np0005481065 podman[419293]: 2025-10-11 09:37:19.009027223 +0000 UTC m=+0.864485258 container attach e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:19 np0005481065 podman[419293]: 2025-10-11 09:37:19.009415024 +0000 UTC m=+0.864873059 container died e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:37:19 np0005481065 nova_compute[260935]: 2025-10-11 09:37:19.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:37:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b3a9ce8e260c855a2fb27e2e00f9f05895a6ebbaf560fae4bdf49038cd7d1362-merged.mount: Deactivated successfully.
Oct 11 05:37:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:19 np0005481065 podman[419293]: 2025-10-11 09:37:19.91811214 +0000 UTC m=+1.773570205 container remove e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:37:20 np0005481065 systemd[1]: libpod-conmon-e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9.scope: Deactivated successfully.
Oct 11 05:37:20 np0005481065 podman[419333]: 2025-10-11 09:37:20.193242037 +0000 UTC m=+0.118079783 container create cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:37:20 np0005481065 podman[419333]: 2025-10-11 09:37:20.105828945 +0000 UTC m=+0.030666681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.231 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.233 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.259 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.342 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.343 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.360 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.360 2 INFO nova.compute.claims [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:37:20 np0005481065 systemd[1]: Started libpod-conmon-cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e.scope.
Oct 11 05:37:20 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:37:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:20 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:20 np0005481065 podman[419333]: 2025-10-11 09:37:20.462087287 +0000 UTC m=+0.386925053 container init cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:37:20 np0005481065 podman[419333]: 2025-10-11 09:37:20.471584684 +0000 UTC m=+0.396422440 container start cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:37:20 np0005481065 nova_compute[260935]: 2025-10-11 09:37:20.538 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:20 np0005481065 podman[419333]: 2025-10-11 09:37:20.653627179 +0000 UTC m=+0.578464915 container attach cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:37:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:37:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225361393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.052 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.062 2 DEBUG nova.compute.provider_tree [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.083 2 DEBUG nova.scheduler.client.report [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.126 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.127 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.175 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.177 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.195 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.212 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:37:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.295 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.299 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.300 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Creating image(s)#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.335 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.377 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.414 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.422 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:21 np0005481065 epic_bell[419350]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:37:21 np0005481065 epic_bell[419350]: --> relative data size: 1.0
Oct 11 05:37:21 np0005481065 epic_bell[419350]: --> All data devices are unavailable
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.506 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.507 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.508 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.508 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:21 np0005481065 systemd[1]: libpod-cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e.scope: Deactivated successfully.
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.544 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.551 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 27b27eae-7476-459a-b3e1-62199c81d4e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:21 np0005481065 podman[419458]: 2025-10-11 09:37:21.571088282 +0000 UTC m=+0.036655349 container died cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:37:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3-merged.mount: Deactivated successfully.
Oct 11 05:37:21 np0005481065 nova_compute[260935]: 2025-10-11 09:37:21.687 2 DEBUG nova.policy [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:37:21 np0005481065 podman[419458]: 2025-10-11 09:37:21.743629262 +0000 UTC m=+0.209196249 container remove cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 05:37:21 np0005481065 systemd[1]: libpod-conmon-cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e.scope: Deactivated successfully.
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.307 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 27b27eae-7476-459a-b3e1-62199c81d4e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.382 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.499346708 +0000 UTC m=+0.069447709 container create 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.458253445 +0000 UTC m=+0.028354486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:37:22 np0005481065 systemd[1]: Started libpod-conmon-59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11.scope.
Oct 11 05:37:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.676779814 +0000 UTC m=+0.246880865 container init 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.689 2 DEBUG nova.objects.instance [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 27b27eae-7476-459a-b3e1-62199c81d4e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.692833325 +0000 UTC m=+0.262934366 container start 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:37:22 np0005481065 kind_dijkstra[419723]: 167 167
Oct 11 05:37:22 np0005481065 systemd[1]: libpod-59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11.scope: Deactivated successfully.
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.706622191 +0000 UTC m=+0.276723252 container attach 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.708130574 +0000 UTC m=+0.278231605 container died 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.724 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.725 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Ensure instance console log exists: /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.726 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.727 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.727 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3e40c1cbe71f541f8e24b35359a6c320d245fedf6fbc580c87e9a7718ef3f963-merged.mount: Deactivated successfully.
Oct 11 05:37:22 np0005481065 nova_compute[260935]: 2025-10-11 09:37:22.782 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Successfully created port: 0110d16a-ba47-4f17-8196-6a5b11e482b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:37:22 np0005481065 podman[419707]: 2025-10-11 09:37:22.896396783 +0000 UTC m=+0.466497794 container remove 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 05:37:22 np0005481065 systemd[1]: libpod-conmon-59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11.scope: Deactivated successfully.
Oct 11 05:37:23 np0005481065 podman[419767]: 2025-10-11 09:37:23.093650166 +0000 UTC m=+0.021718101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:23 np0005481065 podman[419767]: 2025-10-11 09:37:23.233345124 +0000 UTC m=+0.161413039 container create b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:37:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 11 05:37:23 np0005481065 systemd[1]: Started libpod-conmon-b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060.scope.
Oct 11 05:37:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:37:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:23 np0005481065 podman[419767]: 2025-10-11 09:37:23.490883157 +0000 UTC m=+0.418951152 container init b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:37:23 np0005481065 podman[419767]: 2025-10-11 09:37:23.50919473 +0000 UTC m=+0.437262665 container start b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:23 np0005481065 podman[419767]: 2025-10-11 09:37:23.551551238 +0000 UTC m=+0.479619243 container attach b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.877 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Successfully updated port: 0110d16a-ba47-4f17-8196-6a5b11e482b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.894 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.894 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.894 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.974 2 DEBUG nova.compute.manager [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-changed-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.974 2 DEBUG nova.compute.manager [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Refreshing instance network info cache due to event network-changed-0110d16a-ba47-4f17-8196-6a5b11e482b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:37:23 np0005481065 nova_compute[260935]: 2025-10-11 09:37:23.975 2 DEBUG oslo_concurrency.lockutils [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.034 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]: {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:    "0": [
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:        {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "devices": [
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "/dev/loop3"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            ],
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_name": "ceph_lv0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_size": "21470642176",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "name": "ceph_lv0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "tags": {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cluster_name": "ceph",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.crush_device_class": "",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.encrypted": "0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osd_id": "0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.type": "block",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.vdo": "0"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            },
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "type": "block",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "vg_name": "ceph_vg0"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:        }
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:    ],
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:    "1": [
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:        {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "devices": [
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "/dev/loop4"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            ],
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_name": "ceph_lv1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_size": "21470642176",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "name": "ceph_lv1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "tags": {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cluster_name": "ceph",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.crush_device_class": "",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.encrypted": "0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osd_id": "1",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.type": "block",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.vdo": "0"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            },
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "type": "block",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "vg_name": "ceph_vg1"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:        }
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:    ],
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:    "2": [
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:        {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "devices": [
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "/dev/loop5"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            ],
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_name": "ceph_lv2",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_size": "21470642176",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "name": "ceph_lv2",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "tags": {
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.cluster_name": "ceph",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.crush_device_class": "",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.encrypted": "0",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osd_id": "2",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.type": "block",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:                "ceph.vdo": "0"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            },
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "type": "block",
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:            "vg_name": "ceph_vg2"
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:        }
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]:    ]
Oct 11 05:37:24 np0005481065 admiring_goodall[419783]: }
Oct 11 05:37:24 np0005481065 systemd[1]: libpod-b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060.scope: Deactivated successfully.
Oct 11 05:37:24 np0005481065 podman[419767]: 2025-10-11 09:37:24.288310073 +0000 UTC m=+1.216377998 container died b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:37:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39-merged.mount: Deactivated successfully.
Oct 11 05:37:24 np0005481065 podman[419767]: 2025-10-11 09:37:24.354956862 +0000 UTC m=+1.283024817 container remove b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:37:24 np0005481065 systemd[1]: libpod-conmon-b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060.scope: Deactivated successfully.
Oct 11 05:37:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:37:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.861 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updating instance_info_cache with network_info: [{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.902 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.903 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance network_info: |[{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.903 2 DEBUG oslo_concurrency.lockutils [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.903 2 DEBUG nova.network.neutron [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Refreshing network info cache for port 0110d16a-ba47-4f17-8196-6a5b11e482b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.907 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start _get_guest_xml network_info=[{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.914 2 WARNING nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.928 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.929 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.933 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.934 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.935 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.935 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.936 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.937 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.937 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.937 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.938 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.938 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.939 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.939 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.939 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.940 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:37:24 np0005481065 nova_compute[260935]: 2025-10-11 09:37:24.949 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.03658502 +0000 UTC m=+0.035080795 container create 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:37:25 np0005481065 systemd[1]: Started libpod-conmon-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope.
Oct 11 05:37:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.02266949 +0000 UTC m=+0.021165285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.118341623 +0000 UTC m=+0.116837418 container init 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.125592827 +0000 UTC m=+0.124088602 container start 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.128672713 +0000 UTC m=+0.127168488 container attach 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:37:25 np0005481065 ecstatic_villani[419963]: 167 167
Oct 11 05:37:25 np0005481065 systemd[1]: libpod-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope: Deactivated successfully.
Oct 11 05:37:25 np0005481065 conmon[419963]: conmon 2a1da028167916aa5b66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope/container/memory.events
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.131921034 +0000 UTC m=+0.130416809 container died 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:37:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1e9ea8233646edfb89339b23cdbe22a247620476d8590539113002f0e3faf47f-merged.mount: Deactivated successfully.
Oct 11 05:37:25 np0005481065 podman[419947]: 2025-10-11 09:37:25.165860616 +0000 UTC m=+0.164356391 container remove 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:37:25 np0005481065 systemd[1]: libpod-conmon-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope: Deactivated successfully.
Oct 11 05:37:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Oct 11 05:37:25 np0005481065 podman[420005]: 2025-10-11 09:37:25.374378094 +0000 UTC m=+0.053893912 container create 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:37:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:37:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599478162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.419 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:25 np0005481065 systemd[1]: Started libpod-conmon-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope.
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.448 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:37:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:25 np0005481065 podman[420005]: 2025-10-11 09:37:25.358570861 +0000 UTC m=+0.038086689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:37:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:37:25 np0005481065 podman[420005]: 2025-10-11 09:37:25.463856234 +0000 UTC m=+0.143372062 container init 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.467 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:25 np0005481065 podman[420005]: 2025-10-11 09:37:25.472310041 +0000 UTC m=+0.151825839 container start 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 05:37:25 np0005481065 podman[420005]: 2025-10-11 09:37:25.477139087 +0000 UTC m=+0.156654915 container attach 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:37:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:37:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024536959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.867 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.871 2 DEBUG nova.virt.libvirt.vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:37:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=142,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-0rby20lg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:37:21Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=27b27eae-7476-459a-b3e1-62199c81d4e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.871 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.873 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.875 2 DEBUG nova.objects.instance [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27b27eae-7476-459a-b3e1-62199c81d4e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.894 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <uuid>27b27eae-7476-459a-b3e1-62199c81d4e1</uuid>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <name>instance-0000008e</name>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093</nova:name>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:37:24</nova:creationTime>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <nova:port uuid="0110d16a-ba47-4f17-8196-6a5b11e482b3">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <entry name="serial">27b27eae-7476-459a-b3e1-62199c81d4e1</entry>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <entry name="uuid">27b27eae-7476-459a-b3e1-62199c81d4e1</entry>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/27b27eae-7476-459a-b3e1-62199c81d4e1_disk">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:32:9d:c0"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <target dev="tap0110d16a-ba"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/console.log" append="off"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:37:25 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:37:25 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:37:25 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:37:25 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.895 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Preparing to wait for external event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.895 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.896 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.896 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.897 2 DEBUG nova.virt.libvirt.vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:37:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=142,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-0rby20lg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:37:21Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=27b27eae-7476-459a-b3e1-62199c81d4e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.898 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.899 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.900 2 DEBUG os_vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0110d16a-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0110d16a-ba, col_values=(('external_ids', {'iface-id': '0110d16a-ba47-4f17-8196-6a5b11e482b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:9d:c0', 'vm-uuid': '27b27eae-7476-459a-b3e1-62199c81d4e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:25 np0005481065 NetworkManager[44960]: <info>  [1760175445.9135] manager: (tap0110d16a-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.922 2 INFO os_vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba')#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.995 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.996 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.996 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:32:9d:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:37:25 np0005481065 nova_compute[260935]: 2025-10-11 09:37:25.997 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Using config drive#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.025 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.118 2 DEBUG nova.network.neutron [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updated VIF entry in instance network info cache for port 0110d16a-ba47-4f17-8196-6a5b11e482b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.119 2 DEBUG nova.network.neutron [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updating instance_info_cache with network_info: [{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.133 2 DEBUG oslo_concurrency.lockutils [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.284 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Creating config drive at /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.295 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuo5ffxd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.444 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuo5ffxd" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:26 np0005481065 magical_galileo[420024]: {
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "osd_id": 2,
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "type": "bluestore"
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:    },
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "osd_id": 0,
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "type": "bluestore"
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:    },
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "osd_id": 1,
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:        "type": "bluestore"
Oct 11 05:37:26 np0005481065 magical_galileo[420024]:    }
Oct 11 05:37:26 np0005481065 magical_galileo[420024]: }
Oct 11 05:37:26 np0005481065 systemd[1]: libpod-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope: Deactivated successfully.
Oct 11 05:37:26 np0005481065 systemd[1]: libpod-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope: Consumed 1.014s CPU time.
Oct 11 05:37:26 np0005481065 conmon[420024]: conmon 6bf6e523c579205dead9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope/container/memory.events
Oct 11 05:37:26 np0005481065 podman[420005]: 2025-10-11 09:37:26.491509356 +0000 UTC m=+1.171025184 container died 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.494 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.498 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d-merged.mount: Deactivated successfully.
Oct 11 05:37:26 np0005481065 podman[420005]: 2025-10-11 09:37:26.542767074 +0000 UTC m=+1.222282872 container remove 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:37:26 np0005481065 systemd[1]: libpod-conmon-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope: Deactivated successfully.
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 244937b3-5c8a-43ad-8c84-b6770ade7700 does not exist
Oct 11 05:37:26 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fc28c099-bb94-45ce-97db-ceab283f5687 does not exist
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71404979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:37:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71404979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.686 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.687 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deleting local config drive /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config because it was imported into RBD.#033[00m
Oct 11 05:37:26 np0005481065 kernel: tap0110d16a-ba: entered promiscuous mode
Oct 11 05:37:26 np0005481065 NetworkManager[44960]: <info>  [1760175446.7476] manager: (tap0110d16a-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Oct 11 05:37:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:26Z|01597|binding|INFO|Claiming lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 for this chassis.
Oct 11 05:37:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:26Z|01598|binding|INFO|0110d16a-ba47-4f17-8196-6a5b11e482b3: Claiming fa:16:3e:32:9d:c0 10.100.0.11
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.762 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:9d:c0 10.100.0.11'], port_security=['fa:16:3e:32:9d:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27b27eae-7476-459a-b3e1-62199c81d4e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0110d16a-ba47-4f17-8196-6a5b11e482b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.764 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0110d16a-ba47-4f17-8196-6a5b11e482b3 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 bound to our chassis#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.766 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03aa96fb-1646-4924-8eea-e7c8a5518a31#033[00m
Oct 11 05:37:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:26Z|01599|binding|INFO|Setting lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 ovn-installed in OVS
Oct 11 05:37:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:26Z|01600|binding|INFO|Setting lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 up in Southbound
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c7689b-4687-4914-8813-92986e269c69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:26 np0005481065 systemd-machined[215705]: New machine qemu-166-instance-0000008e.
Oct 11 05:37:26 np0005481065 systemd[1]: Started Virtual Machine qemu-166-instance-0000008e.
Oct 11 05:37:26 np0005481065 systemd-udevd[420237]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.834 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[19973a50-bc68-4836-b1ff-cb33476062a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.838 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba11c0b-1b77-4fe7-a787-fa01e3f4d916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:26 np0005481065 NetworkManager[44960]: <info>  [1760175446.8485] device (tap0110d16a-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:37:26 np0005481065 NetworkManager[44960]: <info>  [1760175446.8496] device (tap0110d16a-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.871 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d9368337-8a66-4143-acaa-878d2d61a0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[deaf0e55-047b-4d9b-8cdf-4e1c832f080b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420247, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.962 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4938878-a2ac-43d5-86a3-bdc8707e905b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725934, 'tstamp': 725934}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420249, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725937, 'tstamp': 725937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420249, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:26 np0005481065 nova_compute[260935]: 2025-10-11 09:37:26.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03aa96fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03aa96fb-10, col_values=(('external_ids', {'iface-id': '33c7af53-54c4-4168-8c26-88465029f36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:26 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.969 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:37:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Oct 11 05:37:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.897 2 DEBUG nova.compute.manager [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.897 2 DEBUG oslo_concurrency.lockutils [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.898 2 DEBUG oslo_concurrency.lockutils [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.898 2 DEBUG oslo_concurrency.lockutils [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.898 2 DEBUG nova.compute.manager [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Processing event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.904 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.905 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175447.9050891, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.906 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Started (Lifecycle Event)#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.910 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.914 2 INFO nova.virt.libvirt.driver [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance spawned successfully.#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.914 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.932 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.938 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.943 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.944 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.944 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.945 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.945 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.946 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.968 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.969 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175447.9055552, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.969 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:37:27 np0005481065 nova_compute[260935]: 2025-10-11 09:37:27.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.000 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175447.9090703, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.005 2 INFO nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 6.71 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.005 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.019 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.023 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.058 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.076 2 INFO nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 7.77 seconds to build instance.#033[00m
Oct 11 05:37:28 np0005481065 nova_compute[260935]: 2025-10-11 09:37:28.093 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:29 np0005481065 nova_compute[260935]: 2025-10-11 09:37:29.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:37:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.016 2 DEBUG nova.compute.manager [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.016 2 DEBUG oslo_concurrency.lockutils [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.017 2 DEBUG oslo_concurrency.lockutils [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.018 2 DEBUG oslo_concurrency.lockutils [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.018 2 DEBUG nova.compute.manager [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] No waiting events found dispatching network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.018 2 WARNING nova.compute.manager [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received unexpected event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:37:30 np0005481065 podman[420292]: 2025-10-11 09:37:30.808381984 +0000 UTC m=+0.103580687 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 05:37:30 np0005481065 nova_compute[260935]: 2025-10-11 09:37:30.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 05:37:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:31.844 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:37:31 np0005481065 nova_compute[260935]: 2025-10-11 09:37:31.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:31 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:31.846 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:37:32 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:32.848 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:37:34 np0005481065 nova_compute[260935]: 2025-10-11 09:37:34.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Oct 11 05:37:35 np0005481065 nova_compute[260935]: 2025-10-11 09:37:35.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Oct 11 05:37:38 np0005481065 podman[420311]: 2025-10-11 09:37:38.786709765 +0000 UTC m=+0.087572577 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:37:39 np0005481065 nova_compute[260935]: 2025-10-11 09:37:39.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 78 op/s
Oct 11 05:37:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:40Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:9d:c0 10.100.0.11
Oct 11 05:37:40 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:40Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:9d:c0 10.100.0.11
Oct 11 05:37:40 np0005481065 nova_compute[260935]: 2025-10-11 09:37:40.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 11 05:37:42 np0005481065 podman[420331]: 2025-10-11 09:37:42.814506064 +0000 UTC m=+0.104567464 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 05:37:42 np0005481065 podman[420332]: 2025-10-11 09:37:42.855936366 +0000 UTC m=+0.140012008 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 05:37:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 11 05:37:44 np0005481065 nova_compute[260935]: 2025-10-11 09:37:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.128 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.129 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.130 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.130 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.131 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.133 2 INFO nova.compute.manager [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Terminating instance#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.135 2 DEBUG nova.compute.manager [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:37:45 np0005481065 kernel: tap0110d16a-ba (unregistering): left promiscuous mode
Oct 11 05:37:45 np0005481065 NetworkManager[44960]: <info>  [1760175465.2009] device (tap0110d16a-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:37:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:45Z|01601|binding|INFO|Releasing lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 from this chassis (sb_readonly=0)
Oct 11 05:37:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:45Z|01602|binding|INFO|Setting lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 down in Southbound
Oct 11 05:37:45 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:45Z|01603|binding|INFO|Removing iface tap0110d16a-ba ovn-installed in OVS
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.268 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:9d:c0 10.100.0.11'], port_security=['fa:16:3e:32:9d:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27b27eae-7476-459a-b3e1-62199c81d4e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0110d16a-ba47-4f17-8196-6a5b11e482b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.269 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0110d16a-ba47-4f17-8196-6a5b11e482b3 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 unbound from our chassis#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.272 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03aa96fb-1646-4924-8eea-e7c8a5518a31#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.293 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d441df3b-d975-4d64-83fd-deeb8214b792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:45 np0005481065 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 11 05:37:45 np0005481065 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d0000008e.scope: Consumed 12.959s CPU time.
Oct 11 05:37:45 np0005481065 systemd-machined[215705]: Machine qemu-166-instance-0000008e terminated.
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.337 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3a393c0e-48ae-4a02-b789-35ac028646d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.341 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4366f2fc-6405-4253-b64b-ed70c1aa6dbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.386 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4409d7-fe7c-4743-9750-6ce27fabd3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.395 2 INFO nova.virt.libvirt.driver [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance destroyed successfully.#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.396 2 DEBUG nova.objects.instance [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 27b27eae-7476-459a-b3e1-62199c81d4e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.411 2 DEBUG nova.virt.libvirt.vif [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:37:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=142,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:37:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-0rby20lg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:37:28Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=27b27eae-7476-459a-b3e1-62199c81d4e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.412 2 DEBUG nova.network.os_vif_util [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.413 2 DEBUG nova.network.os_vif_util [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.413 2 DEBUG os_vif [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0110d16a-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.418 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a066f576-bcbc-4d3a-af64-98f34c28d5ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420395, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.430 2 INFO os_vif [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba')#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.443 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a84768-34fe-4a88-9b1b-4417281d9d80]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725934, 'tstamp': 725934}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420397, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725937, 'tstamp': 725937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420397, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.445 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.449 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03aa96fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.450 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.451 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03aa96fb-10, col_values=(('external_ids', {'iface-id': '33c7af53-54c4-4168-8c26-88465029f36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:45 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.452 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.849 2 DEBUG nova.compute.manager [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-unplugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG oslo_concurrency.lockutils [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG oslo_concurrency.lockutils [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG oslo_concurrency.lockutils [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG nova.compute.manager [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] No waiting events found dispatching network-vif-unplugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG nova.compute.manager [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-unplugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.865 2 INFO nova.virt.libvirt.driver [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deleting instance files /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1_del#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.866 2 INFO nova.virt.libvirt.driver [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deletion of /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1_del complete#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.918 2 INFO nova.compute.manager [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.918 2 DEBUG oslo.service.loopingcall [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.919 2 DEBUG nova.compute.manager [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:37:45 np0005481065 nova_compute[260935]: 2025-10-11 09:37:45.919 2 DEBUG nova.network.neutron [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:37:46 np0005481065 nova_compute[260935]: 2025-10-11 09:37:46.691 2 DEBUG nova.network.neutron [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:37:46 np0005481065 nova_compute[260935]: 2025-10-11 09:37:46.712 2 INFO nova.compute.manager [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 0.79 seconds to deallocate network for instance.#033[00m
Oct 11 05:37:46 np0005481065 nova_compute[260935]: 2025-10-11 09:37:46.773 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:46 np0005481065 nova_compute[260935]: 2025-10-11 09:37:46.773 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:46 np0005481065 nova_compute[260935]: 2025-10-11 09:37:46.900 2 DEBUG oslo_concurrency.processutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:37:47 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:37:47 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255123041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.366 2 DEBUG oslo_concurrency.processutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.373 2 DEBUG nova.compute.provider_tree [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.388 2 DEBUG nova.scheduler.client.report [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.406 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.458 2 INFO nova.scheduler.client.report [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 27b27eae-7476-459a-b3e1-62199c81d4e1#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.530 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.931 2 DEBUG nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.932 2 DEBUG oslo_concurrency.lockutils [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.932 2 DEBUG oslo_concurrency.lockutils [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.932 2 DEBUG oslo_concurrency.lockutils [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.933 2 DEBUG nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] No waiting events found dispatching network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.933 2 WARNING nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received unexpected event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:37:47 np0005481065 nova_compute[260935]: 2025-10-11 09:37:47.933 2 DEBUG nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-deleted-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.906 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.907 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.907 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.907 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.980 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.980 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.981 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.981 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.982 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.984 2 INFO nova.compute.manager [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Terminating instance#033[00m
Oct 11 05:37:48 np0005481065 nova_compute[260935]: 2025-10-11 09:37:48.985 2 DEBUG nova.compute.manager [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:37:49 np0005481065 kernel: tap25d7271a-bc (unregistering): left promiscuous mode
Oct 11 05:37:49 np0005481065 NetworkManager[44960]: <info>  [1760175469.0381] device (tap25d7271a-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:49Z|01604|binding|INFO|Releasing lport 25d7271a-bce8-4388-991e-e7069da0eff1 from this chassis (sb_readonly=0)
Oct 11 05:37:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:49Z|01605|binding|INFO|Setting lport 25d7271a-bce8-4388-991e-e7069da0eff1 down in Southbound
Oct 11 05:37:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:49Z|01606|binding|INFO|Removing iface tap25d7271a-bc ovn-installed in OVS
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.056 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:d6:78 10.100.0.13'], port_security=['fa:16:3e:a1:d6:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1f73fb12-6b6e-4491-81a4-51fb66ffb310', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fdfee67-b394-4b27-bc8b-a70f0c2dfabe aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25d7271a-bce8-4388-991e-e7069da0eff1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.058 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25d7271a-bce8-4388-991e-e7069da0eff1 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 unbound from our chassis#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.059 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03aa96fb-1646-4924-8eea-e7c8a5518a31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cff60040-b244-489e-a993-6915bb33c3a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.060 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 namespace which is not needed anymore#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 11 05:37:49 np0005481065 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d0000008d.scope: Consumed 14.846s CPU time.
Oct 11 05:37:49 np0005481065 systemd-machined[215705]: Machine qemu-165-instance-0000008d terminated.
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.219 2 INFO nova.virt.libvirt.driver [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance destroyed successfully.#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.220 2 DEBUG nova.objects.instance [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 1f73fb12-6b6e-4491-81a4-51fb66ffb310 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : haproxy version is 2.8.14-c23fe91
Oct 11 05:37:49 np0005481065 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : path to executable is /usr/sbin/haproxy
Oct 11 05:37:49 np0005481065 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [ALERT]    (418586) : Current worker (418588) exited with code 143 (Terminated)
Oct 11 05:37:49 np0005481065 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [WARNING]  (418586) : All workers exited. Exiting... (0)
Oct 11 05:37:49 np0005481065 systemd[1]: libpod-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5.scope: Deactivated successfully.
Oct 11 05:37:49 np0005481065 podman[420464]: 2025-10-11 09:37:49.256172375 +0000 UTC m=+0.065973261 container died f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:37:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:37:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1249ac7e806de3a44795ea0929a99531493013ad6047c5dc235ff82a29920ead-merged.mount: Deactivated successfully.
Oct 11 05:37:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5-userdata-shm.mount: Deactivated successfully.
Oct 11 05:37:49 np0005481065 podman[420464]: 2025-10-11 09:37:49.298326217 +0000 UTC m=+0.108127123 container cleanup f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:37:49 np0005481065 systemd[1]: libpod-conmon-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5.scope: Deactivated successfully.
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.344 2 DEBUG nova.virt.libvirt.vif [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:36:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=141,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:36:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-1r2upbl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:36:55Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=1f73fb12-6b6e-4491-81a4-51fb66ffb310,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.345 2 DEBUG nova.network.os_vif_util [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.346 2 DEBUG nova.network.os_vif_util [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.346 2 DEBUG os_vif [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25d7271a-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.355 2 INFO os_vif [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc')#033[00m
Oct 11 05:37:49 np0005481065 podman[420505]: 2025-10-11 09:37:49.360348747 +0000 UTC m=+0.040388894 container remove f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e655cc66-9971-4446-91d0-8626d86da726]: (4, ('Sat Oct 11 09:37:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 (f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5)\nf6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5\nSat Oct 11 09:37:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 (f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5)\nf6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.369 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10ead976-4267-4782-bfb1-aa1c623b3191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.370 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 kernel: tap03aa96fb-10: left promiscuous mode
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a62774b6-397d-4ebd-810e-c22033d8b33d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.403 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05344d1c-78a6-445c-b84e-db39e89eacc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.405 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12c4bb35-a025-45bb-81e7-f4f530206e8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.423 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dfad55cc-59fd-4d2d-a082-57916836d127]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725907, 'reachable_time': 40693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420538, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.426 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:37:49 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.426 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[951a96d1-c632-4676-9122-e09f2dc064d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:37:49 np0005481065 systemd[1]: run-netns-ovnmeta\x2d03aa96fb\x2d1646\x2d4924\x2d8eea\x2de7c8a5518a31.mount: Deactivated successfully.
Oct 11 05:37:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.792 2 INFO nova.virt.libvirt.driver [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deleting instance files /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310_del#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.793 2 INFO nova.virt.libvirt.driver [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deletion of /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310_del complete#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.887 2 INFO nova.compute.manager [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.888 2 DEBUG oslo.service.loopingcall [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.889 2 DEBUG nova.compute.manager [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:37:49 np0005481065 nova_compute[260935]: 2025-10-11 09:37:49.889 2 DEBUG nova.network.neutron [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:37:50 np0005481065 nova_compute[260935]: 2025-10-11 09:37:50.111 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:50 np0005481065 nova_compute[260935]: 2025-10-11 09:37:50.111 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing instance network info cache due to event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:37:50 np0005481065 nova_compute[260935]: 2025-10-11 09:37:50.112 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:37:50 np0005481065 nova_compute[260935]: 2025-10-11 09:37:50.113 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:37:50 np0005481065 nova_compute[260935]: 2025-10-11 09:37:50.113 2 DEBUG nova.network.neutron [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:37:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.734 2 DEBUG nova.network.neutron [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.757 2 INFO nova.compute.manager [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 1.87 seconds to deallocate network for instance.#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.816 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.816 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.819 2 DEBUG nova.compute.manager [req-c56ce2a9-c4eb-4c3b-8fbd-f469ea142eae req-0d20d5ab-080f-4caf-b065-3bc81ac31782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-deleted-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.924 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.944 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.945 2 DEBUG oslo_concurrency.processutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.997 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:51 np0005481065 nova_compute[260935]: 2025-10-11 09:37:51.999 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.025 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.225 2 DEBUG nova.network.neutron [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updated VIF entry in instance network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.226 2 DEBUG nova.network.neutron [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.252 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.253 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-unplugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.253 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.253 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] No waiting events found dispatching network-vif-unplugged-25d7271a-bce8-4388-991e-e7069da0eff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-unplugged-25d7271a-bce8-4388-991e-e7069da0eff1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.255 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.255 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.255 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] No waiting events found dispatching network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.255 2 WARNING nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received unexpected event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:37:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:37:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726880311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.402 2 DEBUG oslo_concurrency.processutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.409 2 DEBUG nova.compute.provider_tree [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.427 2 DEBUG nova.scheduler.client.report [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.459 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.462 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.462 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.463 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.463 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.537 2 INFO nova.scheduler.client.report [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 1f73fb12-6b6e-4491-81a4-51fb66ffb310#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.598 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:52 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:37:52 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108814941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.889 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.972 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.973 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.973 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.977 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.977 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.981 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:52 np0005481065 nova_compute[260935]: 2025-10-11 09:37:52.981 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.258 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:37:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.260 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2822MB free_disk=59.78506851196289GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.261 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.261 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.329 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.330 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.330 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.330 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.332 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.402 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:37:53 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:37:53 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874628231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.840 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.844 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.859 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.875 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:37:53 np0005481065 nova_compute[260935]: 2025-10-11 09:37:53.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:37:54 np0005481065 nova_compute[260935]: 2025-10-11 09:37:54.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:54 np0005481065 nova_compute[260935]: 2025-10-11 09:37:54.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:54 np0005481065 nova_compute[260935]: 2025-10-11 09:37:54.579 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:54 np0005481065 nova_compute[260935]: 2025-10-11 09:37:54.580 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:37:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:37:55
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta']
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:37:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:37:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:55Z|01607|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:37:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:55Z|01608|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:37:55 np0005481065 nova_compute[260935]: 2025-10-11 09:37:55.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:55Z|01609|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:37:55 np0005481065 ovn_controller[152945]: 2025-10-11T09:37:55Z|01610|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:37:55 np0005481065 nova_compute[260935]: 2025-10-11 09:37:55.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 05:37:58 np0005481065 nova_compute[260935]: 2025-10-11 09:37:58.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:37:59 np0005481065 nova_compute[260935]: 2025-10-11 09:37:59.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 05:37:59 np0005481065 nova_compute[260935]: 2025-10-11 09:37:59.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:37:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:00 np0005481065 nova_compute[260935]: 2025-10-11 09:38:00.393 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175465.3918967, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:38:00 np0005481065 nova_compute[260935]: 2025-10-11 09:38:00.394 2 INFO nova.compute.manager [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:38:00 np0005481065 nova_compute[260935]: 2025-10-11 09:38:00.413 2 DEBUG nova.compute.manager [None req-9c74a2de-46d2-49fb-8c59-0f471a1c290d - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:38:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:38:01 np0005481065 podman[420609]: 2025-10-11 09:38:01.798499717 +0000 UTC m=+0.093037821 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:38:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:38:04 np0005481065 nova_compute[260935]: 2025-10-11 09:38:04.217 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175469.2156253, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:38:04 np0005481065 nova_compute[260935]: 2025-10-11 09:38:04.218 2 INFO nova.compute.manager [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:38:04 np0005481065 nova_compute[260935]: 2025-10-11 09:38:04.239 2 DEBUG nova.compute.manager [None req-1bc4dd52-2b35-4183-8885-dbc56bbe2449 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:38:04 np0005481065 nova_compute[260935]: 2025-10-11 09:38:04.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:04 np0005481065 nova_compute[260935]: 2025-10-11 09:38:04.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:38:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.593 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.593 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.612 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.704 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.704 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.713 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.714 2 INFO nova.compute.claims [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:38:05 np0005481065 nova_compute[260935]: 2025-10-11 09:38:05.877 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:38:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731794348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.392 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.402 2 DEBUG nova.compute.provider_tree [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.422 2 DEBUG nova.scheduler.client.report [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.451 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.452 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.526 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.527 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.568 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.600 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.708 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.710 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.711 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Creating image(s)#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.744 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.783 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.825 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.833 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.884 2 DEBUG nova.policy [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.932 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.933 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.933 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.934 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.967 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:06 np0005481065 nova_compute[260935]: 2025-10-11 09:38:06.973 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.361 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.441 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.539 2 DEBUG nova.objects.instance [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.556 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.557 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Ensure instance console log exists: /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.557 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.558 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.558 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:07 np0005481065 nova_compute[260935]: 2025-10-11 09:38:07.775 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Successfully created port: 927e4c78-95c0-407e-ab92-2c7457c30f72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.692 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Successfully updated port: 927e4c78-95c0-407e-ab92-2c7457c30f72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.705 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.705 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.705 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.807 2 DEBUG nova.compute.manager [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.807 2 DEBUG nova.compute.manager [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing instance network info cache due to event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.808 2 DEBUG oslo_concurrency.lockutils [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:38:08 np0005481065 nova_compute[260935]: 2025-10-11 09:38:08.865 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:38:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.719 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.741 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.741 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance network_info: |[{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.742 2 DEBUG oslo_concurrency.lockutils [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.742 2 DEBUG nova.network.neutron [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.751 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start _get_guest_xml network_info=[{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.758 2 WARNING nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.769 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.770 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.775 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.776 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.776 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.777 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.778 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.778 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.779 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.779 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.780 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.780 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.781 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.781 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.782 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.783 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:38:09 np0005481065 nova_compute[260935]: 2025-10-11 09:38:09.788 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:09 np0005481065 podman[420817]: 2025-10-11 09:38:09.809419482 +0000 UTC m=+0.093840383 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:38:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:38:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074118747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.315 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.356 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.362 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.804 2 DEBUG nova.network.neutron [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updated VIF entry in instance network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.805 2 DEBUG nova.network.neutron [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.822 2 DEBUG oslo_concurrency.lockutils [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:38:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:38:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368580161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.859 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.862 2 DEBUG nova.virt.libvirt.vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=143,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDeAiXe5BxSlKhTWUA0h8drhicmm4lHT+GCh5IJOhzOrvcV2CoWAUh+e7G/x4nbCWm9Jvt6uJONBp2OY5zPdoAKrYlSLLfE4Vnt6Ll8at8odUFHqxgC501PsHRAdCCNJdw==',key_name='tempest-TestSecurityGroupsBasicOps-904423859',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-9nlmw0id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:38:06Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.862 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.864 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.866 2 DEBUG nova.objects.instance [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.882 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <uuid>d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b</uuid>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <name>instance-0000008f</name>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224</nova:name>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:38:09</nova:creationTime>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <nova:port uuid="927e4c78-95c0-407e-ab92-2c7457c30f72">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <entry name="serial">d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b</entry>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <entry name="uuid">d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b</entry>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ff:f3:13"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <target dev="tap927e4c78-95"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/console.log" append="off"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:38:10 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:38:10 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:38:10 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:38:10 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.884 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Preparing to wait for external event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.885 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.885 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.886 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.887 2 DEBUG nova.virt.libvirt.vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=143,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDeAiXe5BxSlKhTWUA0h8drhicmm4lHT+GCh5IJOhzOrvcV2CoWAUh+e7G/x4nbCWm9Jvt6uJONBp2OY5zPdoAKrYlSLLfE4Vnt6Ll8at8odUFHqxgC501PsHRAdCCNJdw==',key_name='tempest-TestSecurityGroupsBasicOps-904423859',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-9nlmw0id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:38:06Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.888 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.889 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.890 2 DEBUG os_vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap927e4c78-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap927e4c78-95, col_values=(('external_ids', {'iface-id': '927e4c78-95c0-407e-ab92-2c7457c30f72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:f3:13', 'vm-uuid': 'd61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:10 np0005481065 NetworkManager[44960]: <info>  [1760175490.9033] manager: (tap927e4c78-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.914 2 INFO os_vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95')#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.976 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.976 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.977 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:ff:f3:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:38:10 np0005481065 nova_compute[260935]: 2025-10-11 09:38:10.977 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Using config drive#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.004 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.283 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Creating config drive at /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.293 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygd86x75 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.451 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygd86x75" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.492 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.497 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.685 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.687 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deleting local config drive /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config because it was imported into RBD.#033[00m
Oct 11 05:38:11 np0005481065 kernel: tap927e4c78-95: entered promiscuous mode
Oct 11 05:38:11 np0005481065 NetworkManager[44960]: <info>  [1760175491.7597] manager: (tap927e4c78-95): new Tun device (/org/freedesktop/NetworkManager/Devices/615)
Oct 11 05:38:11 np0005481065 systemd-udevd[420968]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:38:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:11Z|01611|binding|INFO|Claiming lport 927e4c78-95c0-407e-ab92-2c7457c30f72 for this chassis.
Oct 11 05:38:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:11Z|01612|binding|INFO|927e4c78-95c0-407e-ab92-2c7457c30f72: Claiming fa:16:3e:ff:f3:13 10.100.0.5
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:11 np0005481065 NetworkManager[44960]: <info>  [1760175491.8252] device (tap927e4c78-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.825 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:f3:13 10.100.0.5'], port_security=['fa:16:3e:ff:f3:13 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b488df-4f94-4347-9251-9c74a85cff63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '007bd862-2a46-4a84-aea1-941ef61ceb34 a1c26107-dbf4-42a0-ae54-f39f52094af5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f689b1-ae42-4f44-bdc0-840884eacd12, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=927e4c78-95c0-407e-ab92-2c7457c30f72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:38:11 np0005481065 NetworkManager[44960]: <info>  [1760175491.8266] device (tap927e4c78-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.826 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 927e4c78-95c0-407e-ab92-2c7457c30f72 in datapath d0b488df-4f94-4347-9251-9c74a85cff63 bound to our chassis#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.828 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b488df-4f94-4347-9251-9c74a85cff63#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.851 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac03e7b8-6766-4a7e-b6ae-55dfe8e416e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.854 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0b488df-41 in ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:38:11 np0005481065 systemd-machined[215705]: New machine qemu-167-instance-0000008f.
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.856 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0b488df-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.856 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[410f5ae8-b78b-4e09-ab8c-3f390f332bf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.857 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a883861c-0875-439d-8aec-a46b76105a5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.870 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0907bf-5726-422b-afda-20b12ecb484f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 systemd[1]: Started Virtual Machine qemu-167-instance-0000008f.
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.900 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c152aef-4967-4362-8665-8fc5b0a124c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:11Z|01613|binding|INFO|Setting lport 927e4c78-95c0-407e-ab92-2c7457c30f72 ovn-installed in OVS
Oct 11 05:38:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:11Z|01614|binding|INFO|Setting lport 927e4c78-95c0-407e-ab92-2c7457c30f72 up in Southbound
Oct 11 05:38:11 np0005481065 nova_compute[260935]: 2025-10-11 09:38:11.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.950 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4eb8c2-486b-40b5-b219-22388ca1c33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 NetworkManager[44960]: <info>  [1760175491.9577] manager: (tapd0b488df-40): new Veth device (/org/freedesktop/NetworkManager/Devices/616)
Oct 11 05:38:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.955 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d3c52-60a0-4470-a60d-913cf1618a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:11 np0005481065 systemd-udevd[420971]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[761ad54d-c54c-4452-862e-8aed15e84203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.004 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f409afb7-57a1-42cd-9a48-3818830ed4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 NetworkManager[44960]: <info>  [1760175492.0301] device (tapd0b488df-40): carrier: link connected
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.036 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[870d60e7-17fa-42cd-934c-a3d861b2927a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.059 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6779d3b4-2f2b-4240-9d60-758fb2efef31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b488df-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:41:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733673, 'reachable_time': 36704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421004, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.085 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e761712b-6d3b-4738-a773-8e7c089c1ad6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:41da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733673, 'tstamp': 733673}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421005, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.113 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cf7e2d-289f-4b78-b887-1c259280faa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b488df-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:41:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733673, 'reachable_time': 36704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421006, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.156 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0344909-25ec-4c83-919d-907d83cb3e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.225 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3a0610-739b-4ad5-b771-94489f2c0633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b488df-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.228 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b488df-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:12 np0005481065 NetworkManager[44960]: <info>  [1760175492.2310] manager: (tapd0b488df-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Oct 11 05:38:12 np0005481065 kernel: tapd0b488df-40: entered promiscuous mode
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.238 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b488df-40, col_values=(('external_ids', {'iface-id': '42365f39-7cdb-44f9-ba26-d9bbfa355140'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:12Z|01615|binding|INFO|Releasing lport 42365f39-7cdb-44f9-ba26-d9bbfa355140 from this chassis (sb_readonly=0)
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.271 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0b488df-4f94-4347-9251-9c74a85cff63.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0b488df-4f94-4347-9251-9c74a85cff63.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.272 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7f4542-f31f-4dd7-a18c-77b38791129a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.273 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-d0b488df-4f94-4347-9251-9c74a85cff63
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/d0b488df-4f94-4347-9251-9c74a85cff63.pid.haproxy
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID d0b488df-4f94-4347-9251-9c74a85cff63
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 05:38:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.274 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'env', 'PROCESS_TAG=haproxy-d0b488df-4f94-4347-9251-9c74a85cff63', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0b488df-4f94-4347-9251-9c74a85cff63.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 05:38:12 np0005481065 podman[421080]: 2025-10-11 09:38:12.682986918 +0000 UTC m=+0.057334999 container create e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:38:12 np0005481065 systemd[1]: Started libpod-conmon-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6.scope.
Oct 11 05:38:12 np0005481065 podman[421080]: 2025-10-11 09:38:12.651843325 +0000 UTC m=+0.026191426 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:38:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4cf93020fc98090af1b4c5a6be3fdf7b017ecf3820002b4c2da663e5a79eaa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:12 np0005481065 podman[421080]: 2025-10-11 09:38:12.786692347 +0000 UTC m=+0.161040468 container init e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.794 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175492.7935276, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.794 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Started (Lifecycle Event)
Oct 11 05:38:12 np0005481065 podman[421080]: 2025-10-11 09:38:12.799064174 +0000 UTC m=+0.173412295 container start e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.821 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.824 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175492.7948625, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.825 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Paused (Lifecycle Event)
Oct 11 05:38:12 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : New worker (421101) forked
Oct 11 05:38:12 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : Loading success.
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.846 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.850 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.865 2 DEBUG nova.compute.manager [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.866 2 DEBUG oslo_concurrency.lockutils [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.866 2 DEBUG oslo_concurrency.lockutils [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.867 2 DEBUG oslo_concurrency.lockutils [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.867 2 DEBUG nova.compute.manager [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Processing event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.868 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.869 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.871 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175492.8712468, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.871 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Resumed (Lifecycle Event)
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.873 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.880 2 INFO nova.virt.libvirt.driver [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance spawned successfully.
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.880 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.887 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.890 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.900 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.901 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.901 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.902 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.902 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.903 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.907 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.955 2 INFO nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 6.25 seconds to spawn the instance on the hypervisor.
Oct 11 05:38:12 np0005481065 nova_compute[260935]: 2025-10-11 09:38:12.956 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:38:13 np0005481065 nova_compute[260935]: 2025-10-11 09:38:13.011 2 INFO nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 7.34 seconds to build instance.
Oct 11 05:38:13 np0005481065 nova_compute[260935]: 2025-10-11 09:38:13.026 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:38:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:38:13 np0005481065 podman[421110]: 2025-10-11 09:38:13.811134439 +0000 UTC m=+0.102252719 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Oct 11 05:38:13 np0005481065 podman[421111]: 2025-10-11 09:38:13.854950048 +0000 UTC m=+0.144935326 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.941 2 DEBUG nova.compute.manager [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.943 2 DEBUG oslo_concurrency.lockutils [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.944 2 DEBUG oslo_concurrency.lockutils [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.944 2 DEBUG oslo_concurrency.lockutils [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.945 2 DEBUG nova.compute.manager [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] No waiting events found dispatching network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:38:14 np0005481065 nova_compute[260935]: 2025-10-11 09:38:14.946 2 WARNING nova.compute.manager [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received unexpected event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 for instance with vm_state active and task_state None.
Oct 11 05:38:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:15.234 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:38:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:15.235 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:38:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:38:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:38:15 np0005481065 nova_compute[260935]: 2025-10-11 09:38:15.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:16Z|01616|binding|INFO|Releasing lport 42365f39-7cdb-44f9-ba26-d9bbfa355140 from this chassis (sb_readonly=0)
Oct 11 05:38:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:16Z|01617|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:38:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:16Z|01618|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:38:16 np0005481065 NetworkManager[44960]: <info>  [1760175496.8446] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Oct 11 05:38:16 np0005481065 NetworkManager[44960]: <info>  [1760175496.8457] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Oct 11 05:38:16 np0005481065 nova_compute[260935]: 2025-10-11 09:38:16.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:16Z|01619|binding|INFO|Releasing lport 42365f39-7cdb-44f9-ba26-d9bbfa355140 from this chassis (sb_readonly=0)
Oct 11 05:38:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:16Z|01620|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:38:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:16Z|01621|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:38:16 np0005481065 nova_compute[260935]: 2025-10-11 09:38:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:16 np0005481065 nova_compute[260935]: 2025-10-11 09:38:16.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:38:17 np0005481065 nova_compute[260935]: 2025-10-11 09:38:17.287 2 DEBUG nova.compute.manager [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:38:17 np0005481065 nova_compute[260935]: 2025-10-11 09:38:17.288 2 DEBUG nova.compute.manager [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing instance network info cache due to event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:38:17 np0005481065 nova_compute[260935]: 2025-10-11 09:38:17.288 2 DEBUG oslo_concurrency.lockutils [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:38:17 np0005481065 nova_compute[260935]: 2025-10-11 09:38:17.289 2 DEBUG oslo_concurrency.lockutils [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:38:17 np0005481065 nova_compute[260935]: 2025-10-11 09:38:17.289 2 DEBUG nova.network.neutron [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:38:18 np0005481065 nova_compute[260935]: 2025-10-11 09:38:18.673 2 DEBUG nova.network.neutron [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updated VIF entry in instance network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:38:18 np0005481065 nova_compute[260935]: 2025-10-11 09:38:18.674 2 DEBUG nova.network.neutron [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:38:18 np0005481065 nova_compute[260935]: 2025-10-11 09:38:18.703 2 DEBUG oslo_concurrency.lockutils [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:38:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:38:19 np0005481065 nova_compute[260935]: 2025-10-11 09:38:19.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:20 np0005481065 nova_compute[260935]: 2025-10-11 09:38:20.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 11 05:38:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 11 05:38:23 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 05:38:24 np0005481065 nova_compute[260935]: 2025-10-11 09:38:24.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:38:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:38:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:38:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:25Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:f3:13 10.100.0.5
Oct 11 05:38:25 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:25Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:f3:13 10.100.0.5
Oct 11 05:38:25 np0005481065 nova_compute[260935]: 2025-10-11 09:38:25.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:38:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863885401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:38:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:38:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863885401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:38:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:38:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f0500dc0-9c92-4582-966b-e21f96bee81c does not exist
Oct 11 05:38:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 845a3f9e-a596-467c-a57d-984347e8c60a does not exist
Oct 11 05:38:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fc9ae470-784c-4f56-8889-fd06a2f9fd3f does not exist
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:38:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.415759263 +0000 UTC m=+0.052671388 container create d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:38:28 np0005481065 systemd[1]: Started libpod-conmon-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope.
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.386717759 +0000 UTC m=+0.023629924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:38:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.525401239 +0000 UTC m=+0.162313344 container init d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.531832509 +0000 UTC m=+0.168744594 container start d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.535608375 +0000 UTC m=+0.172520460 container attach d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:38:28 np0005481065 vigorous_faraday[421452]: 167 167
Oct 11 05:38:28 np0005481065 systemd[1]: libpod-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope: Deactivated successfully.
Oct 11 05:38:28 np0005481065 conmon[421452]: conmon d232ecf4abb5b9323cf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope/container/memory.events
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.540228165 +0000 UTC m=+0.177140350 container died d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:38:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-99c45faafe4b6b63ee2e9cfa0bbc2a578e0994c3bd71be86cda5c59d2d40c702-merged.mount: Deactivated successfully.
Oct 11 05:38:28 np0005481065 podman[421436]: 2025-10-11 09:38:28.582231353 +0000 UTC m=+0.219143448 container remove d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:38:28 np0005481065 systemd[1]: libpod-conmon-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope: Deactivated successfully.
Oct 11 05:38:28 np0005481065 nova_compute[260935]: 2025-10-11 09:38:28.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:38:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:38:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:38:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:38:28 np0005481065 podman[421475]: 2025-10-11 09:38:28.788171639 +0000 UTC m=+0.046959938 container create 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:38:28 np0005481065 systemd[1]: Started libpod-conmon-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope.
Oct 11 05:38:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:28 np0005481065 podman[421475]: 2025-10-11 09:38:28.771871432 +0000 UTC m=+0.030659741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:38:28 np0005481065 podman[421475]: 2025-10-11 09:38:28.878394599 +0000 UTC m=+0.137182888 container init 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:38:28 np0005481065 podman[421475]: 2025-10-11 09:38:28.884317845 +0000 UTC m=+0.143106134 container start 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:38:28 np0005481065 podman[421475]: 2025-10-11 09:38:28.888023509 +0000 UTC m=+0.146811828 container attach 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:38:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 05:38:29 np0005481065 nova_compute[260935]: 2025-10-11 09:38:29.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:30 np0005481065 eloquent_thompson[421492]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:38:30 np0005481065 eloquent_thompson[421492]: --> relative data size: 1.0
Oct 11 05:38:30 np0005481065 eloquent_thompson[421492]: --> All data devices are unavailable
Oct 11 05:38:30 np0005481065 systemd[1]: libpod-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope: Deactivated successfully.
Oct 11 05:38:30 np0005481065 systemd[1]: libpod-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope: Consumed 1.094s CPU time.
Oct 11 05:38:30 np0005481065 podman[421475]: 2025-10-11 09:38:30.053431546 +0000 UTC m=+1.312219915 container died 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:38:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb-merged.mount: Deactivated successfully.
Oct 11 05:38:30 np0005481065 podman[421475]: 2025-10-11 09:38:30.134393557 +0000 UTC m=+1.393181856 container remove 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:38:30 np0005481065 systemd[1]: libpod-conmon-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope: Deactivated successfully.
Oct 11 05:38:30 np0005481065 nova_compute[260935]: 2025-10-11 09:38:30.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.04903511 +0000 UTC m=+0.066890447 container create 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:38:31 np0005481065 systemd[1]: Started libpod-conmon-767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a.scope.
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.021978011 +0000 UTC m=+0.039833388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:38:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.152374368 +0000 UTC m=+0.170229695 container init 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.167208964 +0000 UTC m=+0.185064301 container start 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.172278906 +0000 UTC m=+0.190134213 container attach 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:38:31 np0005481065 confident_goldwasser[421693]: 167 167
Oct 11 05:38:31 np0005481065 systemd[1]: libpod-767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a.scope: Deactivated successfully.
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.176786153 +0000 UTC m=+0.194641480 container died 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:38:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c820a7028d3f2d9d93b59a4d3901cb356c03161f549994d142a0338f4bebc2a4-merged.mount: Deactivated successfully.
Oct 11 05:38:31 np0005481065 podman[421676]: 2025-10-11 09:38:31.236989061 +0000 UTC m=+0.254844408 container remove 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 05:38:31 np0005481065 systemd[1]: libpod-conmon-767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a.scope: Deactivated successfully.
Oct 11 05:38:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:38:31 np0005481065 podman[421718]: 2025-10-11 09:38:31.533388915 +0000 UTC m=+0.055327933 container create e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:38:31 np0005481065 systemd[1]: Started libpod-conmon-e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5.scope.
Oct 11 05:38:31 np0005481065 podman[421718]: 2025-10-11 09:38:31.509361681 +0000 UTC m=+0.031300729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:38:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:31 np0005481065 podman[421718]: 2025-10-11 09:38:31.643790141 +0000 UTC m=+0.165729159 container init e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:38:31 np0005481065 podman[421718]: 2025-10-11 09:38:31.659882942 +0000 UTC m=+0.181821930 container start e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:38:31 np0005481065 podman[421718]: 2025-10-11 09:38:31.664482491 +0000 UTC m=+0.186421509 container attach e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 05:38:32 np0005481065 clever_beaver[421737]: {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:    "0": [
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:        {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "devices": [
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "/dev/loop3"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            ],
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_name": "ceph_lv0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_size": "21470642176",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "name": "ceph_lv0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "tags": {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cluster_name": "ceph",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.crush_device_class": "",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.encrypted": "0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osd_id": "0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.type": "block",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.vdo": "0"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            },
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "type": "block",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "vg_name": "ceph_vg0"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:        }
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:    ],
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:    "1": [
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:        {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "devices": [
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "/dev/loop4"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            ],
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_name": "ceph_lv1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_size": "21470642176",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "name": "ceph_lv1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "tags": {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cluster_name": "ceph",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.crush_device_class": "",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.encrypted": "0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osd_id": "1",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.type": "block",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.vdo": "0"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            },
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "type": "block",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "vg_name": "ceph_vg1"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:        }
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:    ],
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:    "2": [
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:        {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "devices": [
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "/dev/loop5"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            ],
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_name": "ceph_lv2",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_size": "21470642176",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "name": "ceph_lv2",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "tags": {
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.cluster_name": "ceph",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.crush_device_class": "",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.encrypted": "0",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osd_id": "2",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.type": "block",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:                "ceph.vdo": "0"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            },
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "type": "block",
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:            "vg_name": "ceph_vg2"
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:        }
Oct 11 05:38:32 np0005481065 clever_beaver[421737]:    ]
Oct 11 05:38:32 np0005481065 clever_beaver[421737]: }
Oct 11 05:38:32 np0005481065 systemd[1]: libpod-e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5.scope: Deactivated successfully.
Oct 11 05:38:32 np0005481065 podman[421718]: 2025-10-11 09:38:32.504972365 +0000 UTC m=+1.026911363 container died e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:38:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e-merged.mount: Deactivated successfully.
Oct 11 05:38:32 np0005481065 podman[421718]: 2025-10-11 09:38:32.585212786 +0000 UTC m=+1.107151774 container remove e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:38:32 np0005481065 systemd[1]: libpod-conmon-e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5.scope: Deactivated successfully.
Oct 11 05:38:32 np0005481065 podman[421747]: 2025-10-11 09:38:32.637898354 +0000 UTC m=+0.089890113 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:38:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.423650652 +0000 UTC m=+0.056492096 container create 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:38:33 np0005481065 systemd[1]: Started libpod-conmon-0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432.scope.
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.393724553 +0000 UTC m=+0.026566087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:38:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.517727511 +0000 UTC m=+0.150568985 container init 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.526495266 +0000 UTC m=+0.159336700 container start 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.530078057 +0000 UTC m=+0.162919531 container attach 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:38:33 np0005481065 stupefied_poitras[421931]: 167 167
Oct 11 05:38:33 np0005481065 systemd[1]: libpod-0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432.scope: Deactivated successfully.
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.53622884 +0000 UTC m=+0.169070274 container died 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:38:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-bda4bd7722d054a4b2028e6f117706736c6ad75d5fe14a0d79543938e230d584-merged.mount: Deactivated successfully.
Oct 11 05:38:33 np0005481065 podman[421915]: 2025-10-11 09:38:33.578380432 +0000 UTC m=+0.211221866 container remove 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:38:33 np0005481065 systemd[1]: libpod-conmon-0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432.scope: Deactivated successfully.
Oct 11 05:38:33 np0005481065 podman[421954]: 2025-10-11 09:38:33.789226256 +0000 UTC m=+0.049650774 container create 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:38:33 np0005481065 systemd[1]: Started libpod-conmon-1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2.scope.
Oct 11 05:38:33 np0005481065 podman[421954]: 2025-10-11 09:38:33.771528759 +0000 UTC m=+0.031953277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:38:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:38:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:38:33 np0005481065 podman[421954]: 2025-10-11 09:38:33.905067965 +0000 UTC m=+0.165492463 container init 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:38:33 np0005481065 podman[421954]: 2025-10-11 09:38:33.918943474 +0000 UTC m=+0.179367982 container start 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:38:33 np0005481065 podman[421954]: 2025-10-11 09:38:33.942761682 +0000 UTC m=+0.203186220 container attach 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:38:34 np0005481065 nova_compute[260935]: 2025-10-11 09:38:34.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:34 np0005481065 busy_bartik[421970]: {
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "osd_id": 2,
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "type": "bluestore"
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:    },
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "osd_id": 0,
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "type": "bluestore"
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:    },
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "osd_id": 1,
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:        "type": "bluestore"
Oct 11 05:38:34 np0005481065 busy_bartik[421970]:    }
Oct 11 05:38:34 np0005481065 busy_bartik[421970]: }
Oct 11 05:38:34 np0005481065 systemd[1]: libpod-1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2.scope: Deactivated successfully.
Oct 11 05:38:34 np0005481065 podman[421954]: 2025-10-11 09:38:34.905471403 +0000 UTC m=+1.165895941 container died 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:38:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421-merged.mount: Deactivated successfully.
Oct 11 05:38:34 np0005481065 podman[421954]: 2025-10-11 09:38:34.969450087 +0000 UTC m=+1.229874585 container remove 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:38:34 np0005481065 systemd[1]: libpod-conmon-1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2.scope: Deactivated successfully.
Oct 11 05:38:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:38:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:38:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:38:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:38:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9e5f9a70-6870-45f1-9abf-1ebc66165e38 does not exist
Oct 11 05:38:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 12591a8d-b54d-4d51-82fa-7f56d0050612 does not exist
Oct 11 05:38:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:35.216 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:38:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:35.218 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.997 2 DEBUG nova.compute.manager [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.998 2 DEBUG nova.compute.manager [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing instance network info cache due to event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.998 2 DEBUG oslo_concurrency.lockutils [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.998 2 DEBUG oslo_concurrency.lockutils [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:38:35 np0005481065 nova_compute[260935]: 2025-10-11 09:38:35.999 2 DEBUG nova.network.neutron [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.042592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516042628, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3362519, "memory_usage": 3422056, "flush_reason": "Manual Compaction"}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516065016, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3284383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58952, "largest_seqno": 61010, "table_properties": {"data_size": 3275069, "index_size": 5871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19008, "raw_average_key_size": 20, "raw_value_size": 3256530, "raw_average_value_size": 3453, "num_data_blocks": 261, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175298, "oldest_key_time": 1760175298, "file_creation_time": 1760175516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 22489 microseconds, and 12777 cpu microseconds.
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.065069) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3284383 bytes OK
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.065104) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.067444) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.067470) EVENT_LOG_v1 {"time_micros": 1760175516067462, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.067494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3353879, prev total WAL file size 3380367, number of live WAL files 2.
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.068938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3207KB)], [140(7797KB)]
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516068981, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11269469, "oldest_snapshot_seqno": -1}
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.125 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.125 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.126 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.126 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.126 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.128 2 INFO nova.compute.manager [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Terminating instance#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.129 2 DEBUG nova.compute.manager [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8002 keys, 9576876 bytes, temperature: kUnknown
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516134490, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9576876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9526234, "index_size": 29537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 208479, "raw_average_key_size": 26, "raw_value_size": 9386195, "raw_average_value_size": 1172, "num_data_blocks": 1147, "num_entries": 8002, "num_filter_entries": 8002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.134759) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9576876 bytes
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.135866) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 146.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8516, records dropped: 514 output_compression: NoCompression
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.135899) EVENT_LOG_v1 {"time_micros": 1760175516135876, "job": 86, "event": "compaction_finished", "compaction_time_micros": 65579, "compaction_time_cpu_micros": 41617, "output_level": 6, "num_output_files": 1, "total_output_size": 9576876, "num_input_records": 8516, "num_output_records": 8002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516136735, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516138553, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.068793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:36 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:36 np0005481065 kernel: tap927e4c78-95 (unregistering): left promiscuous mode
Oct 11 05:38:36 np0005481065 NetworkManager[44960]: <info>  [1760175516.1893] device (tap927e4c78-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:38:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:36Z|01622|binding|INFO|Releasing lport 927e4c78-95c0-407e-ab92-2c7457c30f72 from this chassis (sb_readonly=0)
Oct 11 05:38:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:36Z|01623|binding|INFO|Setting lport 927e4c78-95c0-407e-ab92-2c7457c30f72 down in Southbound
Oct 11 05:38:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:36Z|01624|binding|INFO|Removing iface tap927e4c78-95 ovn-installed in OVS
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.228 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:f3:13 10.100.0.5'], port_security=['fa:16:3e:ff:f3:13 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b488df-4f94-4347-9251-9c74a85cff63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '007bd862-2a46-4a84-aea1-941ef61ceb34 a1c26107-dbf4-42a0-ae54-f39f52094af5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f689b1-ae42-4f44-bdc0-840884eacd12, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=927e4c78-95c0-407e-ab92-2c7457c30f72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.231 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 927e4c78-95c0-407e-ab92-2c7457c30f72 in datapath d0b488df-4f94-4347-9251-9c74a85cff63 unbound from our chassis#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.234 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0b488df-4f94-4347-9251-9c74a85cff63, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24a816bc-023e-4b14-9ad0-e45f36628a6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.237 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 namespace which is not needed anymore#033[00m
Oct 11 05:38:36 np0005481065 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 11 05:38:36 np0005481065 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d0000008f.scope: Consumed 12.824s CPU time.
Oct 11 05:38:36 np0005481065 systemd-machined[215705]: Machine qemu-167-instance-0000008f terminated.
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.377 2 INFO nova.virt.libvirt.driver [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance destroyed successfully.#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.378 2 DEBUG nova.objects.instance [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.395 2 DEBUG nova.virt.libvirt.vif [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=143,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDeAiXe5BxSlKhTWUA0h8drhicmm4lHT+GCh5IJOhzOrvcV2CoWAUh+e7G/x4nbCWm9Jvt6uJONBp2OY5zPdoAKrYlSLLfE4Vnt6Ll8at8odUFHqxgC501PsHRAdCCNJdw==',key_name='tempest-TestSecurityGroupsBasicOps-904423859',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:38:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-9nlmw0id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:38:12Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.395 2 DEBUG nova.network.os_vif_util [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.396 2 DEBUG nova.network.os_vif_util [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.397 2 DEBUG os_vif [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap927e4c78-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.408 2 INFO os_vif [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95')#033[00m
Oct 11 05:38:36 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : haproxy version is 2.8.14-c23fe91
Oct 11 05:38:36 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : path to executable is /usr/sbin/haproxy
Oct 11 05:38:36 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [WARNING]  (421099) : Exiting Master process...
Oct 11 05:38:36 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [ALERT]    (421099) : Current worker (421101) exited with code 143 (Terminated)
Oct 11 05:38:36 np0005481065 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [WARNING]  (421099) : All workers exited. Exiting... (0)
Oct 11 05:38:36 np0005481065 systemd[1]: libpod-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6.scope: Deactivated successfully.
Oct 11 05:38:36 np0005481065 podman[422096]: 2025-10-11 09:38:36.477603707 +0000 UTC m=+0.082300769 container died e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:38:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6-userdata-shm.mount: Deactivated successfully.
Oct 11 05:38:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0f4cf93020fc98090af1b4c5a6be3fdf7b017ecf3820002b4c2da663e5a79eaa-merged.mount: Deactivated successfully.
Oct 11 05:38:36 np0005481065 podman[422096]: 2025-10-11 09:38:36.540482161 +0000 UTC m=+0.145179183 container cleanup e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:38:36 np0005481065 systemd[1]: libpod-conmon-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6.scope: Deactivated successfully.
Oct 11 05:38:36 np0005481065 podman[422150]: 2025-10-11 09:38:36.63599718 +0000 UTC m=+0.064188442 container remove e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22bd0cca-5e0a-4cb9-8464-4d6ce40f2ead]: (4, ('Sat Oct 11 09:38:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 (e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6)\ne8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6\nSat Oct 11 09:38:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 (e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6)\ne8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.648 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[996a1028-fa84-4f6b-8959-770233d4d5a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.649 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b488df-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:36 np0005481065 kernel: tapd0b488df-40: left promiscuous mode
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.674 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a38752ca-d60f-4d77-b04e-6945f26c4d92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.704 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0503b745-5123-4de3-ae2f-ef8c9e9eb04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.706 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd0b3e1-d0f8-4ceb-b57b-700a2e1e0097]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.722 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee14fea4-8d0b-457b-87a4-eacda58eaaf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733664, 'reachable_time': 19416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422166, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.727 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:38:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.727 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[274e655b-4b5a-4bca-a6d7-e0fbc1a27f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:38:36 np0005481065 systemd[1]: run-netns-ovnmeta\x2dd0b488df\x2d4f94\x2d4347\x2d9251\x2d9c74a85cff63.mount: Deactivated successfully.
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.850 2 INFO nova.virt.libvirt.driver [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deleting instance files /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_del#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.851 2 INFO nova.virt.libvirt.driver [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deletion of /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_del complete#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.939 2 INFO nova.compute.manager [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.939 2 DEBUG oslo.service.loopingcall [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.940 2 DEBUG nova.compute.manager [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:38:36 np0005481065 nova_compute[260935]: 2025-10-11 09:38:36.940 2 DEBUG nova.network.neutron [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:38:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.101 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-unplugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.101 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.102 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.102 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.102 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] No waiting events found dispatching network-vif-unplugged-927e4c78-95c0-407e-ab92-2c7457c30f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.103 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-unplugged-927e4c78-95c0-407e-ab92-2c7457c30f72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.103 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.103 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.104 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.104 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.105 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] No waiting events found dispatching network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.105 2 WARNING nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received unexpected event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.125 2 DEBUG nova.network.neutron [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updated VIF entry in instance network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.125 2 DEBUG nova.network.neutron [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.144 2 DEBUG oslo_concurrency.lockutils [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.659 2 DEBUG nova.network.neutron [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.673 2 INFO nova.compute.manager [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 1.73 seconds to deallocate network for instance.#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.713 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.714 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:38 np0005481065 nova_compute[260935]: 2025-10-11 09:38:38.835 2 DEBUG oslo_concurrency.processutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:38:39.220 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:38:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 05:38:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:38:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846503678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.331 2 DEBUG oslo_concurrency.processutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.343 2 DEBUG nova.compute.provider_tree [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.361 2 DEBUG nova.scheduler.client.report [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.386 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.410 2 INFO nova.scheduler.client.report [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b#033[00m
Oct 11 05:38:39 np0005481065 nova_compute[260935]: 2025-10-11 09:38:39.476 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:40 np0005481065 nova_compute[260935]: 2025-10-11 09:38:40.187 2 DEBUG nova.compute.manager [req-add12720-68d5-4734-a6a0-74775b88095c req-58dffc40-4633-4867-ab49-d8541a50c45c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-deleted-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:38:40 np0005481065 podman[422189]: 2025-10-11 09:38:40.812161941 +0000 UTC m=+0.095436708 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:38:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 11 05:38:41 np0005481065 nova_compute[260935]: 2025-10-11 09:38:41.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 11 05:38:44 np0005481065 nova_compute[260935]: 2025-10-11 09:38:44.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:44 np0005481065 podman[422209]: 2025-10-11 09:38:44.760511361 +0000 UTC m=+0.068194383 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 05:38:44 np0005481065 podman[422210]: 2025-10-11 09:38:44.800467622 +0000 UTC m=+0.101547729 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:38:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:38:46 np0005481065 nova_compute[260935]: 2025-10-11 09:38:46.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:46Z|01625|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:38:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:46Z|01626|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:38:46 np0005481065 nova_compute[260935]: 2025-10-11 09:38:46.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:46Z|01627|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:38:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:38:46Z|01628|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:38:46 np0005481065 nova_compute[260935]: 2025-10-11 09:38:46.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:38:48 np0005481065 nova_compute[260935]: 2025-10-11 09:38:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:48 np0005481065 nova_compute[260935]: 2025-10-11 09:38:48.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:38:48 np0005481065 nova_compute[260935]: 2025-10-11 09:38:48.945 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:38:48 np0005481065 nova_compute[260935]: 2025-10-11 09:38:48.946 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:38:48 np0005481065 nova_compute[260935]: 2025-10-11 09:38:48.946 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:38:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:38:49 np0005481065 nova_compute[260935]: 2025-10-11 09:38:49.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:50 np0005481065 nova_compute[260935]: 2025-10-11 09:38:50.335 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:38:50 np0005481065 nova_compute[260935]: 2025-10-11 09:38:50.349 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:38:50 np0005481065 nova_compute[260935]: 2025-10-11 09:38:50.350 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:38:50 np0005481065 nova_compute[260935]: 2025-10-11 09:38:50.351 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:50 np0005481065 nova_compute[260935]: 2025-10-11 09:38:50.351 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:38:50 np0005481065 nova_compute[260935]: 2025-10-11 09:38:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:51 np0005481065 nova_compute[260935]: 2025-10-11 09:38:51.374 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175516.370321, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:38:51 np0005481065 nova_compute[260935]: 2025-10-11 09:38:51.374 2 INFO nova.compute.manager [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:38:51 np0005481065 nova_compute[260935]: 2025-10-11 09:38:51.396 2 DEBUG nova.compute.manager [None req-e92e827e-33e5-495e-840e-a96c90755c09 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:38:51 np0005481065 nova_compute[260935]: 2025-10-11 09:38:51.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:51 np0005481065 nova_compute[260935]: 2025-10-11 09:38:51.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:52 np0005481065 nova_compute[260935]: 2025-10-11 09:38:52.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:52 np0005481065 nova_compute[260935]: 2025-10-11 09:38:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:52 np0005481065 nova_compute[260935]: 2025-10-11 09:38:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:53 np0005481065 nova_compute[260935]: 2025-10-11 09:38:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:38:53 np0005481065 nova_compute[260935]: 2025-10-11 09:38:53.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:53 np0005481065 nova_compute[260935]: 2025-10-11 09:38:53.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:53 np0005481065 nova_compute[260935]: 2025-10-11 09:38:53.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:53 np0005481065 nova_compute[260935]: 2025-10-11 09:38:53.733 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:38:53 np0005481065 nova_compute[260935]: 2025-10-11 09:38:53.734 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131630721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.221 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.321 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.323 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.323 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.334 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.334 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.635 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.638 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2802MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.638 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.639 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.670300) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534670337, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 402, "num_deletes": 256, "total_data_size": 292495, "memory_usage": 301640, "flush_reason": "Manual Compaction"}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534674891, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 290261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61011, "largest_seqno": 61412, "table_properties": {"data_size": 287824, "index_size": 536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5622, "raw_average_key_size": 17, "raw_value_size": 283092, "raw_average_value_size": 898, "num_data_blocks": 24, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175516, "oldest_key_time": 1760175516, "file_creation_time": 1760175534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 4634 microseconds, and 1956 cpu microseconds.
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.674934) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 290261 bytes OK
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.674952) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676323) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676341) EVENT_LOG_v1 {"time_micros": 1760175534676335, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 289929, prev total WAL file size 289929, number of live WAL files 2.
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.677078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353131' seq:72057594037927935, type:22 .. '6C6F676D0032373633' seq:0, type:0; will stop at (end)
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(283KB)], [143(9352KB)]
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534677111, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 9867137, "oldest_snapshot_seqno": -1}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7798 keys, 9762706 bytes, temperature: kUnknown
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534743476, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9762706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9712535, "index_size": 29578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 205165, "raw_average_key_size": 26, "raw_value_size": 9575233, "raw_average_value_size": 1227, "num_data_blocks": 1147, "num_entries": 7798, "num_filter_entries": 7798, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.743758) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9762706 bytes
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.745449) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.5 rd, 146.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.1 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(67.6) write-amplify(33.6) OK, records in: 8317, records dropped: 519 output_compression: NoCompression
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.745470) EVENT_LOG_v1 {"time_micros": 1760175534745460, "job": 88, "event": "compaction_finished", "compaction_time_micros": 66452, "compaction_time_cpu_micros": 21913, "output_level": 6, "num_output_files": 1, "total_output_size": 9762706, "num_input_records": 8317, "num_output_records": 7798, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534745651, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534747733, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.791 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:38:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:38:54 np0005481065 nova_compute[260935]: 2025-10-11 09:38:54.897 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:38:55
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'images', 'backups']
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:38:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305504440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:38:55 np0005481065 nova_compute[260935]: 2025-10-11 09:38:55.356 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:38:55 np0005481065 nova_compute[260935]: 2025-10-11 09:38:55.364 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:38:55 np0005481065 nova_compute[260935]: 2025-10-11 09:38:55.417 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:38:55 np0005481065 nova_compute[260935]: 2025-10-11 09:38:55.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:38:55 np0005481065 nova_compute[260935]: 2025-10-11 09:38:55.541 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:38:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:38:56 np0005481065 nova_compute[260935]: 2025-10-11 09:38:56.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:38:59 np0005481065 nova_compute[260935]: 2025-10-11 09:38:59.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:38:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.613 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.613 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.701 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.864 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.865 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.875 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:39:01 np0005481065 nova_compute[260935]: 2025-10-11 09:39:01.875 2 INFO nova.compute.claims [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.025 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:39:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3851506032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.557 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.567 2 DEBUG nova.compute.provider_tree [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.602 2 DEBUG nova.scheduler.client.report [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.675 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.675 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:39:02 np0005481065 podman[422326]: 2025-10-11 09:39:02.79908549 +0000 UTC m=+0.089368327 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.805 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.806 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.851 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:39:02 np0005481065 nova_compute[260935]: 2025-10-11 09:39:02.899 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.059 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.061 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.062 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Creating image(s)#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.088 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.115 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.141 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.145 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.252 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.253 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.254 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.255 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.279 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.283 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 82fac090-c427-485c-98cd-ad02e839be40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.624 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 82fac090-c427-485c-98cd-ad02e839be40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.682 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.709 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.710 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.728 2 DEBUG nova.policy [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.775 2 DEBUG nova.objects.instance [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 82fac090-c427-485c-98cd-ad02e839be40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.887 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.887 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Ensure instance console log exists: /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.888 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.888 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:39:03 np0005481065 nova_compute[260935]: 2025-10-11 09:39:03.889 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:39:04 np0005481065 nova_compute[260935]: 2025-10-11 09:39:04.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:39:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:05 np0005481065 nova_compute[260935]: 2025-10-11 09:39:05.227 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Successfully created port: e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:39:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.260 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Successfully updated port: e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.330 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.330 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.330 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.434 2 DEBUG nova.compute.manager [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.435 2 DEBUG nova.compute.manager [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing instance network info cache due to event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.436 2 DEBUG oslo_concurrency.lockutils [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:39:06 np0005481065 nova_compute[260935]: 2025-10-11 09:39:06.522 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:39:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.826 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.942 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.943 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance network_info: |[{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.943 2 DEBUG oslo_concurrency.lockutils [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.943 2 DEBUG nova.network.neutron [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.947 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start _get_guest_xml network_info=[{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.952 2 WARNING nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.957 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.958 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.964 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.965 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.965 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.966 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.967 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.967 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.967 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.968 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.968 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.968 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.969 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.969 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.969 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.970 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:39:07 np0005481065 nova_compute[260935]: 2025-10-11 09:39:07.973 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:39:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:39:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325145860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.440 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.473 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.477 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:39:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:39:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066451492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.974 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.977 2 DEBUG nova.virt.libvirt.vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=144,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-bjfbul0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:02Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=82fac090-c427-485c-98cd-ad02e839be40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.978 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.980 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 05:39:08 np0005481065 nova_compute[260935]: 2025-10-11 09:39:08.982 2 DEBUG nova.objects.instance [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82fac090-c427-485c-98cd-ad02e839be40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.000 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <uuid>82fac090-c427-485c-98cd-ad02e839be40</uuid>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <name>instance-00000090</name>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792</nova:name>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:39:07</nova:creationTime>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <nova:port uuid="e8668336-ee38-49b1-97dd-f6c5fcfab2e5">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <entry name="serial">82fac090-c427-485c-98cd-ad02e839be40</entry>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <entry name="uuid">82fac090-c427-485c-98cd-ad02e839be40</entry>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/82fac090-c427-485c-98cd-ad02e839be40_disk">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/82fac090-c427-485c-98cd-ad02e839be40_disk.config">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:ae:4e:40"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <target dev="tape8668336-ee"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/console.log" append="off"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:39:09 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:39:09 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:39:09 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:39:09 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.002 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Preparing to wait for external event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.004 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.004 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.005 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.006 2 DEBUG nova.virt.libvirt.vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=144,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-bjfbul0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:02Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=82fac090-c427-485c-98cd-ad02e839be40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.006 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.007 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.008 2 DEBUG os_vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8668336-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8668336-ee, col_values=(('external_ids', {'iface-id': 'e8668336-ee38-49b1-97dd-f6c5fcfab2e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:4e:40', 'vm-uuid': '82fac090-c427-485c-98cd-ad02e839be40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:09 np0005481065 NetworkManager[44960]: <info>  [1760175549.0233] manager: (tape8668336-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.033 2 INFO os_vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee')#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.094 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.095 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.095 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:ae:4e:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.096 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Using config drive#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.127 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.489 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Creating config drive at /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.497 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23gpdswd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.551 2 DEBUG nova.network.neutron [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updated VIF entry in instance network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.553 2 DEBUG nova.network.neutron [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.596 2 DEBUG oslo_concurrency.lockutils [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.664 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23gpdswd" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.690 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.694 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config 82fac090-c427-485c-98cd-ad02e839be40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.875 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config 82fac090-c427-485c-98cd-ad02e839be40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.876 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deleting local config drive /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config because it was imported into RBD.#033[00m
Oct 11 05:39:09 np0005481065 kernel: tape8668336-ee: entered promiscuous mode
Oct 11 05:39:09 np0005481065 NetworkManager[44960]: <info>  [1760175549.9464] manager: (tape8668336-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Oct 11 05:39:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:09Z|01629|binding|INFO|Claiming lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for this chassis.
Oct 11 05:39:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:09Z|01630|binding|INFO|e8668336-ee38-49b1-97dd-f6c5fcfab2e5: Claiming fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 05:39:09 np0005481065 nova_compute[260935]: 2025-10-11 09:39:09.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.968 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:39:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.971 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec bound to our chassis#033[00m
Oct 11 05:39:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.974 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1aa39742-414b-41b6-bac5-b401ed01a1ec#033[00m
Oct 11 05:39:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.993 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdea2a3-3846-4b35-978d-f3aca8857322]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.994 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1aa39742-41 in ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:39:09 np0005481065 systemd-udevd[422646]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:39:10 np0005481065 systemd-machined[215705]: New machine qemu-168-instance-00000090.
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.001 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1aa39742-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.002 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6c78b8-9cd1-4955-9799-746db083762e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.003 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[363007cb-8201-4ec1-b790-c8483c4ddc05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.014 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c94c02-ad8e-46d4-abda-681c7960c902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 NetworkManager[44960]: <info>  [1760175550.0272] device (tape8668336-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:39:10 np0005481065 NetworkManager[44960]: <info>  [1760175550.0284] device (tape8668336-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:39:10 np0005481065 systemd[1]: Started Virtual Machine qemu-168-instance-00000090.
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3416675a-0ec8-4653-a33b-259cba1efb37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:10Z|01631|binding|INFO|Setting lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 ovn-installed in OVS
Oct 11 05:39:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:10Z|01632|binding|INFO|Setting lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 up in Southbound
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.084 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1fe385-7016-48f0-a936-2b170fbeff07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 NetworkManager[44960]: <info>  [1760175550.0915] manager: (tap1aa39742-40): new Veth device (/org/freedesktop/NetworkManager/Devices/622)
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.091 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[737193aa-ffa6-421f-b7d3-b5c8eb9e2d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 systemd-udevd[422649]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.125 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b56599c-2c9e-426d-9616-3589d851be4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.129 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f54e43b4-4c33-4136-bd97-80786602dc0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 NetworkManager[44960]: <info>  [1760175550.1662] device (tap1aa39742-40): carrier: link connected
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.173 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc1f698-8c73-4d84-a61a-070f25e80740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e6e1c8-913c-4a1b-8805-30ff2f1839db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422678, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19c18478-c6cf-41a0-a7a6-dc16ee722800]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:7f8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739486, 'tstamp': 739486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422679, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.259 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1caf68-3209-4e44-98cd-650c90f531d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422680, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.305 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6705deb7-d405-4aa7-952f-a89a46a6619e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.405 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e30b1847-b30b-41e3-9c3b-6c1ccb0ef018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.409 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.410 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa39742-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:10 np0005481065 NetworkManager[44960]: <info>  [1760175550.4136] manager: (tap1aa39742-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Oct 11 05:39:10 np0005481065 kernel: tap1aa39742-40: entered promiscuous mode
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.418 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1aa39742-40, col_values=(('external_ids', {'iface-id': 'c50fba63-8396-4845-aa84-03644ac4618d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:10Z|01633|binding|INFO|Releasing lport c50fba63-8396-4845-aa84-03644ac4618d from this chassis (sb_readonly=0)
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.422 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1aa39742-414b-41b6-bac5-b401ed01a1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1aa39742-414b-41b6-bac5-b401ed01a1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.424 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ee7db2-bb3b-40ee-ab32-6eef6a3d008c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.425 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/1aa39742-414b-41b6-bac5-b401ed01a1ec.pid.haproxy
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:39:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.426 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'env', 'PROCESS_TAG=haproxy-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1aa39742-414b-41b6-bac5-b401ed01a1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.840 2 DEBUG nova.compute.manager [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG oslo_concurrency.lockutils [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG oslo_concurrency.lockutils [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG oslo_concurrency.lockutils [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:10 np0005481065 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG nova.compute.manager [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Processing event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:39:10 np0005481065 podman[422755]: 2025-10-11 09:39:10.827713973 +0000 UTC m=+0.031472684 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:39:10 np0005481065 podman[422755]: 2025-10-11 09:39:10.9370533 +0000 UTC m=+0.140812011 container create 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 05:39:11 np0005481065 systemd[1]: Started libpod-conmon-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798.scope.
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175551.0129657, 82fac090-c427-485c-98cd-ad02e839be40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.014 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Started (Lifecycle Event)#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.019 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.024 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:39:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:11 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06eeafc87d96a39c349d61f6972a13eca41eb9183cc378fdf1defab8f1ee2b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.036 2 INFO nova.virt.libvirt.driver [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance spawned successfully.#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.036 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.064 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:39:11 np0005481065 podman[422755]: 2025-10-11 09:39:11.077875539 +0000 UTC m=+0.281634230 container init 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:39:11 np0005481065 podman[422768]: 2025-10-11 09:39:11.082732015 +0000 UTC m=+0.100271963 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 05:39:11 np0005481065 podman[422755]: 2025-10-11 09:39:11.085473872 +0000 UTC m=+0.289232553 container start 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.108 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.109 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.109 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.110 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.111 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.111 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : New worker (422794) forked
Oct 11 05:39:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : Loading success.
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.118 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.119 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175551.0143678, 82fac090-c427-485c-98cd-ad02e839be40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.119 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.199 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.202 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175551.0223103, 82fac090-c427-485c-98cd-ad02e839be40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.202 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.252 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.255 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.258 2 INFO nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 8.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.259 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.497 2 INFO nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 9.66 seconds to build instance.#033[00m
Oct 11 05:39:11 np0005481065 nova_compute[260935]: 2025-10-11 09:39:11.664 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:12 np0005481065 nova_compute[260935]: 2025-10-11 09:39:12.969 2 DEBUG nova.compute.manager [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:12 np0005481065 nova_compute[260935]: 2025-10-11 09:39:12.969 2 DEBUG oslo_concurrency.lockutils [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:12 np0005481065 nova_compute[260935]: 2025-10-11 09:39:12.970 2 DEBUG oslo_concurrency.lockutils [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:12 np0005481065 nova_compute[260935]: 2025-10-11 09:39:12.970 2 DEBUG oslo_concurrency.lockutils [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:12 np0005481065 nova_compute[260935]: 2025-10-11 09:39:12.971 2 DEBUG nova.compute.manager [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] No waiting events found dispatching network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:39:12 np0005481065 nova_compute[260935]: 2025-10-11 09:39:12.971 2 WARNING nova.compute.manager [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received unexpected event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:39:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:39:14 np0005481065 nova_compute[260935]: 2025-10-11 09:39:14.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:14 np0005481065 nova_compute[260935]: 2025-10-11 09:39:14.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:15.234 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:39:15 np0005481065 podman[422803]: 2025-10-11 09:39:15.809630523 +0000 UTC m=+0.100013116 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 11 05:39:15 np0005481065 podman[422804]: 2025-10-11 09:39:15.845621753 +0000 UTC m=+0.129217306 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:39:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:16Z|01634|binding|INFO|Releasing lport c50fba63-8396-4845-aa84-03644ac4618d from this chassis (sb_readonly=0)
Oct 11 05:39:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:16Z|01635|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:39:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:16Z|01636|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:39:16 np0005481065 NetworkManager[44960]: <info>  [1760175556.2540] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Oct 11 05:39:16 np0005481065 nova_compute[260935]: 2025-10-11 09:39:16.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:16 np0005481065 NetworkManager[44960]: <info>  [1760175556.2566] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Oct 11 05:39:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:16Z|01637|binding|INFO|Releasing lport c50fba63-8396-4845-aa84-03644ac4618d from this chassis (sb_readonly=0)
Oct 11 05:39:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:16Z|01638|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:39:16 np0005481065 nova_compute[260935]: 2025-10-11 09:39:16.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:16 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:16Z|01639|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:39:16 np0005481065 nova_compute[260935]: 2025-10-11 09:39:16.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:39:17 np0005481065 nova_compute[260935]: 2025-10-11 09:39:17.336 2 DEBUG nova.compute.manager [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:17 np0005481065 nova_compute[260935]: 2025-10-11 09:39:17.337 2 DEBUG nova.compute.manager [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing instance network info cache due to event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:39:17 np0005481065 nova_compute[260935]: 2025-10-11 09:39:17.337 2 DEBUG oslo_concurrency.lockutils [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:39:17 np0005481065 nova_compute[260935]: 2025-10-11 09:39:17.338 2 DEBUG oslo_concurrency.lockutils [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:39:17 np0005481065 nova_compute[260935]: 2025-10-11 09:39:17.339 2 DEBUG nova.network.neutron [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:39:18 np0005481065 nova_compute[260935]: 2025-10-11 09:39:18.876 2 DEBUG nova.network.neutron [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updated VIF entry in instance network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:39:18 np0005481065 nova_compute[260935]: 2025-10-11 09:39:18.877 2 DEBUG nova.network.neutron [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:18 np0005481065 nova_compute[260935]: 2025-10-11 09:39:18.998 2 DEBUG oslo_concurrency.lockutils [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:19 np0005481065 nova_compute[260935]: 2025-10-11 09:39:19.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:39:19 np0005481065 nova_compute[260935]: 2025-10-11 09:39:19.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:39:22 np0005481065 nova_compute[260935]: 2025-10-11 09:39:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 817 KiB/s wr, 90 op/s
Oct 11 05:39:24 np0005481065 nova_compute[260935]: 2025-10-11 09:39:24.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:24 np0005481065 nova_compute[260935]: 2025-10-11 09:39:24.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:39:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:39:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 804 KiB/s wr, 16 op/s
Oct 11 05:39:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:26Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 05:39:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:26Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 05:39:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:39:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731782623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:39:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:39:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731782623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:39:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 805 KiB/s wr, 16 op/s
Oct 11 05:39:27 np0005481065 nova_compute[260935]: 2025-10-11 09:39:27.759 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:27 np0005481065 nova_compute[260935]: 2025-10-11 09:39:27.760 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:39:27 np0005481065 nova_compute[260935]: 2025-10-11 09:39:27.844 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:39:29 np0005481065 nova_compute[260935]: 2025-10-11 09:39:29.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 11 05:39:29 np0005481065 nova_compute[260935]: 2025-10-11 09:39:29.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:30 np0005481065 nova_compute[260935]: 2025-10-11 09:39:30.789 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 11 05:39:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 05:39:33 np0005481065 podman[422848]: 2025-10-11 09:39:33.791683416 +0000 UTC m=+0.087049792 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 05:39:34 np0005481065 nova_compute[260935]: 2025-10-11 09:39:34.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:34 np0005481065 nova_compute[260935]: 2025-10-11 09:39:34.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:39:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 140f9669-be13-4e25-a249-6a8ff0748549 does not exist
Oct 11 05:39:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2fd45056-7261-48a1-bd59-b24a1f8336bf does not exist
Oct 11 05:39:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 06c22ea0-2d30-40c2-9709-6326fa4ac7b8 does not exist
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:39:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:39:36 np0005481065 podman[423141]: 2025-10-11 09:39:36.949036252 +0000 UTC m=+0.045071245 container create 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:39:36 np0005481065 systemd[1]: Started libpod-conmon-0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216.scope.
Oct 11 05:39:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:37 np0005481065 podman[423141]: 2025-10-11 09:39:36.927452236 +0000 UTC m=+0.023487239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:39:37 np0005481065 podman[423141]: 2025-10-11 09:39:37.042913935 +0000 UTC m=+0.138948918 container init 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 05:39:37 np0005481065 podman[423141]: 2025-10-11 09:39:37.050892279 +0000 UTC m=+0.146927242 container start 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:39:37 np0005481065 podman[423141]: 2025-10-11 09:39:37.054733546 +0000 UTC m=+0.150768519 container attach 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 05:39:37 np0005481065 elegant_elbakyan[423157]: 167 167
Oct 11 05:39:37 np0005481065 systemd[1]: libpod-0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216.scope: Deactivated successfully.
Oct 11 05:39:37 np0005481065 podman[423141]: 2025-10-11 09:39:37.064288684 +0000 UTC m=+0.160323677 container died 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:39:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cea3c683a30b8cd2af0798dd18983af782bb69ca2886b9fea4740d3e17b104cd-merged.mount: Deactivated successfully.
Oct 11 05:39:37 np0005481065 podman[423141]: 2025-10-11 09:39:37.11689572 +0000 UTC m=+0.212930683 container remove 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:39:37 np0005481065 systemd[1]: libpod-conmon-0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216.scope: Deactivated successfully.
Oct 11 05:39:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 11 05:39:37 np0005481065 podman[423182]: 2025-10-11 09:39:37.397703956 +0000 UTC m=+0.070935601 container create 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:39:37 np0005481065 systemd[1]: Started libpod-conmon-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope.
Oct 11 05:39:37 np0005481065 podman[423182]: 2025-10-11 09:39:37.368879917 +0000 UTC m=+0.042111612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:39:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:37 np0005481065 podman[423182]: 2025-10-11 09:39:37.502457004 +0000 UTC m=+0.175688679 container init 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:39:37 np0005481065 podman[423182]: 2025-10-11 09:39:37.519967425 +0000 UTC m=+0.193199040 container start 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:39:37 np0005481065 podman[423182]: 2025-10-11 09:39:37.523381961 +0000 UTC m=+0.196613666 container attach 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.209 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.212 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.285 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.491 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.493 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.502 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.502 2 INFO nova.compute.claims [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:39:38 np0005481065 tender_wozniak[423198]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:39:38 np0005481065 tender_wozniak[423198]: --> relative data size: 1.0
Oct 11 05:39:38 np0005481065 tender_wozniak[423198]: --> All data devices are unavailable
Oct 11 05:39:38 np0005481065 systemd[1]: libpod-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope: Deactivated successfully.
Oct 11 05:39:38 np0005481065 systemd[1]: libpod-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope: Consumed 1.123s CPU time.
Oct 11 05:39:38 np0005481065 podman[423182]: 2025-10-11 09:39:38.730782956 +0000 UTC m=+1.404014611 container died 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:39:38 np0005481065 nova_compute[260935]: 2025-10-11 09:39:38.790 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay-38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40-merged.mount: Deactivated successfully.
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:39 np0005481065 podman[423182]: 2025-10-11 09:39:39.166056733 +0000 UTC m=+1.839288388 container remove 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:39:39 np0005481065 systemd[1]: libpod-conmon-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope: Deactivated successfully.
Oct 11 05:39:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:39:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307893045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.253 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.268 2 DEBUG nova.compute.provider_tree [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:39:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.357 2 DEBUG nova.scheduler.client.report [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.419 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.420 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.601 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.602 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:39:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.707 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.798 2 DEBUG nova.policy [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:39:39 np0005481065 nova_compute[260935]: 2025-10-11 09:39:39.840 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.074911024 +0000 UTC m=+0.059171280 container create 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:39:40 np0005481065 systemd[1]: Started libpod-conmon-1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad.scope.
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.047434964 +0000 UTC m=+0.031695280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:39:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.196434583 +0000 UTC m=+0.180694889 container init 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.20774537 +0000 UTC m=+0.192005626 container start 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.212961416 +0000 UTC m=+0.197221672 container attach 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:39:40 np0005481065 charming_northcutt[423417]: 167 167
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.215812136 +0000 UTC m=+0.200072402 container died 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:39:40 np0005481065 systemd[1]: libpod-1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad.scope: Deactivated successfully.
Oct 11 05:39:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-303543a215637fa685aaa0ad5123d5d7e6c7760d8cf4d6c682cd37f8da96cfd2-merged.mount: Deactivated successfully.
Oct 11 05:39:40 np0005481065 podman[423400]: 2025-10-11 09:39:40.268912886 +0000 UTC m=+0.253173152 container remove 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:39:40 np0005481065 systemd[1]: libpod-conmon-1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad.scope: Deactivated successfully.
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.312 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.314 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.315 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Creating image(s)#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.351 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.378 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.407 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.410 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.519 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.520 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.521 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.521 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.552 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:40 np0005481065 nova_compute[260935]: 2025-10-11 09:39:40.558 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cacbf863-eea2-4852-a3a4-7cc929ebacec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:40 np0005481065 podman[423494]: 2025-10-11 09:39:40.571125442 +0000 UTC m=+0.058178263 container create a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:39:40 np0005481065 systemd[1]: Started libpod-conmon-a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c.scope.
Oct 11 05:39:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:40 np0005481065 podman[423494]: 2025-10-11 09:39:40.552411297 +0000 UTC m=+0.039464118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:39:40 np0005481065 podman[423494]: 2025-10-11 09:39:40.653588765 +0000 UTC m=+0.140641656 container init a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:39:40 np0005481065 podman[423494]: 2025-10-11 09:39:40.664402498 +0000 UTC m=+0.151455309 container start a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:39:40 np0005481065 podman[423494]: 2025-10-11 09:39:40.668487343 +0000 UTC m=+0.155540184 container attach a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:39:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 71 KiB/s wr, 14 op/s
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]: {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:    "0": [
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:        {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "devices": [
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "/dev/loop3"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            ],
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_name": "ceph_lv0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_size": "21470642176",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "name": "ceph_lv0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "tags": {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cluster_name": "ceph",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.crush_device_class": "",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.encrypted": "0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osd_id": "0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.type": "block",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.vdo": "0"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            },
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "type": "block",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "vg_name": "ceph_vg0"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:        }
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:    ],
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:    "1": [
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:        {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "devices": [
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "/dev/loop4"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            ],
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_name": "ceph_lv1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_size": "21470642176",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "name": "ceph_lv1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "tags": {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cluster_name": "ceph",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.crush_device_class": "",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.encrypted": "0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osd_id": "1",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.type": "block",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.vdo": "0"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            },
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "type": "block",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "vg_name": "ceph_vg1"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:        }
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:    ],
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:    "2": [
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:        {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "devices": [
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "/dev/loop5"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            ],
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_name": "ceph_lv2",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_size": "21470642176",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "name": "ceph_lv2",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "tags": {
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.cluster_name": "ceph",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.crush_device_class": "",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.encrypted": "0",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osd_id": "2",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.type": "block",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:                "ceph.vdo": "0"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            },
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "type": "block",
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:            "vg_name": "ceph_vg2"
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:        }
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]:    ]
Oct 11 05:39:41 np0005481065 objective_wescoff[423530]: }
Oct 11 05:39:41 np0005481065 systemd[1]: libpod-a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c.scope: Deactivated successfully.
Oct 11 05:39:41 np0005481065 nova_compute[260935]: 2025-10-11 09:39:41.569 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cacbf863-eea2-4852-a3a4-7cc929ebacec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:41 np0005481065 podman[423558]: 2025-10-11 09:39:41.583390624 +0000 UTC m=+0.061448425 container died a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:39:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b-merged.mount: Deactivated successfully.
Oct 11 05:39:41 np0005481065 podman[423559]: 2025-10-11 09:39:41.626616746 +0000 UTC m=+0.085102348 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:39:41 np0005481065 podman[423558]: 2025-10-11 09:39:41.640423493 +0000 UTC m=+0.118481284 container remove a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:39:41 np0005481065 systemd[1]: libpod-conmon-a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c.scope: Deactivated successfully.
Oct 11 05:39:41 np0005481065 nova_compute[260935]: 2025-10-11 09:39:41.650 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:39:41 np0005481065 nova_compute[260935]: 2025-10-11 09:39:41.745 2 DEBUG nova.objects.instance [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid cacbf863-eea2-4852-a3a4-7cc929ebacec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:39:42 np0005481065 nova_compute[260935]: 2025-10-11 09:39:42.042 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:39:42 np0005481065 nova_compute[260935]: 2025-10-11 09:39:42.043 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Ensure instance console log exists: /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:39:42 np0005481065 nova_compute[260935]: 2025-10-11 09:39:42.043 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:42 np0005481065 nova_compute[260935]: 2025-10-11 09:39:42.044 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:42 np0005481065 nova_compute[260935]: 2025-10-11 09:39:42.044 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:42.373 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:39:42 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:42.375 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:39:42 np0005481065 nova_compute[260935]: 2025-10-11 09:39:42.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.493812538 +0000 UTC m=+0.074377857 container create a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:39:42 np0005481065 systemd[1]: Started libpod-conmon-a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b.scope.
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.456950624 +0000 UTC m=+0.037515983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:39:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.621197761 +0000 UTC m=+0.201763080 container init a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.634051741 +0000 UTC m=+0.214617070 container start a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.63792596 +0000 UTC m=+0.218491339 container attach a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:39:42 np0005481065 gallant_chaum[423818]: 167 167
Oct 11 05:39:42 np0005481065 systemd[1]: libpod-a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b.scope: Deactivated successfully.
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.641889471 +0000 UTC m=+0.222454760 container died a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:39:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-500409dc348cb3fb22d19be19b408231199d972da7a8172b6a8b61e0a95052d1-merged.mount: Deactivated successfully.
Oct 11 05:39:42 np0005481065 podman[423802]: 2025-10-11 09:39:42.692350356 +0000 UTC m=+0.272915665 container remove a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:39:42 np0005481065 systemd[1]: libpod-conmon-a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b.scope: Deactivated successfully.
Oct 11 05:39:42 np0005481065 podman[423841]: 2025-10-11 09:39:42.973306716 +0000 UTC m=+0.064642844 container create cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:39:43 np0005481065 systemd[1]: Started libpod-conmon-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope.
Oct 11 05:39:43 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:39:43 np0005481065 podman[423841]: 2025-10-11 09:39:42.95096904 +0000 UTC m=+0.042305198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:39:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:43 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:39:43 np0005481065 podman[423841]: 2025-10-11 09:39:43.057266831 +0000 UTC m=+0.148602969 container init cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:39:43 np0005481065 podman[423841]: 2025-10-11 09:39:43.063118495 +0000 UTC m=+0.154454623 container start cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:39:43 np0005481065 podman[423841]: 2025-10-11 09:39:43.066892191 +0000 UTC m=+0.158228319 container attach cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:39:43 np0005481065 nova_compute[260935]: 2025-10-11 09:39:43.069 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Successfully created port: 79c195e7-9605-49f3-b866-70093b232242 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:39:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.059 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Successfully updated port: 79c195e7-9605-49f3-b866-70093b232242 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.115 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.115 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.115 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:39:44 np0005481065 clever_wu[423858]: {
Oct 11 05:39:44 np0005481065 clever_wu[423858]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "osd_id": 2,
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "type": "bluestore"
Oct 11 05:39:44 np0005481065 clever_wu[423858]:    },
Oct 11 05:39:44 np0005481065 clever_wu[423858]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "osd_id": 0,
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "type": "bluestore"
Oct 11 05:39:44 np0005481065 clever_wu[423858]:    },
Oct 11 05:39:44 np0005481065 clever_wu[423858]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "osd_id": 1,
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:39:44 np0005481065 clever_wu[423858]:        "type": "bluestore"
Oct 11 05:39:44 np0005481065 clever_wu[423858]:    }
Oct 11 05:39:44 np0005481065 clever_wu[423858]: }
Oct 11 05:39:44 np0005481065 systemd[1]: libpod-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope: Deactivated successfully.
Oct 11 05:39:44 np0005481065 systemd[1]: libpod-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope: Consumed 1.078s CPU time.
Oct 11 05:39:44 np0005481065 podman[423841]: 2025-10-11 09:39:44.144251829 +0000 UTC m=+1.235587957 container died cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.229 2 DEBUG nova.compute.manager [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-changed-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.230 2 DEBUG nova.compute.manager [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing instance network info cache due to event network-changed-79c195e7-9605-49f3-b866-70093b232242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.230 2 DEBUG oslo_concurrency.lockutils [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.350 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:39:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:44.379 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:44 np0005481065 nova_compute[260935]: 2025-10-11 09:39:44.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7-merged.mount: Deactivated successfully.
Oct 11 05:39:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:44 np0005481065 podman[423841]: 2025-10-11 09:39:44.795482324 +0000 UTC m=+1.886818482 container remove cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:39:44 np0005481065 systemd[1]: libpod-conmon-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope: Deactivated successfully.
Oct 11 05:39:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:39:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:39:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:39:44 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:39:44 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 25b7aaa0-17e7-4a8c-bd94-21ee68c95857 does not exist
Oct 11 05:39:44 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 680876e7-911d-4608-bccf-493acacda419 does not exist
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.277 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.308 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.308 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance network_info: |[{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.309 2 DEBUG oslo_concurrency.lockutils [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.310 2 DEBUG nova.network.neutron [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing network info cache for port 79c195e7-9605-49f3-b866-70093b232242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.315 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start _get_guest_xml network_info=[{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:39:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.323 2 WARNING nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.331 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.332 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.336 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.336 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.337 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.338 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.338 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.339 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.339 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.340 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.340 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.341 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.341 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.342 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.342 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.343 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.347 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:39:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524207635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.823 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.848 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:45 np0005481065 nova_compute[260935]: 2025-10-11 09:39:45.852 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:39:45 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:39:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:39:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2912784440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.294 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.296 2 DEBUG nova.virt.libvirt.vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:39:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=145,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-dza5ihjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:39Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cacbf863-eea2-4852-a3a4-7cc929ebacec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.297 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.297 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.299 2 DEBUG nova.objects.instance [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cacbf863-eea2-4852-a3a4-7cc929ebacec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.322 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <uuid>cacbf863-eea2-4852-a3a4-7cc929ebacec</uuid>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <name>instance-00000091</name>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608</nova:name>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:39:45</nova:creationTime>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <nova:port uuid="79c195e7-9605-49f3-b866-70093b232242">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <entry name="serial">cacbf863-eea2-4852-a3a4-7cc929ebacec</entry>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <entry name="uuid">cacbf863-eea2-4852-a3a4-7cc929ebacec</entry>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cacbf863-eea2-4852-a3a4-7cc929ebacec_disk">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:61:d1:72"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <target dev="tap79c195e7-96"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/console.log" append="off"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:39:46 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:39:46 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:39:46 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:39:46 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.323 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Preparing to wait for external event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.324 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.324 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.324 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.325 2 DEBUG nova.virt.libvirt.vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:39:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=145,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-dza5ihjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:39Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cacbf863-eea2-4852-a3a4-7cc929ebacec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.325 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.326 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.326 2 DEBUG os_vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c195e7-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79c195e7-96, col_values=(('external_ids', {'iface-id': '79c195e7-9605-49f3-b866-70093b232242', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:d1:72', 'vm-uuid': 'cacbf863-eea2-4852-a3a4-7cc929ebacec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:46 np0005481065 NetworkManager[44960]: <info>  [1760175586.3372] manager: (tap79c195e7-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.346 2 INFO os_vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96')#033[00m
Oct 11 05:39:46 np0005481065 podman[424018]: 2025-10-11 09:39:46.478739705 +0000 UTC m=+0.090409827 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd)
Oct 11 05:39:46 np0005481065 podman[424019]: 2025-10-11 09:39:46.478751885 +0000 UTC m=+0.088338219 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.582 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.584 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.584 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:61:d1:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.586 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Using config drive#033[00m
Oct 11 05:39:46 np0005481065 nova_compute[260935]: 2025-10-11 09:39:46.627 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:39:47 np0005481065 nova_compute[260935]: 2025-10-11 09:39:47.780 2 DEBUG nova.network.neutron [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updated VIF entry in instance network info cache for port 79c195e7-9605-49f3-b866-70093b232242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:39:47 np0005481065 nova_compute[260935]: 2025-10-11 09:39:47.780 2 DEBUG nova.network.neutron [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:47 np0005481065 nova_compute[260935]: 2025-10-11 09:39:47.800 2 DEBUG oslo_concurrency.lockutils [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.109 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Creating config drive at /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.116 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw47xy07s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.263 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw47xy07s" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.297 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.301 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.480 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.481 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deleting local config drive /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config because it was imported into RBD.#033[00m
Oct 11 05:39:48 np0005481065 NetworkManager[44960]: <info>  [1760175588.5561] manager: (tap79c195e7-96): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Oct 11 05:39:48 np0005481065 kernel: tap79c195e7-96: entered promiscuous mode
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:48Z|01640|binding|INFO|Claiming lport 79c195e7-9605-49f3-b866-70093b232242 for this chassis.
Oct 11 05:39:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:48Z|01641|binding|INFO|79c195e7-9605-49f3-b866-70093b232242: Claiming fa:16:3e:61:d1:72 10.100.0.14
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.575 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:d1:72 10.100.0.14'], port_security=['fa:16:3e:61:d1:72 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cacbf863-eea2-4852-a3a4-7cc929ebacec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=79c195e7-9605-49f3-b866-70093b232242) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.579 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 79c195e7-9605-49f3-b866-70093b232242 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec bound to our chassis#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.583 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1aa39742-414b-41b6-bac5-b401ed01a1ec#033[00m
Oct 11 05:39:48 np0005481065 systemd-udevd[424135]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:39:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:48Z|01642|binding|INFO|Setting lport 79c195e7-9605-49f3-b866-70093b232242 ovn-installed in OVS
Oct 11 05:39:48 np0005481065 ovn_controller[152945]: 2025-10-11T09:39:48Z|01643|binding|INFO|Setting lport 79c195e7-9605-49f3-b866-70093b232242 up in Southbound
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4c61bf-51f4-42a7-9b09-c6b401bc1e3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:48 np0005481065 systemd-machined[215705]: New machine qemu-169-instance-00000091.
Oct 11 05:39:48 np0005481065 NetworkManager[44960]: <info>  [1760175588.6295] device (tap79c195e7-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:39:48 np0005481065 NetworkManager[44960]: <info>  [1760175588.6333] device (tap79c195e7-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:39:48 np0005481065 systemd[1]: Started Virtual Machine qemu-169-instance-00000091.
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.666 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f056b2d3-38a8-4bfd-ab00-20d077069b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.670 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa40f7f-f9ff-434f-93b2-e6ccdf9129a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.717 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18e2813e-6198-476c-9836-ed227395e219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ff806e41-aefb-4a08-9617-eb3ec5a3ab0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424148, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.772 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d80c92a8-a938-4215-a5bb-e56a8a9d2bb4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739504, 'tstamp': 739504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424150, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739509, 'tstamp': 739509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424150, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.775 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.780 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa39742-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.780 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.781 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1aa39742-40, col_values=(('external_ids', {'iface-id': 'c50fba63-8396-4845-aa84-03644ac4618d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:39:48 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.782 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.834 2 DEBUG nova.compute.manager [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.835 2 DEBUG oslo_concurrency.lockutils [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.835 2 DEBUG oslo_concurrency.lockutils [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.836 2 DEBUG oslo_concurrency.lockutils [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:48 np0005481065 nova_compute[260935]: 2025-10-11 09:39:48.837 2 DEBUG nova.compute.manager [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Processing event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:39:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.616 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175589.616395, cacbf863-eea2-4852-a3a4-7cc929ebacec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.617 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Started (Lifecycle Event)#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.620 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.623 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.627 2 INFO nova.virt.libvirt.driver [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance spawned successfully.#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.627 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.639 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.641 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.649 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.651 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.659 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.659 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175589.619448, cacbf863-eea2-4852-a3a4-7cc929ebacec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.660 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:39:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.684 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.687 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175589.6222608, cacbf863-eea2-4852-a3a4-7cc929ebacec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.687 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.708 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.711 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.717 2 INFO nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 9.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.717 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.728 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.772 2 INFO nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 11.32 seconds to build instance.#033[00m
Oct 11 05:39:49 np0005481065 nova_compute[260935]: 2025-10-11 09:39:49.798 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.869 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.869 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.869 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.908 2 DEBUG nova.compute.manager [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.908 2 DEBUG oslo_concurrency.lockutils [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.909 2 DEBUG oslo_concurrency.lockutils [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.909 2 DEBUG oslo_concurrency.lockutils [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.909 2 DEBUG nova.compute.manager [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] No waiting events found dispatching network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:39:50 np0005481065 nova_compute[260935]: 2025-10-11 09:39:50.910 2 WARNING nova.compute.manager [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received unexpected event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:39:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:39:51 np0005481065 nova_compute[260935]: 2025-10-11 09:39:51.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.908 2 DEBUG nova.compute.manager [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-changed-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.908 2 DEBUG nova.compute.manager [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing instance network info cache due to event network-changed-79c195e7-9605-49f3-b866-70093b232242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.909 2 DEBUG oslo_concurrency.lockutils [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.909 2 DEBUG oslo_concurrency.lockutils [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.909 2 DEBUG nova.network.neutron [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing network info cache for port 79c195e7-9605-49f3-b866-70093b232242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.926 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.942 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.943 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.943 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:52 np0005481065 nova_compute[260935]: 2025-10-11 09:39:52.943 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.748 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.749 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:39:53 np0005481065 nova_compute[260935]: 2025-10-11 09:39:53.750 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:39:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302718109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.252 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.376 2 DEBUG nova.network.neutron [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updated VIF entry in instance network info cache for port 79c195e7-9605-49f3-b866-70093b232242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.377 2 DEBUG nova.network.neutron [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.383 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.390 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.398 2 DEBUG oslo_concurrency.lockutils [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.400 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.400 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.407 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.407 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.414 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.415 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.753 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.754 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2445MB free_disk=59.76421356201172GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.844 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.844 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.844 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 82fac090-c427-485c-98cd-ad02e839be40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cacbf863-eea2-4852-a3a4-7cc929ebacec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:39:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.859 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.879 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.880 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.894 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.916 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.988 2 DEBUG nova.compute.manager [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-changed-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.988 2 DEBUG nova.compute.manager [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing instance network info cache due to event network-changed-79c195e7-9605-49f3-b866-70093b232242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.989 2 DEBUG oslo_concurrency.lockutils [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.989 2 DEBUG oslo_concurrency.lockutils [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:39:54 np0005481065 nova_compute[260935]: 2025-10-11 09:39:54.990 2 DEBUG nova.network.neutron [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing network info cache for port 79c195e7-9605-49f3-b866-70093b232242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:39:55
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'volumes', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data']
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:39:55 np0005481065 nova_compute[260935]: 2025-10-11 09:39:55.056 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 11 05:39:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:39:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95193417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:39:55 np0005481065 nova_compute[260935]: 2025-10-11 09:39:55.531 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:39:55 np0005481065 nova_compute[260935]: 2025-10-11 09:39:55.541 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:39:55 np0005481065 nova_compute[260935]: 2025-10-11 09:39:55.560 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:39:55 np0005481065 nova_compute[260935]: 2025-10-11 09:39:55.594 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:39:55 np0005481065 nova_compute[260935]: 2025-10-11 09:39:55.595 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:39:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:39:56 np0005481065 nova_compute[260935]: 2025-10-11 09:39:56.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:56 np0005481065 nova_compute[260935]: 2025-10-11 09:39:56.596 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:39:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 11 05:39:57 np0005481065 nova_compute[260935]: 2025-10-11 09:39:57.716 2 DEBUG nova.network.neutron [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updated VIF entry in instance network info cache for port 79c195e7-9605-49f3-b866-70093b232242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:39:57 np0005481065 nova_compute[260935]: 2025-10-11 09:39:57.717 2 DEBUG nova.network.neutron [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:39:57 np0005481065 nova_compute[260935]: 2025-10-11 09:39:57.736 2 DEBUG oslo_concurrency.lockutils [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:39:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 11 05:39:59 np0005481065 nova_compute[260935]: 2025-10-11 09:39:59.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:39:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:01Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:d1:72 10.100.0.14
Oct 11 05:40:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:01Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:d1:72 10.100.0.14
Oct 11 05:40:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 05:40:01 np0005481065 nova_compute[260935]: 2025-10-11 09:40:01.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 05:40:03 np0005481065 nova_compute[260935]: 2025-10-11 09:40:03.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:04 np0005481065 nova_compute[260935]: 2025-10-11 09:40:04.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:04 np0005481065 podman[424243]: 2025-10-11 09:40:04.826036568 +0000 UTC m=+0.102729452 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004143952065833171 of space, bias 1.0, pg target 1.2431856197499511 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:40:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:40:06 np0005481065 nova_compute[260935]: 2025-10-11 09:40:06.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.467 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.467 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.468 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.469 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.469 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.471 2 INFO nova.compute.manager [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Terminating instance#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.472 2 DEBUG nova.compute.manager [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:40:07 np0005481065 kernel: tap79c195e7-96 (unregistering): left promiscuous mode
Oct 11 05:40:07 np0005481065 NetworkManager[44960]: <info>  [1760175607.5278] device (tap79c195e7-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:40:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:07Z|01644|binding|INFO|Releasing lport 79c195e7-9605-49f3-b866-70093b232242 from this chassis (sb_readonly=0)
Oct 11 05:40:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:07Z|01645|binding|INFO|Setting lport 79c195e7-9605-49f3-b866-70093b232242 down in Southbound
Oct 11 05:40:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:07Z|01646|binding|INFO|Removing iface tap79c195e7-96 ovn-installed in OVS
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.557 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:d1:72 10.100.0.14', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cacbf863-eea2-4852-a3a4-7cc929ebacec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=79c195e7-9605-49f3-b866-70093b232242) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.558 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 79c195e7-9605-49f3-b866-70093b232242 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.560 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1aa39742-414b-41b6-bac5-b401ed01a1ec#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7a5ddb-6a5a-46de-b85b-9c6c9e1f12fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:40:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1366 writes, 6414 keys, 1366 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s#012Interval WAL: 1366 writes, 1366 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     70.8      1.05              0.30        44    0.024       0      0       0.0       0.0#012  L6      1/0    9.31 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.8    151.5    128.1      2.76              1.42        43    0.064    272K    23K       0.0       0.0#012 Sum      1/0    9.31 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.8    109.9    112.3      3.80              1.72        87    0.044    272K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.3     88.1     90.2      0.70              0.24        12    0.058     49K   3009       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    151.5    128.1      2.76              1.42        43    0.064    272K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.0      1.04              0.30        43    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.072, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 3.8 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 47.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000298 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3112,45.75 MB,15.0506%) FilterBlock(88,762.23 KB,0.244858%) IndexBlock(88,1.21 MB,0.399574%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 05:40:07 np0005481065 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 11 05:40:07 np0005481065 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000091.scope: Consumed 13.317s CPU time.
Oct 11 05:40:07 np0005481065 systemd-machined[215705]: Machine qemu-169-instance-00000091 terminated.
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.622 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6935d3d5-a4e3-4047-9a39-9fe3bfc6b212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.626 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[12f57cdd-4b9d-412e-a9a7-37fb39ace161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.655 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f0400178-fa9c-44eb-bc01-9d4b1a41f140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.686 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6637ba85-fa1a-47c1-839b-e253a67b117c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424274, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff647b4-7a86-43e9-ab63-d056b53a9ed1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739504, 'tstamp': 739504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424277, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739509, 'tstamp': 739509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424277, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.709 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.722 2 INFO nova.virt.libvirt.driver [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance destroyed successfully.#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.723 2 DEBUG nova.objects.instance [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid cacbf863-eea2-4852-a3a4-7cc929ebacec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.727 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa39742-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1aa39742-40, col_values=(('external_ids', {'iface-id': 'c50fba63-8396-4845-aa84-03644ac4618d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.744 2 DEBUG nova.virt.libvirt.vif [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:39:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=145,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:39:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-dza5ihjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:39:49Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cacbf863-eea2-4852-a3a4-7cc929ebacec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.744 2 DEBUG nova.network.os_vif_util [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.746 2 DEBUG nova.network.os_vif_util [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.747 2 DEBUG os_vif [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.749 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c195e7-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.757 2 INFO os_vif [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96')#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.855 2 DEBUG nova.compute.manager [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-unplugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.858 2 DEBUG oslo_concurrency.lockutils [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.859 2 DEBUG oslo_concurrency.lockutils [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.860 2 DEBUG oslo_concurrency.lockutils [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.861 2 DEBUG nova.compute.manager [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] No waiting events found dispatching network-vif-unplugged-79c195e7-9605-49f3-b866-70093b232242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:40:07 np0005481065 nova_compute[260935]: 2025-10-11 09:40:07.862 2 DEBUG nova.compute.manager [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-unplugged-79c195e7-9605-49f3-b866-70093b232242 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.132 2 INFO nova.virt.libvirt.driver [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deleting instance files /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec_del#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.134 2 INFO nova.virt.libvirt.driver [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deletion of /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec_del complete#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.209 2 INFO nova.compute.manager [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.210 2 DEBUG oslo.service.loopingcall [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.212 2 DEBUG nova.compute.manager [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.213 2 DEBUG nova.network.neutron [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.736 2 DEBUG nova.network.neutron [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.778 2 INFO nova.compute.manager [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 0.57 seconds to deallocate network for instance.#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.831 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.831 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.845 2 DEBUG nova.compute.manager [req-151c9b99-690d-4545-b169-102528c27861 req-14140b4b-d40c-44af-b243-81e66f5005c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-deleted-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:08 np0005481065 nova_compute[260935]: 2025-10-11 09:40:08.965 2 DEBUG oslo_concurrency.processutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 11 05:40:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:40:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117894820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.410 2 DEBUG oslo_concurrency.processutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.416 2 DEBUG nova.compute.provider_tree [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.430 2 DEBUG nova.scheduler.client.report [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.449 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.473 2 INFO nova.scheduler.client.report [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance cacbf863-eea2-4852-a3a4-7cc929ebacec#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.521 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG nova.compute.manager [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG oslo_concurrency.lockutils [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG oslo_concurrency.lockutils [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG oslo_concurrency.lockutils [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.945 2 DEBUG nova.compute.manager [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] No waiting events found dispatching network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:40:09 np0005481065 nova_compute[260935]: 2025-10-11 09:40:09.945 2 WARNING nova.compute.manager [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received unexpected event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.200 2 DEBUG nova.compute.manager [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.202 2 DEBUG nova.compute.manager [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing instance network info cache due to event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.202 2 DEBUG oslo_concurrency.lockutils [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.202 2 DEBUG oslo_concurrency.lockutils [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.203 2 DEBUG nova.network.neutron [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:40:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.339 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.339 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.340 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.340 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.341 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.342 2 INFO nova.compute.manager [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Terminating instance#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.344 2 DEBUG nova.compute.manager [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:40:11 np0005481065 kernel: tape8668336-ee (unregistering): left promiscuous mode
Oct 11 05:40:11 np0005481065 NetworkManager[44960]: <info>  [1760175611.5305] device (tape8668336-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:11Z|01647|binding|INFO|Releasing lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 from this chassis (sb_readonly=0)
Oct 11 05:40:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:11Z|01648|binding|INFO|Setting lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 down in Southbound
Oct 11 05:40:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:11Z|01649|binding|INFO|Removing iface tape8668336-ee ovn-installed in OVS
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.545 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.546 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.548 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1aa39742-414b-41b6-bac5-b401ed01a1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.549 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab2575e-271f-4d27-bcd5-6b3429e8e9e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.549 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec namespace which is not needed anymore#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 11 05:40:11 np0005481065 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000090.scope: Consumed 15.372s CPU time.
Oct 11 05:40:11 np0005481065 systemd-machined[215705]: Machine qemu-168-instance-00000090 terminated.
Oct 11 05:40:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : haproxy version is 2.8.14-c23fe91
Oct 11 05:40:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : path to executable is /usr/sbin/haproxy
Oct 11 05:40:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [WARNING]  (422792) : Exiting Master process...
Oct 11 05:40:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [WARNING]  (422792) : Exiting Master process...
Oct 11 05:40:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [ALERT]    (422792) : Current worker (422794) exited with code 143 (Terminated)
Oct 11 05:40:11 np0005481065 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [WARNING]  (422792) : All workers exited. Exiting... (0)
Oct 11 05:40:11 np0005481065 systemd[1]: libpod-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798.scope: Deactivated successfully.
Oct 11 05:40:11 np0005481065 podman[424352]: 2025-10-11 09:40:11.706433231 +0000 UTC m=+0.055014004 container died 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:40:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798-userdata-shm.mount: Deactivated successfully.
Oct 11 05:40:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e06eeafc87d96a39c349d61f6972a13eca41eb9183cc378fdf1defab8f1ee2b5-merged.mount: Deactivated successfully.
Oct 11 05:40:11 np0005481065 podman[424352]: 2025-10-11 09:40:11.745104055 +0000 UTC m=+0.093684818 container cleanup 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:40:11 np0005481065 systemd[1]: libpod-conmon-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798.scope: Deactivated successfully.
Oct 11 05:40:11 np0005481065 kernel: tape8668336-ee: entered promiscuous mode
Oct 11 05:40:11 np0005481065 systemd-udevd[424332]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:40:11 np0005481065 NetworkManager[44960]: <info>  [1760175611.7693] manager: (tape8668336-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Oct 11 05:40:11 np0005481065 kernel: tape8668336-ee (unregistering): left promiscuous mode
Oct 11 05:40:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:11Z|01650|binding|INFO|Claiming lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for this chassis.
Oct 11 05:40:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:11Z|01651|binding|INFO|e8668336-ee38-49b1-97dd-f6c5fcfab2e5: Claiming fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 05:40:11 np0005481065 podman[424364]: 2025-10-11 09:40:11.77309092 +0000 UTC m=+0.074963803 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.779 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:40:11 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:11Z|01652|binding|INFO|Releasing lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 from this chassis (sb_readonly=0)
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.802 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.803 2 INFO nova.virt.libvirt.driver [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance destroyed successfully.#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.804 2 DEBUG nova.objects.instance [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 82fac090-c427-485c-98cd-ad02e839be40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.821 2 DEBUG nova.virt.libvirt.vif [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=144,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:39:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-bjfbul0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:39:11Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=82fac090-c427-485c-98cd-ad02e839be40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.822 2 DEBUG nova.network.os_vif_util [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.823 2 DEBUG nova.network.os_vif_util [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.823 2 DEBUG os_vif [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:40:11 np0005481065 podman[424401]: 2025-10-11 09:40:11.823909766 +0000 UTC m=+0.054885781 container remove 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8668336-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.830 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5923bbbf-2429-46dc-8ee3-4e03435ba210]: (4, ('Sat Oct 11 09:40:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec (038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798)\n038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798\nSat Oct 11 09:40:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec (038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798)\n038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.830 2 INFO os_vif [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee')#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea719301-76c1-433e-8c75-5fde912ba966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.831 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:11 np0005481065 kernel: tap1aa39742-40: left promiscuous mode
Oct 11 05:40:11 np0005481065 nova_compute[260935]: 2025-10-11 09:40:11.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.855 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6056da-6927-44bb-8371-5cac7f76d817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.874 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[513c3dd1-daac-431a-be04-2ca6c0f2ee1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.875 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[120016fa-3b9e-4b7e-af39-93e55d4f86ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.889 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf3287d-0c96-49c1-a85c-d6fb26c8029f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739478, 'reachable_time': 24974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424438, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.892 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.892 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0afcecde-377a-40c0-9459-4e5ae8bc5596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.893 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis#033[00m
Oct 11 05:40:11 np0005481065 systemd[1]: run-netns-ovnmeta\x2d1aa39742\x2d414b\x2d41b6\x2dbac5\x2db401ed01a1ec.mount: Deactivated successfully.
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.894 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1aa39742-414b-41b6-bac5-b401ed01a1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.895 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[99b0368a-fc15-4d13-bc5b-54cb3b67d372]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.895 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.896 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1aa39742-414b-41b6-bac5-b401ed01a1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:40:11 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf90c03a-3dbb-463b-9c5a-121997bae0b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:12 np0005481065 nova_compute[260935]: 2025-10-11 09:40:12.869 2 DEBUG nova.network.neutron [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updated VIF entry in instance network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:40:12 np0005481065 nova_compute[260935]: 2025-10-11 09:40:12.871 2 DEBUG nova.network.neutron [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:40:12 np0005481065 nova_compute[260935]: 2025-10-11 09:40:12.895 2 DEBUG oslo_concurrency.lockutils [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.023 2 INFO nova.virt.libvirt.driver [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deleting instance files /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40_del#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.025 2 INFO nova.virt.libvirt.driver [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deletion of /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40_del complete#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.095 2 INFO nova.compute.manager [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 1.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.096 2 DEBUG oslo.service.loopingcall [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.096 2 DEBUG nova.compute.manager [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.097 2 DEBUG nova.network.neutron [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.281 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-unplugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.282 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.282 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.282 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.283 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] No waiting events found dispatching network-vif-unplugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.283 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-unplugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.283 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] No waiting events found dispatching network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.285 2 WARNING nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received unexpected event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:40:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.949 2 DEBUG nova.network.neutron [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:40:13 np0005481065 nova_compute[260935]: 2025-10-11 09:40:13.969 2 INFO nova.compute.manager [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.014 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.015 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.129 2 DEBUG oslo_concurrency.processutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:40:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288191443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.632 2 DEBUG oslo_concurrency.processutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.642 2 DEBUG nova.compute.provider_tree [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.659 2 DEBUG nova.scheduler.client.report [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.676 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.708 2 INFO nova.scheduler.client.report [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 82fac090-c427-485c-98cd-ad02e839be40
Oct 11 05:40:14 np0005481065 nova_compute[260935]: 2025-10-11 09:40:14.774 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:40:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:40:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 43 op/s
Oct 11 05:40:15 np0005481065 nova_compute[260935]: 2025-10-11 09:40:15.390 2 DEBUG nova.compute.manager [req-8c8ec429-264a-4d22-9286-ed572f20b435 req-488133fe-ad0a-40ae-bdc1-15e1d66146e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-deleted-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:40:16 np0005481065 podman[424462]: 2025-10-11 09:40:16.798149753 +0000 UTC m=+0.095253833 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 05:40:16 np0005481065 nova_compute[260935]: 2025-10-11 09:40:16.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:16 np0005481065 podman[424463]: 2025-10-11 09:40:16.829306106 +0000 UTC m=+0.116749315 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:40:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 43 op/s
Oct 11 05:40:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:18Z|01653|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:40:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:18Z|01654|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:40:18 np0005481065 nova_compute[260935]: 2025-10-11 09:40:18.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:18Z|01655|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:40:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:18Z|01656|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:40:18 np0005481065 nova_compute[260935]: 2025-10-11 09:40:18.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Oct 11 05:40:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:19 np0005481065 nova_compute[260935]: 2025-10-11 09:40:19.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Oct 11 05:40:21 np0005481065 nova_compute[260935]: 2025-10-11 09:40:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:22 np0005481065 nova_compute[260935]: 2025-10-11 09:40:22.718 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175607.7165146, cacbf863-eea2-4852-a3a4-7cc929ebacec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:40:22 np0005481065 nova_compute[260935]: 2025-10-11 09:40:22.719 2 INFO nova.compute.manager [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Stopped (Lifecycle Event)
Oct 11 05:40:22 np0005481065 nova_compute[260935]: 2025-10-11 09:40:22.742 2 DEBUG nova.compute.manager [None req-5b59a428-284f-47e1-b707-00db6326745b - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:40:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Oct 11 05:40:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:24 np0005481065 nova_compute[260935]: 2025-10-11 09:40:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:40:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:40:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:40:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:40:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:40:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:40:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 05:40:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:40:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860873842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:40:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:40:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860873842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:40:26 np0005481065 nova_compute[260935]: 2025-10-11 09:40:26.795 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175611.7924993, 82fac090-c427-485c-98cd-ad02e839be40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:40:26 np0005481065 nova_compute[260935]: 2025-10-11 09:40:26.796 2 INFO nova.compute.manager [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Stopped (Lifecycle Event)
Oct 11 05:40:26 np0005481065 nova_compute[260935]: 2025-10-11 09:40:26.817 2 DEBUG nova.compute.manager [None req-9b4817d1-cd55-498b-955b-42139685e066 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:40:26 np0005481065 nova_compute[260935]: 2025-10-11 09:40:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 05:40:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.553 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.554 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.572 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.661 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.661 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.671 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.672 2 INFO nova.compute.claims [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:40:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:29 np0005481065 nova_compute[260935]: 2025-10-11 09:40:29.843 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:40:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:40:30 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660461439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.335 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.344 2 DEBUG nova.compute.provider_tree [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.362 2 DEBUG nova.scheduler.client.report [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.390 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.392 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.441 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.442 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.467 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.487 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.569 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.571 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.572 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Creating image(s)
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.607 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.646 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.677 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.681 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.768 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.769 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.770 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.771 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.801 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.806 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cf42187a-059a-49d4-b9c8-96786042977f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:40:30 np0005481065 nova_compute[260935]: 2025-10-11 09:40:30.866 2 DEBUG nova.policy [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.207 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cf42187a-059a-49d4-b9c8-96786042977f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.299 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:40:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.431 2 DEBUG nova.objects.instance [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid cf42187a-059a-49d4-b9c8-96786042977f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.451 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.452 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Ensure instance console log exists: /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.453 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.453 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.454 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:31 np0005481065 nova_compute[260935]: 2025-10-11 09:40:31.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:32 np0005481065 nova_compute[260935]: 2025-10-11 09:40:32.249 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Successfully created port: 9f09f897-79b2-4242-ba35-4ab26732b411 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.254 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Successfully updated port: 9f09f897-79b2-4242-ba35-4ab26732b411 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.278 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.278 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.278 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.329 2 DEBUG nova.compute.manager [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.329 2 DEBUG nova.compute.manager [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing instance network info cache due to event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.329 2 DEBUG oslo_concurrency.lockutils [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:40:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:40:33 np0005481065 nova_compute[260935]: 2025-10-11 09:40:33.710 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.375 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.397 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.398 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance network_info: |[{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.399 2 DEBUG oslo_concurrency.lockutils [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.400 2 DEBUG nova.network.neutron [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.406 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start _get_guest_xml network_info=[{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.420 2 WARNING nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.432 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.433 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.441 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.442 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.443 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.444 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.446 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.447 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.448 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.448 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.449 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.449 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.450 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.451 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.451 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.452 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.458 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:34 np0005481065 nova_compute[260935]: 2025-10-11 09:40:34.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:40:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064095526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.018 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.056 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.062 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:40:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:40:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853872656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.557 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.561 2 DEBUG nova.virt.libvirt.vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=146,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8zvi2y9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:40:30Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cf42187a-059a-49d4-b9c8-96786042977f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.561 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.563 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.566 2 DEBUG nova.objects.instance [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf42187a-059a-49d4-b9c8-96786042977f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.584 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <uuid>cf42187a-059a-49d4-b9c8-96786042977f</uuid>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <name>instance-00000092</name>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733</nova:name>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:40:34</nova:creationTime>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <nova:port uuid="9f09f897-79b2-4242-ba35-4ab26732b411">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <entry name="serial">cf42187a-059a-49d4-b9c8-96786042977f</entry>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <entry name="uuid">cf42187a-059a-49d4-b9c8-96786042977f</entry>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cf42187a-059a-49d4-b9c8-96786042977f_disk">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/cf42187a-059a-49d4-b9c8-96786042977f_disk.config">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:f9:42:d3"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <target dev="tap9f09f897-79"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/console.log" append="off"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:40:35 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:40:35 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:40:35 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:40:35 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.587 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Preparing to wait for external event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.588 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.588 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.589 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.590 2 DEBUG nova.virt.libvirt.vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=146,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8zvi2y9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:40:30Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cf42187a-059a-49d4-b9c8-96786042977f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.591 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.592 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.593 2 DEBUG os_vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.603 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f09f897-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f09f897-79, col_values=(('external_ids', {'iface-id': '9f09f897-79b2-4242-ba35-4ab26732b411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:42:d3', 'vm-uuid': 'cf42187a-059a-49d4-b9c8-96786042977f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:35 np0005481065 NetworkManager[44960]: <info>  [1760175635.6143] manager: (tap9f09f897-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.621 2 INFO os_vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79')#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.681 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.682 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.682 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:f9:42:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.683 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Using config drive#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.721 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:40:35 np0005481065 podman[424762]: 2025-10-11 09:40:35.7309911 +0000 UTC m=+0.071904638 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.767 2 DEBUG nova.network.neutron [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updated VIF entry in instance network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.768 2 DEBUG nova.network.neutron [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:40:35 np0005481065 nova_compute[260935]: 2025-10-11 09:40:35.783 2 DEBUG oslo_concurrency.lockutils [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.119 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Creating config drive at /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.127 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi97ndg0p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.281 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi97ndg0p" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.316 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.320 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config cf42187a-059a-49d4-b9c8-96786042977f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.527 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config cf42187a-059a-49d4-b9c8-96786042977f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.529 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deleting local config drive /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config because it was imported into RBD.#033[00m
Oct 11 05:40:36 np0005481065 NetworkManager[44960]: <info>  [1760175636.6077] manager: (tap9f09f897-79): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Oct 11 05:40:36 np0005481065 kernel: tap9f09f897-79: entered promiscuous mode
Oct 11 05:40:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:36Z|01657|binding|INFO|Claiming lport 9f09f897-79b2-4242-ba35-4ab26732b411 for this chassis.
Oct 11 05:40:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:36Z|01658|binding|INFO|9f09f897-79b2-4242-ba35-4ab26732b411: Claiming fa:16:3e:f9:42:d3 10.100.0.14
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.638 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:42:d3 10.100.0.14'], port_security=['fa:16:3e:f9:42:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cf42187a-059a-49d4-b9c8-96786042977f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09004472-a3cd-4396-b377-505fc9433156 ece05fde-852d-4083-a996-a2c46d1ad9c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9f09f897-79b2-4242-ba35-4ab26732b411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.640 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9f09f897-79b2-4242-ba35-4ab26732b411 in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 bound to our chassis#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.643 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.660 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1cde3e-4270-4402-a3e9-f786b28b77b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.661 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbb66f3b-01 in ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:40:36 np0005481065 systemd-machined[215705]: New machine qemu-170-instance-00000092.
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.666 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbb66f3b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.666 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d149be25-ceb9-4c96-8913-0c30c4b97bb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.669 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eab10b-f836-46e9-864d-a9b97e25b525]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.688 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1188c158-7213-428c-9afa-fcd155c91411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 systemd[1]: Started Virtual Machine qemu-170-instance-00000092.
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03e66ba9-8b91-49c4-bdb9-16f9ef035d11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 systemd-udevd[424852]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:36Z|01659|binding|INFO|Setting lport 9f09f897-79b2-4242-ba35-4ab26732b411 ovn-installed in OVS
Oct 11 05:40:36 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:36Z|01660|binding|INFO|Setting lport 9f09f897-79b2-4242-ba35-4ab26732b411 up in Southbound
Oct 11 05:40:36 np0005481065 nova_compute[260935]: 2025-10-11 09:40:36.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:36 np0005481065 NetworkManager[44960]: <info>  [1760175636.7357] device (tap9f09f897-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:40:36 np0005481065 NetworkManager[44960]: <info>  [1760175636.7366] device (tap9f09f897-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.761 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4685214-8ad0-44ec-9865-1eeeb4498d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 NetworkManager[44960]: <info>  [1760175636.7705] manager: (tapfbb66f3b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/631)
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.769 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[98b5f0de-4e5d-4254-81a7-930bf15382b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.818 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[100d498d-63fd-48cd-81d3-c7ecbc87787d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.822 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4406d6da-f697-49cd-b3fc-c8282790853e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 NetworkManager[44960]: <info>  [1760175636.8527] device (tapfbb66f3b-00): carrier: link connected
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.857 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dc276785-6436-4d2b-9b8b-70549633c896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8cb72d-f355-40d7-ba91-1c421f094ce3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424882, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.898 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[884e3daa-9d1f-455b-b011-c2192833aa8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:6dd0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748155, 'tstamp': 748155}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424883, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23d53aca-4fed-44f1-b808-cd774d1fe7e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 424884, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:36 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0228b9dc-ce6f-41dc-a5d4-e9658d9ef416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[89172be3-f17b-4a02-9dc5-34eeb83ea24d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.043 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.044 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.045 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbb66f3b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:37 np0005481065 NetworkManager[44960]: <info>  [1760175637.0488] manager: (tapfbb66f3b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Oct 11 05:40:37 np0005481065 kernel: tapfbb66f3b-00: entered promiscuous mode
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.053 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbb66f3b-00, col_values=(('external_ids', {'iface-id': '3bca8f57-1f39-42d4-90ee-6695770f4d39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:37 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:37Z|01661|binding|INFO|Releasing lport 3bca8f57-1f39-42d4-90ee-6695770f4d39 from this chassis (sb_readonly=0)
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.075 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.076 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d252bd9-ee12-40f1-bc29-541036c8c9b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.077 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.pid.haproxy
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:40:37 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.079 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'env', 'PROCESS_TAG=haproxy-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:40:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:40:37 np0005481065 podman[424958]: 2025-10-11 09:40:37.535692677 +0000 UTC m=+0.072014081 container create 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 05:40:37 np0005481065 systemd[1]: Started libpod-conmon-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d.scope.
Oct 11 05:40:37 np0005481065 podman[424958]: 2025-10-11 09:40:37.502791594 +0000 UTC m=+0.039113098 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:40:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdf0e2af4d95bd9ffa8a3118a9662388e3229f9a077ce124d6a46ddcadbaf59/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:37 np0005481065 podman[424958]: 2025-10-11 09:40:37.651724631 +0000 UTC m=+0.188046045 container init 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:40:37 np0005481065 podman[424958]: 2025-10-11 09:40:37.656507765 +0000 UTC m=+0.192829169 container start 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 05:40:37 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : New worker (424979) forked
Oct 11 05:40:37 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : Loading success.
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.766 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175637.7664146, cf42187a-059a-49d4-b9c8-96786042977f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.767 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Started (Lifecycle Event)
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.788 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.794 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175637.7675128, cf42187a-059a-49d4-b9c8-96786042977f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.794 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Paused (Lifecycle Event)
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.813 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.817 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:40:37 np0005481065 nova_compute[260935]: 2025-10-11 09:40:37.838 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.231 2 DEBUG nova.compute.manager [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.232 2 DEBUG oslo_concurrency.lockutils [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.232 2 DEBUG oslo_concurrency.lockutils [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.233 2 DEBUG oslo_concurrency.lockutils [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.233 2 DEBUG nova.compute.manager [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Processing event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.235 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.240 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175638.239727, cf42187a-059a-49d4-b9c8-96786042977f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.241 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Resumed (Lifecycle Event)
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.246 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.251 2 INFO nova.virt.libvirt.driver [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance spawned successfully.
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.252 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.273 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.281 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.282 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.283 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.283 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.284 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.284 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.293 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.341 2 INFO nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 7.77 seconds to spawn the instance on the hypervisor.
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.341 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.408 2 INFO nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 8.78 seconds to build instance.
Oct 11 05:40:38 np0005481065 nova_compute[260935]: 2025-10-11 09:40:38.432 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:40:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:39 np0005481065 nova_compute[260935]: 2025-10-11 09:40:39.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.316 2 DEBUG nova.compute.manager [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.317 2 DEBUG oslo_concurrency.lockutils [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.317 2 DEBUG oslo_concurrency.lockutils [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.318 2 DEBUG oslo_concurrency.lockutils [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.318 2 DEBUG nova.compute.manager [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] No waiting events found dispatching network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.318 2 WARNING nova.compute.manager [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received unexpected event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 for instance with vm_state active and task_state None.
Oct 11 05:40:40 np0005481065 nova_compute[260935]: 2025-10-11 09:40:40.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:40:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:41Z|01662|binding|INFO|Releasing lport 3bca8f57-1f39-42d4-90ee-6695770f4d39 from this chassis (sb_readonly=0)
Oct 11 05:40:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:41Z|01663|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:40:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:41Z|01664|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:40:41 np0005481065 NetworkManager[44960]: <info>  [1760175641.8901] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Oct 11 05:40:41 np0005481065 NetworkManager[44960]: <info>  [1760175641.8922] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Oct 11 05:40:41 np0005481065 nova_compute[260935]: 2025-10-11 09:40:41.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:41Z|01665|binding|INFO|Releasing lport 3bca8f57-1f39-42d4-90ee-6695770f4d39 from this chassis (sb_readonly=0)
Oct 11 05:40:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:41Z|01666|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:40:41 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:41Z|01667|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:40:41 np0005481065 nova_compute[260935]: 2025-10-11 09:40:41.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:41 np0005481065 nova_compute[260935]: 2025-10-11 09:40:41.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:42 np0005481065 nova_compute[260935]: 2025-10-11 09:40:42.216 2 DEBUG nova.compute.manager [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:40:42 np0005481065 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG nova.compute.manager [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing instance network info cache due to event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:40:42 np0005481065 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG oslo_concurrency.lockutils [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:40:42 np0005481065 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG oslo_concurrency.lockutils [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:40:42 np0005481065 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG nova.network.neutron [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:40:42 np0005481065 podman[424989]: 2025-10-11 09:40:42.788852696 +0000 UTC m=+0.082330730 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 05:40:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:40:43 np0005481065 nova_compute[260935]: 2025-10-11 09:40:43.744 2 DEBUG nova.network.neutron [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updated VIF entry in instance network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 05:40:43 np0005481065 nova_compute[260935]: 2025-10-11 09:40:43.745 2 DEBUG nova.network.neutron [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:40:43 np0005481065 nova_compute[260935]: 2025-10-11 09:40:43.765 2 DEBUG oslo_concurrency.lockutils [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:40:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:44 np0005481065 nova_compute[260935]: 2025-10-11 09:40:44.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:40:45 np0005481065 nova_compute[260935]: 2025-10-11 09:40:45.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:40:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7e92c195-4a34-4e8f-ab89-4037c8511fae does not exist
Oct 11 05:40:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1b637b65-91e5-450a-8f31-f539c0d0dbc9 does not exist
Oct 11 05:40:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ca33e817-cc8c-4916-834d-bc44af9609a3 does not exist
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:40:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:40:46 np0005481065 podman[425281]: 2025-10-11 09:40:46.970946962 +0000 UTC m=+0.047831383 container create 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:40:47 np0005481065 podman[425281]: 2025-10-11 09:40:46.946864946 +0000 UTC m=+0.023749357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:40:47 np0005481065 systemd[1]: Started libpod-conmon-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope.
Oct 11 05:40:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:47 np0005481065 podman[425281]: 2025-10-11 09:40:47.115796685 +0000 UTC m=+0.192681106 container init 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:40:47 np0005481065 podman[425295]: 2025-10-11 09:40:47.120847906 +0000 UTC m=+0.094363027 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:40:47 np0005481065 podman[425281]: 2025-10-11 09:40:47.126023111 +0000 UTC m=+0.202907512 container start 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:40:47 np0005481065 podman[425281]: 2025-10-11 09:40:47.12990629 +0000 UTC m=+0.206790691 container attach 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:40:47 np0005481065 systemd[1]: libpod-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope: Deactivated successfully.
Oct 11 05:40:47 np0005481065 stupefied_curie[425318]: 167 167
Oct 11 05:40:47 np0005481065 conmon[425318]: conmon 9626556a2acbbe2824ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope/container/memory.events
Oct 11 05:40:47 np0005481065 podman[425299]: 2025-10-11 09:40:47.153223374 +0000 UTC m=+0.126160889 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:40:47 np0005481065 podman[425346]: 2025-10-11 09:40:47.17588399 +0000 UTC m=+0.025829856 container died 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:40:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-19f17a25cbae9d6f8a3e97fe01d72cde6bedf5b20efc568634a1ab9f6f533842-merged.mount: Deactivated successfully.
Oct 11 05:40:47 np0005481065 podman[425346]: 2025-10-11 09:40:47.2143991 +0000 UTC m=+0.064344956 container remove 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:40:47 np0005481065 systemd[1]: libpod-conmon-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope: Deactivated successfully.
Oct 11 05:40:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:40:47 np0005481065 podman[425370]: 2025-10-11 09:40:47.420904702 +0000 UTC m=+0.054001645 container create e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:40:47 np0005481065 systemd[1]: Started libpod-conmon-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope.
Oct 11 05:40:47 np0005481065 podman[425370]: 2025-10-11 09:40:47.396204449 +0000 UTC m=+0.029301482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:40:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:47 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:47 np0005481065 podman[425370]: 2025-10-11 09:40:47.533901501 +0000 UTC m=+0.166998484 container init e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:40:47 np0005481065 podman[425370]: 2025-10-11 09:40:47.545998481 +0000 UTC m=+0.179095424 container start e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:40:47 np0005481065 podman[425370]: 2025-10-11 09:40:47.552881924 +0000 UTC m=+0.185978907 container attach e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:40:48 np0005481065 optimistic_pare[425387]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:40:48 np0005481065 optimistic_pare[425387]: --> relative data size: 1.0
Oct 11 05:40:48 np0005481065 optimistic_pare[425387]: --> All data devices are unavailable
Oct 11 05:40:48 np0005481065 nova_compute[260935]: 2025-10-11 09:40:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:48 np0005481065 nova_compute[260935]: 2025-10-11 09:40:48.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:40:48 np0005481065 systemd[1]: libpod-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope: Deactivated successfully.
Oct 11 05:40:48 np0005481065 systemd[1]: libpod-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope: Consumed 1.121s CPU time.
Oct 11 05:40:48 np0005481065 podman[425417]: 2025-10-11 09:40:48.836525787 +0000 UTC m=+0.039397886 container died e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:40:48 np0005481065 systemd[1]: var-lib-containers-storage-overlay-910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec-merged.mount: Deactivated successfully.
Oct 11 05:40:48 np0005481065 podman[425417]: 2025-10-11 09:40:48.924761482 +0000 UTC m=+0.127633541 container remove e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:40:48 np0005481065 systemd[1]: libpod-conmon-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope: Deactivated successfully.
Oct 11 05:40:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:40:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:49Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:42:d3 10.100.0.14
Oct 11 05:40:49 np0005481065 nova_compute[260935]: 2025-10-11 09:40:49.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:49 np0005481065 ovn_controller[152945]: 2025-10-11T09:40:49Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:42:d3 10.100.0.14
Oct 11 05:40:49 np0005481065 podman[425574]: 2025-10-11 09:40:49.800603717 +0000 UTC m=+0.114865323 container create f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:40:49 np0005481065 podman[425574]: 2025-10-11 09:40:49.715075238 +0000 UTC m=+0.029336834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:40:49 np0005481065 systemd[1]: Started libpod-conmon-f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40.scope.
Oct 11 05:40:49 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:49 np0005481065 podman[425574]: 2025-10-11 09:40:49.927308461 +0000 UTC m=+0.241570067 container init f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 05:40:49 np0005481065 podman[425574]: 2025-10-11 09:40:49.939771431 +0000 UTC m=+0.254033027 container start f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:40:49 np0005481065 laughing_lichterman[425590]: 167 167
Oct 11 05:40:49 np0005481065 podman[425574]: 2025-10-11 09:40:49.946624683 +0000 UTC m=+0.260886269 container attach f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:40:49 np0005481065 systemd[1]: libpod-f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40.scope: Deactivated successfully.
Oct 11 05:40:49 np0005481065 podman[425574]: 2025-10-11 09:40:49.947892588 +0000 UTC m=+0.262154174 container died f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:40:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-005f8da06175772deb0485bce16cd28fb9a59916db499df094a53ab8453f696e-merged.mount: Deactivated successfully.
Oct 11 05:40:50 np0005481065 podman[425574]: 2025-10-11 09:40:50.246139864 +0000 UTC m=+0.560401430 container remove f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:40:50 np0005481065 systemd[1]: libpod-conmon-f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40.scope: Deactivated successfully.
Oct 11 05:40:50 np0005481065 podman[425615]: 2025-10-11 09:40:50.522065152 +0000 UTC m=+0.040955320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:40:50 np0005481065 podman[425615]: 2025-10-11 09:40:50.614725411 +0000 UTC m=+0.133615569 container create 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:40:50 np0005481065 nova_compute[260935]: 2025-10-11 09:40:50.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:50 np0005481065 systemd[1]: Started libpod-conmon-6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193.scope.
Oct 11 05:40:50 np0005481065 nova_compute[260935]: 2025-10-11 09:40:50.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:50 np0005481065 nova_compute[260935]: 2025-10-11 09:40:50.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:40:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:50 np0005481065 nova_compute[260935]: 2025-10-11 09:40:50.729 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 05:40:50 np0005481065 podman[425615]: 2025-10-11 09:40:50.747590267 +0000 UTC m=+0.266480495 container init 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:40:50 np0005481065 podman[425615]: 2025-10-11 09:40:50.762132645 +0000 UTC m=+0.281022803 container start 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 05:40:50 np0005481065 podman[425615]: 2025-10-11 09:40:50.849058783 +0000 UTC m=+0.367949021 container attach 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:40:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 05:40:51 np0005481065 sad_shtern[425632]: {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:    "0": [
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:        {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "devices": [
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "/dev/loop3"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            ],
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_name": "ceph_lv0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_size": "21470642176",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "name": "ceph_lv0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "tags": {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cluster_name": "ceph",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.crush_device_class": "",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.encrypted": "0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osd_id": "0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.type": "block",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.vdo": "0"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            },
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "type": "block",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "vg_name": "ceph_vg0"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:        }
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:    ],
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:    "1": [
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:        {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "devices": [
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "/dev/loop4"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            ],
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_name": "ceph_lv1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_size": "21470642176",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "name": "ceph_lv1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "tags": {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cluster_name": "ceph",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.crush_device_class": "",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.encrypted": "0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osd_id": "1",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.type": "block",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.vdo": "0"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            },
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "type": "block",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "vg_name": "ceph_vg1"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:        }
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:    ],
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:    "2": [
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:        {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "devices": [
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "/dev/loop5"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            ],
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_name": "ceph_lv2",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_size": "21470642176",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "name": "ceph_lv2",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "tags": {
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.cluster_name": "ceph",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.crush_device_class": "",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.encrypted": "0",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osd_id": "2",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.type": "block",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:                "ceph.vdo": "0"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            },
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "type": "block",
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:            "vg_name": "ceph_vg2"
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:        }
Oct 11 05:40:51 np0005481065 sad_shtern[425632]:    ]
Oct 11 05:40:51 np0005481065 sad_shtern[425632]: }
Oct 11 05:40:51 np0005481065 systemd[1]: libpod-6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193.scope: Deactivated successfully.
Oct 11 05:40:51 np0005481065 podman[425615]: 2025-10-11 09:40:51.563408629 +0000 UTC m=+1.082298807 container died 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:40:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19-merged.mount: Deactivated successfully.
Oct 11 05:40:51 np0005481065 podman[425615]: 2025-10-11 09:40:51.638185076 +0000 UTC m=+1.157075264 container remove 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:40:51 np0005481065 systemd[1]: libpod-conmon-6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193.scope: Deactivated successfully.
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.336663407 +0000 UTC m=+0.040394894 container create 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:40:52 np0005481065 systemd[1]: Started libpod-conmon-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope.
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.319506476 +0000 UTC m=+0.023237963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:40:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.436322702 +0000 UTC m=+0.140054209 container init 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.443985877 +0000 UTC m=+0.147717354 container start 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:40:52 np0005481065 nervous_brown[425810]: 167 167
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.446884878 +0000 UTC m=+0.150616355 container attach 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:40:52 np0005481065 systemd[1]: libpod-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope: Deactivated successfully.
Oct 11 05:40:52 np0005481065 conmon[425810]: conmon 0d209383da76ef84b39e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope/container/memory.events
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.451691153 +0000 UTC m=+0.155422650 container died 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:40:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c54f5620345f3bb90eb59a3077115055f0aedebf034865721218a8b1760993a1-merged.mount: Deactivated successfully.
Oct 11 05:40:52 np0005481065 podman[425794]: 2025-10-11 09:40:52.495765029 +0000 UTC m=+0.199496546 container remove 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:40:52 np0005481065 systemd[1]: libpod-conmon-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope: Deactivated successfully.
Oct 11 05:40:52 np0005481065 nova_compute[260935]: 2025-10-11 09:40:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:52 np0005481065 nova_compute[260935]: 2025-10-11 09:40:52.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:52 np0005481065 podman[425834]: 2025-10-11 09:40:52.771310208 +0000 UTC m=+0.065542790 container create e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:40:52 np0005481065 systemd[1]: Started libpod-conmon-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope.
Oct 11 05:40:52 np0005481065 podman[425834]: 2025-10-11 09:40:52.749610739 +0000 UTC m=+0.043843341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:40:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:40:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:52 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:40:52 np0005481065 podman[425834]: 2025-10-11 09:40:52.891865919 +0000 UTC m=+0.186098581 container init e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:40:52 np0005481065 podman[425834]: 2025-10-11 09:40:52.90151665 +0000 UTC m=+0.195749242 container start e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:40:52 np0005481065 podman[425834]: 2025-10-11 09:40:52.90544521 +0000 UTC m=+0.199677882 container attach e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:40:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 05:40:53 np0005481065 nova_compute[260935]: 2025-10-11 09:40:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:53 np0005481065 sad_golick[425851]: {
Oct 11 05:40:53 np0005481065 sad_golick[425851]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "osd_id": 2,
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "type": "bluestore"
Oct 11 05:40:53 np0005481065 sad_golick[425851]:    },
Oct 11 05:40:53 np0005481065 sad_golick[425851]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "osd_id": 0,
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "type": "bluestore"
Oct 11 05:40:53 np0005481065 sad_golick[425851]:    },
Oct 11 05:40:53 np0005481065 sad_golick[425851]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "osd_id": 1,
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:40:53 np0005481065 sad_golick[425851]:        "type": "bluestore"
Oct 11 05:40:53 np0005481065 sad_golick[425851]:    }
Oct 11 05:40:53 np0005481065 sad_golick[425851]: }
Oct 11 05:40:53 np0005481065 systemd[1]: libpod-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope: Deactivated successfully.
Oct 11 05:40:53 np0005481065 systemd[1]: libpod-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope: Consumed 1.064s CPU time.
Oct 11 05:40:53 np0005481065 podman[425834]: 2025-10-11 09:40:53.974055561 +0000 UTC m=+1.268288113 container died e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:40:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193-merged.mount: Deactivated successfully.
Oct 11 05:40:54 np0005481065 podman[425834]: 2025-10-11 09:40:54.051161163 +0000 UTC m=+1.345393755 container remove e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:40:54 np0005481065 systemd[1]: libpod-conmon-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope: Deactivated successfully.
Oct 11 05:40:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:40:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:40:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:40:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 59e3d7d7-038f-45a6-837d-a22d1be3c94c does not exist
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d342ac12-db4e-40b3-8554-516736395ca6 does not exist
Oct 11 05:40:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.758 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.758 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:54 np0005481065 nova_compute[260935]: 2025-10-11 09:40:54.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:40:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:40:55
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'vms']
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:40:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:40:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:40:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:40:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031578983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.234 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.472 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.473 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.474 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.481 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.481 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.488 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.489 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.495 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.496 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:40:55 np0005481065 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.797 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.798 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2601MB free_disk=59.785282135009766GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:40:55 np0005481065 nova_compute[260935]: 2025-10-11 09:40:55.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cf42187a-059a-49d4-b9c8-96786042977f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.011 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.011 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.286 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:40:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:40:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10474241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.801 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.807 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:40:56 np0005481065 nova_compute[260935]: 2025-10-11 09:40:56.927 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:40:57 np0005481065 nova_compute[260935]: 2025-10-11 09:40:57.056 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:40:57 np0005481065 nova_compute[260935]: 2025-10-11 09:40:57.056 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:40:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 05:40:58 np0005481065 nova_compute[260935]: 2025-10-11 09:40:58.057 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:58 np0005481065 nova_compute[260935]: 2025-10-11 09:40:58.057 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:40:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:40:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:40:59 np0005481065 nova_compute[260935]: 2025-10-11 09:40:59.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:00 np0005481065 nova_compute[260935]: 2025-10-11 09:41:00.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:00.052 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:41:00 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:00.054 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:41:00 np0005481065 nova_compute[260935]: 2025-10-11 09:41:00.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:41:03 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:03.057 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.119 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.120 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.138 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.225 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.226 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.234 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.235 2 INFO nova.compute.claims [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.412 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:41:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341111548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.860 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.868 2 DEBUG nova.compute.provider_tree [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.888 2 DEBUG nova.scheduler.client.report [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.916 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.917 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.985 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:41:04 np0005481065 nova_compute[260935]: 2025-10-11 09:41:04.986 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.011 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.032 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.162 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.164 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.165 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Creating image(s)#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.200 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.222 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.241 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.246 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.285 2 DEBUG nova.policy [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.336 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.337 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.338 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.338 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.373 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.379 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003386268782151462 of space, bias 1.0, pg target 1.0158806346454385 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:41:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.736 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.803 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.833 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Successfully created port: ea93226c-f47f-4213-9d19-8bbf98d9f70f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.908 2 DEBUG nova.objects.instance [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 7478f2ca-99cf-4c64-917f-bcfee233cf4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.921 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.921 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Ensure instance console log exists: /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.922 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.922 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:05 np0005481065 nova_compute[260935]: 2025-10-11 09:41:05.923 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.658 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Successfully updated port: ea93226c-f47f-4213-9d19-8bbf98d9f70f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.674 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.675 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.675 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.772 2 DEBUG nova.compute.manager [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.774 2 DEBUG nova.compute.manager [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing instance network info cache due to event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.774 2 DEBUG oslo_concurrency.lockutils [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:41:06 np0005481065 podman[426181]: 2025-10-11 09:41:06.817876398 +0000 UTC m=+0.101798837 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:41:06 np0005481065 nova_compute[260935]: 2025-10-11 09:41:06.845 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:41:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.354 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.384 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.385 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance network_info: |[{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.385 2 DEBUG oslo_concurrency.lockutils [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.386 2 DEBUG nova.network.neutron [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.391 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start _get_guest_xml network_info=[{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.398 2 WARNING nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.408 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.409 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.413 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.414 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.415 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.416 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.417 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.417 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.418 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.418 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.419 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.420 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.420 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.421 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.421 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.422 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.427 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:41:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286187677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.913 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.931 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:08 np0005481065 nova_compute[260935]: 2025-10-11 09:41:08.935 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:41:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:41:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478184439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.377 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.380 2 DEBUG nova.virt.libvirt.vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=147,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-n62joivm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:05Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=7478f2ca-99cf-4c64-917f-bcfee233cf4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.380 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.382 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.384 2 DEBUG nova.objects.instance [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7478f2ca-99cf-4c64-917f-bcfee233cf4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.401 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <uuid>7478f2ca-99cf-4c64-917f-bcfee233cf4e</uuid>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <name>instance-00000093</name>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272</nova:name>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:41:08</nova:creationTime>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <nova:port uuid="ea93226c-f47f-4213-9d19-8bbf98d9f70f">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <entry name="serial">7478f2ca-99cf-4c64-917f-bcfee233cf4e</entry>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <entry name="uuid">7478f2ca-99cf-4c64-917f-bcfee233cf4e</entry>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:29:42:8b"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <target dev="tapea93226c-f4"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/console.log" append="off"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:41:09 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:41:09 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:41:09 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:41:09 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.402 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Preparing to wait for external event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.403 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.403 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.403 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.404 2 DEBUG nova.virt.libvirt.vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=147,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-n62joivm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:05Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=7478f2ca-99cf-4c64-917f-bcfee233cf4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.404 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.405 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.405 2 DEBUG os_vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea93226c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea93226c-f4, col_values=(('external_ids', {'iface-id': 'ea93226c-f47f-4213-9d19-8bbf98d9f70f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:42:8b', 'vm-uuid': '7478f2ca-99cf-4c64-917f-bcfee233cf4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:09 np0005481065 NetworkManager[44960]: <info>  [1760175669.4140] manager: (tapea93226c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.421 2 INFO os_vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4')#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.462 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.463 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.463 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:29:42:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.464 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Using config drive#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.488 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.493 2 DEBUG nova.network.neutron [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updated VIF entry in instance network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.493 2 DEBUG nova.network.neutron [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.522 2 DEBUG oslo_concurrency.lockutils [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:41:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.736 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Creating config drive at /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.746 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuzkj8rfu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.915 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuzkj8rfu" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.954 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:09 np0005481065 nova_compute[260935]: 2025-10-11 09:41:09.960 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.181 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.182 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deleting local config drive /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config because it was imported into RBD.#033[00m
Oct 11 05:41:10 np0005481065 kernel: tapea93226c-f4: entered promiscuous mode
Oct 11 05:41:10 np0005481065 NetworkManager[44960]: <info>  [1760175670.2462] manager: (tapea93226c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:10Z|01668|binding|INFO|Claiming lport ea93226c-f47f-4213-9d19-8bbf98d9f70f for this chassis.
Oct 11 05:41:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:10Z|01669|binding|INFO|ea93226c-f47f-4213-9d19-8bbf98d9f70f: Claiming fa:16:3e:29:42:8b 10.100.0.5
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.260 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:42:8b 10.100.0.5'], port_security=['fa:16:3e:29:42:8b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7478f2ca-99cf-4c64-917f-bcfee233cf4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ece05fde-852d-4083-a996-a2c46d1ad9c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ea93226c-f47f-4213-9d19-8bbf98d9f70f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.262 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ea93226c-f47f-4213-9d19-8bbf98d9f70f in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 bound to our chassis#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.269 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596#033[00m
Oct 11 05:41:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:10Z|01670|binding|INFO|Setting lport ea93226c-f47f-4213-9d19-8bbf98d9f70f ovn-installed in OVS
Oct 11 05:41:10 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:10Z|01671|binding|INFO|Setting lport ea93226c-f47f-4213-9d19-8bbf98d9f70f up in Southbound
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:10 np0005481065 systemd-machined[215705]: New machine qemu-171-instance-00000093.
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.297 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0f68ae-afa4-44b4-a47d-c31f56354091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:10 np0005481065 systemd[1]: Started Virtual Machine qemu-171-instance-00000093.
Oct 11 05:41:10 np0005481065 systemd-udevd[426339]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.329 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[99d405a9-72cf-4d17-a3ca-bd99092bdff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.333 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cf481698-0839-4f8b-a76e-538bc2275fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:10 np0005481065 NetworkManager[44960]: <info>  [1760175670.3377] device (tapea93226c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:41:10 np0005481065 NetworkManager[44960]: <info>  [1760175670.3385] device (tapea93226c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.360 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[94067496-4ccf-4c86-a00a-3470106e6636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.381 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a87e7191-1bac-47f3-9e8a-75a1d256657e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426349, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.399 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a34a08aa-7c21-4107-87ba-29c7740b0c12]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748169, 'tstamp': 748169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426351, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748173, 'tstamp': 748173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426351, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbb66f3b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbb66f3b-00, col_values=(('external_ids', {'iface-id': '3bca8f57-1f39-42d4-90ee-6695770f4d39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:10 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.405 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.894 2 DEBUG nova.compute.manager [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.895 2 DEBUG oslo_concurrency.lockutils [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.895 2 DEBUG oslo_concurrency.lockutils [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.896 2 DEBUG oslo_concurrency.lockutils [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:10 np0005481065 nova_compute[260935]: 2025-10-11 09:41:10.897 2 DEBUG nova.compute.manager [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Processing event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:41:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.885 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175671.8853352, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.886 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Started (Lifecycle Event)#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.889 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.892 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.895 2 INFO nova.virt.libvirt.driver [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance spawned successfully.#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.896 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.932 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.933 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.933 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.934 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.934 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.935 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.971 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.971 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175671.886363, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:41:11 np0005481065 nova_compute[260935]: 2025-10-11 09:41:11.972 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.005 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.008 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175671.8910682, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.008 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.035 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.044 2 INFO nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 6.88 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.045 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.086 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.117 2 INFO nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 7.93 seconds to build instance.#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.141 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.967 2 DEBUG nova.compute.manager [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.968 2 DEBUG oslo_concurrency.lockutils [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.969 2 DEBUG oslo_concurrency.lockutils [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.969 2 DEBUG oslo_concurrency.lockutils [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.970 2 DEBUG nova.compute.manager [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] No waiting events found dispatching network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:41:12 np0005481065 nova_compute[260935]: 2025-10-11 09:41:12.970 2 WARNING nova.compute.manager [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received unexpected event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f for instance with vm_state active and task_state None.#033[00m
Oct 11 05:41:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 05:41:13 np0005481065 podman[426394]: 2025-10-11 09:41:13.776763046 +0000 UTC m=+0.079967564 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid)
Oct 11 05:41:14 np0005481065 nova_compute[260935]: 2025-10-11 09:41:14.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:14 np0005481065 nova_compute[260935]: 2025-10-11 09:41:14.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:15.238 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 05:41:16 np0005481065 nova_compute[260935]: 2025-10-11 09:41:16.812 2 DEBUG nova.compute.manager [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:16 np0005481065 nova_compute[260935]: 2025-10-11 09:41:16.813 2 DEBUG nova.compute.manager [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing instance network info cache due to event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:41:16 np0005481065 nova_compute[260935]: 2025-10-11 09:41:16.814 2 DEBUG oslo_concurrency.lockutils [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:41:16 np0005481065 nova_compute[260935]: 2025-10-11 09:41:16.814 2 DEBUG oslo_concurrency.lockutils [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:41:16 np0005481065 nova_compute[260935]: 2025-10-11 09:41:16.815 2 DEBUG nova.network.neutron [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:41:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 05:41:17 np0005481065 podman[426414]: 2025-10-11 09:41:17.814520246 +0000 UTC m=+0.104578014 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 05:41:17 np0005481065 podman[426415]: 2025-10-11 09:41:17.869224411 +0000 UTC m=+0.159878116 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:41:18 np0005481065 nova_compute[260935]: 2025-10-11 09:41:18.124 2 DEBUG nova.network.neutron [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updated VIF entry in instance network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:41:18 np0005481065 nova_compute[260935]: 2025-10-11 09:41:18.125 2 DEBUG nova.network.neutron [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:18 np0005481065 nova_compute[260935]: 2025-10-11 09:41:18.147 2 DEBUG oslo_concurrency.lockutils [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:41:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 05:41:19 np0005481065 nova_compute[260935]: 2025-10-11 09:41:19.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:19 np0005481065 nova_compute[260935]: 2025-10-11 09:41:19.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 05:41:23 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:23Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:42:8b 10.100.0.5
Oct 11 05:41:23 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:23Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:42:8b 10.100.0.5
Oct 11 05:41:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Oct 11 05:41:24 np0005481065 nova_compute[260935]: 2025-10-11 09:41:24.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:41:24 np0005481065 nova_compute[260935]: 2025-10-11 09:41:24.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:41:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:41:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Oct 11 05:41:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:41:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813472303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:41:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:41:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813472303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:41:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Oct 11 05:41:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.820 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.824 2 INFO nova.compute.manager [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Terminating instance#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.825 2 DEBUG nova.compute.manager [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:29 np0005481065 kernel: tapea93226c-f4 (unregistering): left promiscuous mode
Oct 11 05:41:29 np0005481065 NetworkManager[44960]: <info>  [1760175689.8999] device (tapea93226c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:41:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:29Z|01672|binding|INFO|Releasing lport ea93226c-f47f-4213-9d19-8bbf98d9f70f from this chassis (sb_readonly=0)
Oct 11 05:41:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:29Z|01673|binding|INFO|Setting lport ea93226c-f47f-4213-9d19-8bbf98d9f70f down in Southbound
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:29 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:29Z|01674|binding|INFO|Removing iface tapea93226c-f4 ovn-installed in OVS
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.942 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:42:8b 10.100.0.5'], port_security=['fa:16:3e:29:42:8b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7478f2ca-99cf-4c64-917f-bcfee233cf4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '96e9c5f2-802e-4f1b-99f1-1853c36965bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ea93226c-f47f-4213-9d19-8bbf98d9f70f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:41:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.944 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ea93226c-f47f-4213-9d19-8bbf98d9f70f in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 unbound from our chassis#033[00m
Oct 11 05:41:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.947 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596#033[00m
Oct 11 05:41:29 np0005481065 nova_compute[260935]: 2025-10-11 09:41:29.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:29 np0005481065 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct 11 05:41:29 np0005481065 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000093.scope: Consumed 13.983s CPU time.
Oct 11 05:41:29 np0005481065 systemd-machined[215705]: Machine qemu-171-instance-00000093 terminated.
Oct 11 05:41:29 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.978 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b76029d-3a2e-4e0a-8198-2224cdda29d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.025 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f7523f2f-8bfb-4594-a034-88e7b0dc90a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.030 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8716474f-dda1-456d-ad04-f49abb3689d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.075 2 INFO nova.virt.libvirt.driver [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance destroyed successfully.#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.076 2 DEBUG nova.objects.instance [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 7478f2ca-99cf-4c64-917f-bcfee233cf4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.080 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f667095-7169-47a2-ab15-320b01d64f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02efde1c-b31c-4603-8bd5-540edcd9f690]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426481, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.135 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f48780d-99a8-4b01-8266-6950dcc53546]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748169, 'tstamp': 748169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426482, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748173, 'tstamp': 748173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426482, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.137 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbb66f3b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.147 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbb66f3b-00, col_values=(('external_ids', {'iface-id': '3bca8f57-1f39-42d4-90ee-6695770f4d39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:30 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.187 2 DEBUG nova.virt.libvirt.vif [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=147,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:41:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-n62joivm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:41:12Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=7478f2ca-99cf-4c64-917f-bcfee233cf4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.188 2 DEBUG nova.network.os_vif_util [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.189 2 DEBUG nova.network.os_vif_util [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.190 2 DEBUG os_vif [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea93226c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.203 2 INFO os_vif [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4')#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.653 2 INFO nova.virt.libvirt.driver [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deleting instance files /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e_del#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.654 2 INFO nova.virt.libvirt.driver [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deletion of /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e_del complete#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.727 2 INFO nova.compute.manager [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.727 2 DEBUG oslo.service.loopingcall [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.728 2 DEBUG nova.compute.manager [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.728 2 DEBUG nova.network.neutron [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.929 2 DEBUG nova.compute.manager [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-unplugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.930 2 DEBUG oslo_concurrency.lockutils [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.930 2 DEBUG oslo_concurrency.lockutils [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.931 2 DEBUG oslo_concurrency.lockutils [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.931 2 DEBUG nova.compute.manager [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] No waiting events found dispatching network-vif-unplugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:41:30 np0005481065 nova_compute[260935]: 2025-10-11 09:41:30.932 2 DEBUG nova.compute.manager [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-unplugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:41:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:41:31 np0005481065 nova_compute[260935]: 2025-10-11 09:41:31.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.084 2 DEBUG nova.network.neutron [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.109 2 INFO nova.compute.manager [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 1.38 seconds to deallocate network for instance.#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.154 2 DEBUG nova.compute.manager [req-70621370-3dff-4291-a643-b2006a33e0bf req-99db5a06-a66b-4823-8270-75ea87120c91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-deleted-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.156 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.156 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.272 2 DEBUG oslo_concurrency.processutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:41:32 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854747652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.756 2 DEBUG oslo_concurrency.processutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.765 2 DEBUG nova.compute.provider_tree [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.783 2 DEBUG nova.scheduler.client.report [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.804 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.831 2 INFO nova.scheduler.client.report [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 7478f2ca-99cf-4c64-917f-bcfee233cf4e#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.888 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:32 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.999 2 DEBUG nova.compute.manager [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:33 np0005481065 nova_compute[260935]: 2025-10-11 09:41:32.999 2 DEBUG oslo_concurrency.lockutils [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:33 np0005481065 nova_compute[260935]: 2025-10-11 09:41:33.000 2 DEBUG oslo_concurrency.lockutils [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:33 np0005481065 nova_compute[260935]: 2025-10-11 09:41:33.000 2 DEBUG oslo_concurrency.lockutils [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:33 np0005481065 nova_compute[260935]: 2025-10-11 09:41:33.001 2 DEBUG nova.compute.manager [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] No waiting events found dispatching network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:41:33 np0005481065 nova_compute[260935]: 2025-10-11 09:41:33.001 2 WARNING nova.compute.manager [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received unexpected event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:41:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.625 2 DEBUG nova.compute.manager [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.626 2 DEBUG nova.compute.manager [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing instance network info cache due to event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.626 2 DEBUG oslo_concurrency.lockutils [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.626 2 DEBUG oslo_concurrency.lockutils [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.627 2 DEBUG nova.network.neutron [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:41:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.786 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.787 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.788 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.788 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.788 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.790 2 INFO nova.compute.manager [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Terminating instance#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.792 2 DEBUG nova.compute.manager [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:41:34 np0005481065 kernel: tap9f09f897-79 (unregistering): left promiscuous mode
Oct 11 05:41:34 np0005481065 NetworkManager[44960]: <info>  [1760175694.8526] device (tap9f09f897-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:41:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:34Z|01675|binding|INFO|Releasing lport 9f09f897-79b2-4242-ba35-4ab26732b411 from this chassis (sb_readonly=0)
Oct 11 05:41:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:34Z|01676|binding|INFO|Setting lport 9f09f897-79b2-4242-ba35-4ab26732b411 down in Southbound
Oct 11 05:41:34 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:34Z|01677|binding|INFO|Removing iface tap9f09f897-79 ovn-installed in OVS
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.876 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:42:d3 10.100.0.14'], port_security=['fa:16:3e:f9:42:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cf42187a-059a-49d4-b9c8-96786042977f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09004472-a3cd-4396-b377-505fc9433156 ece05fde-852d-4083-a996-a2c46d1ad9c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9f09f897-79b2-4242-ba35-4ab26732b411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:41:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.878 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9f09f897-79b2-4242-ba35-4ab26732b411 in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 unbound from our chassis#033[00m
Oct 11 05:41:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.882 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:41:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.883 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b19b0f4b-6c65-4133-b2f7-348061ad1a40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.884 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 namespace which is not needed anymore#033[00m
Oct 11 05:41:34 np0005481065 nova_compute[260935]: 2025-10-11 09:41:34.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:34 np0005481065 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 11 05:41:34 np0005481065 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000092.scope: Consumed 14.969s CPU time.
Oct 11 05:41:34 np0005481065 systemd-machined[215705]: Machine qemu-170-instance-00000092 terminated.
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.040 2 INFO nova.virt.libvirt.driver [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance destroyed successfully.#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.045 2 DEBUG nova.objects.instance [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid cf42187a-059a-49d4-b9c8-96786042977f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.062 2 DEBUG nova.virt.libvirt.vif [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=146,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:40:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8zvi2y9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:40:38Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cf42187a-059a-49d4-b9c8-96786042977f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.063 2 DEBUG nova.network.os_vif_util [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.064 2 DEBUG nova.network.os_vif_util [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.064 2 DEBUG os_vif [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f09f897-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:35 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : haproxy version is 2.8.14-c23fe91
Oct 11 05:41:35 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : path to executable is /usr/sbin/haproxy
Oct 11 05:41:35 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [WARNING]  (424977) : Exiting Master process...
Oct 11 05:41:35 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [ALERT]    (424977) : Current worker (424979) exited with code 143 (Terminated)
Oct 11 05:41:35 np0005481065 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [WARNING]  (424977) : All workers exited. Exiting... (0)
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.113 2 INFO os_vif [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79')#033[00m
Oct 11 05:41:35 np0005481065 systemd[1]: libpod-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d.scope: Deactivated successfully.
Oct 11 05:41:35 np0005481065 podman[426552]: 2025-10-11 09:41:35.128363936 +0000 UTC m=+0.070615311 container died 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.137 2 DEBUG nova.compute.manager [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-unplugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.137 2 DEBUG oslo_concurrency.lockutils [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.138 2 DEBUG oslo_concurrency.lockutils [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.138 2 DEBUG oslo_concurrency.lockutils [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.139 2 DEBUG nova.compute.manager [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] No waiting events found dispatching network-vif-unplugged-9f09f897-79b2-4242-ba35-4ab26732b411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.139 2 DEBUG nova.compute.manager [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-unplugged-9f09f897-79b2-4242-ba35-4ab26732b411 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:41:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d-userdata-shm.mount: Deactivated successfully.
Oct 11 05:41:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4cdf0e2af4d95bd9ffa8a3118a9662388e3229f9a077ce124d6a46ddcadbaf59-merged.mount: Deactivated successfully.
Oct 11 05:41:35 np0005481065 podman[426552]: 2025-10-11 09:41:35.175736925 +0000 UTC m=+0.117988200 container cleanup 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:41:35 np0005481065 systemd[1]: libpod-conmon-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d.scope: Deactivated successfully.
Oct 11 05:41:35 np0005481065 podman[426604]: 2025-10-11 09:41:35.242348753 +0000 UTC m=+0.045788255 container remove 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4beaa8b1-02ff-4890-9f47-bc08940ee605]: (4, ('Sat Oct 11 09:41:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 (2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d)\n2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d\nSat Oct 11 09:41:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 (2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d)\n2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.253 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f05479fb-ccf2-4638-8a13-9e13be0a7063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.254 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:35 np0005481065 kernel: tapfbb66f3b-00: left promiscuous mode
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.266 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e788872-423c-4875-9f6c-ebed03900c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.291 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c031cc92-5e48-482a-b95e-03b5356055a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.293 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1785ec6b-0867-4d8b-904d-1b0b7cd2b298]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22c440b3-4376-4f41-9e95-01aeaf019e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748145, 'reachable_time': 42330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426620, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.317 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:41:35 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.317 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1af595db-87be-4262-aed3-6c52c90df807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:41:35 np0005481065 systemd[1]: run-netns-ovnmeta\x2dfbb66f3b\x2d0d2f\x2d44df\x2da1ff\x2d0fbd9c571596.mount: Deactivated successfully.
Oct 11 05:41:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 796 KiB/s wr, 70 op/s
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.557 2 INFO nova.virt.libvirt.driver [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deleting instance files /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f_del#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.558 2 INFO nova.virt.libvirt.driver [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deletion of /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f_del complete#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.603 2 INFO nova.compute.manager [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.604 2 DEBUG oslo.service.loopingcall [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.604 2 DEBUG nova.compute.manager [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.604 2 DEBUG nova.network.neutron [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.796 2 DEBUG nova.network.neutron [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updated VIF entry in instance network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.797 2 DEBUG nova.network.neutron [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:35 np0005481065 nova_compute[260935]: 2025-10-11 09:41:35.847 2 DEBUG oslo_concurrency.lockutils [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.111 2 DEBUG nova.network.neutron [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.131 2 INFO nova.compute.manager [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 0.53 seconds to deallocate network for instance.#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.175 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.175 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.287 2 DEBUG oslo_concurrency.processutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.714 2 DEBUG nova.compute.manager [req-fb518a76-f90c-4ad5-9804-bcb7092300b3 req-963bdee2-04e9-4f60-ae7a-bfdedc58ae27 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-deleted-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:41:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224447743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.774 2 DEBUG oslo_concurrency.processutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.781 2 DEBUG nova.compute.provider_tree [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.797 2 DEBUG nova.scheduler.client.report [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.823 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.864 2 INFO nova.scheduler.client.report [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance cf42187a-059a-49d4-b9c8-96786042977f#033[00m
Oct 11 05:41:36 np0005481065 nova_compute[260935]: 2025-10-11 09:41:36.938 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:37 np0005481065 nova_compute[260935]: 2025-10-11 09:41:37.241 2 DEBUG nova.compute.manager [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:41:37 np0005481065 nova_compute[260935]: 2025-10-11 09:41:37.241 2 DEBUG oslo_concurrency.lockutils [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:37 np0005481065 nova_compute[260935]: 2025-10-11 09:41:37.242 2 DEBUG oslo_concurrency.lockutils [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:37 np0005481065 nova_compute[260935]: 2025-10-11 09:41:37.242 2 DEBUG oslo_concurrency.lockutils [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:37 np0005481065 nova_compute[260935]: 2025-10-11 09:41:37.242 2 DEBUG nova.compute.manager [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] No waiting events found dispatching network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:41:37 np0005481065 nova_compute[260935]: 2025-10-11 09:41:37.243 2 WARNING nova.compute.manager [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received unexpected event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 05:41:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 796 KiB/s wr, 70 op/s
Oct 11 05:41:37 np0005481065 podman[426644]: 2025-10-11 09:41:37.776703465 +0000 UTC m=+0.081865228 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 05:41:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 797 KiB/s wr, 97 op/s
Oct 11 05:41:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:39Z|01678|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:41:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:39Z|01679|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:41:39 np0005481065 nova_compute[260935]: 2025-10-11 09:41:39.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:39 np0005481065 nova_compute[260935]: 2025-10-11 09:41:39.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:39Z|01680|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:41:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:41:39Z|01681|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:41:39 np0005481065 nova_compute[260935]: 2025-10-11 09:41:39.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:40 np0005481065 nova_compute[260935]: 2025-10-11 09:41:40.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 05:41:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 05:41:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:44 np0005481065 podman[426666]: 2025-10-11 09:41:44.805672549 +0000 UTC m=+0.104544113 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 11 05:41:44 np0005481065 nova_compute[260935]: 2025-10-11 09:41:44.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:45 np0005481065 nova_compute[260935]: 2025-10-11 09:41:45.070 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175690.068873, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:41:45 np0005481065 nova_compute[260935]: 2025-10-11 09:41:45.071 2 INFO nova.compute.manager [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:41:45 np0005481065 nova_compute[260935]: 2025-10-11 09:41:45.111 2 DEBUG nova.compute.manager [None req-4a761097-9845-4f75-8a41-82a9d77f8230 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:41:45 np0005481065 nova_compute[260935]: 2025-10-11 09:41:45.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:41:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:41:48 np0005481065 nova_compute[260935]: 2025-10-11 09:41:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:48 np0005481065 nova_compute[260935]: 2025-10-11 09:41:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:41:48 np0005481065 podman[426686]: 2025-10-11 09:41:48.772994872 +0000 UTC m=+0.073615146 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 05:41:48 np0005481065 podman[426687]: 2025-10-11 09:41:48.80856434 +0000 UTC m=+0.102456975 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct 11 05:41:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 05:41:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:49 np0005481065 nova_compute[260935]: 2025-10-11 09:41:49.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:50 np0005481065 nova_compute[260935]: 2025-10-11 09:41:50.038 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175695.0369942, cf42187a-059a-49d4-b9c8-96786042977f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:41:50 np0005481065 nova_compute[260935]: 2025-10-11 09:41:50.039 2 INFO nova.compute.manager [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:41:50 np0005481065 nova_compute[260935]: 2025-10-11 09:41:50.057 2 DEBUG nova.compute.manager [None req-7c12ff4f-1894-49d1-8823-46df4e261789 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:41:50 np0005481065 nova_compute[260935]: 2025-10-11 09:41:50.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.875 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.875 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.876 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:41:51 np0005481065 nova_compute[260935]: 2025-10-11 09:41:51.876 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:53 np0005481065 nova_compute[260935]: 2025-10-11 09:41:53.157 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:41:53 np0005481065 nova_compute[260935]: 2025-10-11 09:41:53.176 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:41:53 np0005481065 nova_compute[260935]: 2025-10-11 09:41:53.177 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:41:53 np0005481065 nova_compute[260935]: 2025-10-11 09:41:53.178 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:41:53 np0005481065 nova_compute[260935]: 2025-10-11 09:41:53.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.044 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.045 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.075 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.196 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.197 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.207 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.207 2 INFO nova.compute.claims [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.391 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:41:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:41:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:41:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688408711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.898 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.909 2 DEBUG nova.compute.provider_tree [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.939 2 DEBUG nova.scheduler.client.report [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.966 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.967 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.975 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.975 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.976 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:41:54 np0005481065 nova_compute[260935]: 2025-10-11 09:41:54.976 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:41:55
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root']
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:41:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:41:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 44K writes, 177K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 44K writes, 16K syncs, 2.75 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2843 writes, 12K keys, 2843 commit groups, 1.0 writes per commit group, ingest: 13.29 MB, 0.02 MB/s#012Interval WAL: 2843 writes, 1055 syncs, 2.69 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.087 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.088 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.124 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.151 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.250 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.251 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.252 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Creating image(s)#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.288 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.327 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.365 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.370 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:41:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:41:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656746381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.456 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.463 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.464 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.465 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.466 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.505 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.516 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:41:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.663 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.664 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.671 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.671 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.678 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.679 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.772 2 DEBUG nova.policy [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45c6166242048a983479be416f90c52', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bd3d8446de604a3682ade55965824985', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.865 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:55 np0005481065 nova_compute[260935]: 2025-10-11 09:41:55.944 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] resizing rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.053 2 DEBUG nova.objects.instance [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'migration_context' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:41:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 36158715-1009-4fab-8772-b9abbaa429a2 does not exist
Oct 11 05:41:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 39659549-62df-4ccf-a1fa-ac4c1d3e3cd0 does not exist
Oct 11 05:41:56 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 022aa7c4-6c68-4680-a7b3-e80e7f9fcfca does not exist
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.213 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.213 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Ensure instance console log exists: /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.213 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.214 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.214 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.282 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.283 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2830MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.283 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.283 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.467 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:56 np0005481065 nova_compute[260935]: 2025-10-11 09:41:56.831 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Successfully created port: a65667d8-0cc9-498a-9415-8fcecf5881f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:41:56 np0005481065 podman[427354]: 2025-10-11 09:41:56.965139455 +0000 UTC m=+0.071919738 container create db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:41:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831463335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.007 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.016 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:41:57 np0005481065 podman[427354]: 2025-10-11 09:41:56.934937038 +0000 UTC m=+0.041717351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:41:57 np0005481065 systemd[1]: Started libpod-conmon-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope.
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.042 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:41:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:41:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.070 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:41:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:41:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:41:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:41:57 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.070 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:41:57 np0005481065 podman[427354]: 2025-10-11 09:41:57.078539616 +0000 UTC m=+0.185319919 container init db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:41:57 np0005481065 podman[427354]: 2025-10-11 09:41:57.088984449 +0000 UTC m=+0.195764752 container start db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:41:57 np0005481065 objective_brahmagupta[427372]: 167 167
Oct 11 05:41:57 np0005481065 systemd[1]: libpod-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope: Deactivated successfully.
Oct 11 05:41:57 np0005481065 conmon[427372]: conmon db027bb006cee476d52a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope/container/memory.events
Oct 11 05:41:57 np0005481065 podman[427354]: 2025-10-11 09:41:57.095158812 +0000 UTC m=+0.201939065 container attach db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:41:57 np0005481065 podman[427354]: 2025-10-11 09:41:57.097500647 +0000 UTC m=+0.204280900 container died db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:41:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f4b78a022b04bb2f2ce4f335ae388a8cbbb5e91d5908112b9d8c6e5b5850aac8-merged.mount: Deactivated successfully.
Oct 11 05:41:57 np0005481065 podman[427354]: 2025-10-11 09:41:57.140142073 +0000 UTC m=+0.246922326 container remove db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:41:57 np0005481065 systemd[1]: libpod-conmon-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope: Deactivated successfully.
Oct 11 05:41:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:41:57 np0005481065 podman[427398]: 2025-10-11 09:41:57.389101176 +0000 UTC m=+0.070710714 container create 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:41:57 np0005481065 systemd[1]: Started libpod-conmon-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope.
Oct 11 05:41:57 np0005481065 podman[427398]: 2025-10-11 09:41:57.358983161 +0000 UTC m=+0.040592769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:41:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:41:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:41:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:41:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:41:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:41:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.496 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Successfully updated port: a65667d8-0cc9-498a-9415-8fcecf5881f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:41:57 np0005481065 podman[427398]: 2025-10-11 09:41:57.514077461 +0000 UTC m=+0.195686969 container init 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.517 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.518 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.518 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:41:57 np0005481065 podman[427398]: 2025-10-11 09:41:57.521847529 +0000 UTC m=+0.203457067 container start 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:41:57 np0005481065 podman[427398]: 2025-10-11 09:41:57.52580308 +0000 UTC m=+0.207412618 container attach 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.600 2 DEBUG nova.compute.manager [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-changed-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.600 2 DEBUG nova.compute.manager [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Refreshing instance network info cache due to event network-changed-a65667d8-0cc9-498a-9415-8fcecf5881f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.601 2 DEBUG oslo_concurrency.lockutils [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:41:57 np0005481065 nova_compute[260935]: 2025-10-11 09:41:57.726 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.292 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.311 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.311 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance network_info: |[{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.312 2 DEBUG oslo_concurrency.lockutils [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.313 2 DEBUG nova.network.neutron [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Refreshing network info cache for port a65667d8-0cc9-498a-9415-8fcecf5881f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.317 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start _get_guest_xml network_info=[{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.324 2 WARNING nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.334 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.335 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.339 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.340 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.340 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.341 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.342 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.342 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.342 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.343 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.343 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.344 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.344 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.344 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.345 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.345 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.351 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:41:58 np0005481065 angry_shockley[427415]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:41:58 np0005481065 angry_shockley[427415]: --> relative data size: 1.0
Oct 11 05:41:58 np0005481065 angry_shockley[427415]: --> All data devices are unavailable
Oct 11 05:41:58 np0005481065 systemd[1]: libpod-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope: Deactivated successfully.
Oct 11 05:41:58 np0005481065 systemd[1]: libpod-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope: Consumed 1.105s CPU time.
Oct 11 05:41:58 np0005481065 podman[427398]: 2025-10-11 09:41:58.702986287 +0000 UTC m=+1.384595785 container died 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:41:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654-merged.mount: Deactivated successfully.
Oct 11 05:41:58 np0005481065 podman[427398]: 2025-10-11 09:41:58.780630094 +0000 UTC m=+1.462239622 container remove 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:41:58 np0005481065 systemd[1]: libpod-conmon-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope: Deactivated successfully.
Oct 11 05:41:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:41:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743662840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.842 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.877 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:58 np0005481065 nova_compute[260935]: 2025-10-11 09:41:58.885 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:41:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:41:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/796290833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.358 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.360 2 DEBUG nova.virt.libvirt.vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:55Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.360 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.361 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.363 2 DEBUG nova.objects.instance [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:41:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.387 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <uuid>4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1</uuid>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <name>instance-00000094</name>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestServerAdvancedOps-server-2061155047</nova:name>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:41:58</nova:creationTime>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:user uuid="a45c6166242048a983479be416f90c52">tempest-TestServerAdvancedOps-940341189-project-member</nova:user>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:project uuid="bd3d8446de604a3682ade55965824985">tempest-TestServerAdvancedOps-940341189</nova:project>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <nova:port uuid="a65667d8-0cc9-498a-9415-8fcecf5881f3">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <entry name="serial">4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1</entry>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <entry name="uuid">4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1</entry>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:38:9b:b3"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <target dev="tapa65667d8-0c"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/console.log" append="off"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:41:59 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:41:59 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:41:59 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:41:59 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.388 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Preparing to wait for external event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.389 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.389 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.389 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.390 2 DEBUG nova.virt.libvirt.vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:55Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.390 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.391 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.391 2 DEBUG os_vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa65667d8-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa65667d8-0c, col_values=(('external_ids', {'iface-id': 'a65667d8-0cc9-498a-9415-8fcecf5881f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9b:b3', 'vm-uuid': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:59 np0005481065 NetworkManager[44960]: <info>  [1760175719.4438] manager: (tapa65667d8-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.450 2 INFO os_vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.505 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.505 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.505 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] No VIF found with MAC fa:16:3e:38:9b:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.506 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Using config drive#033[00m
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.527 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.548526082 +0000 UTC m=+0.044066317 container create 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:41:59 np0005481065 systemd[1]: Started libpod-conmon-3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f.scope.
Oct 11 05:41:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.532949595 +0000 UTC m=+0.028489810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.630525472 +0000 UTC m=+0.126065707 container init 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.642305322 +0000 UTC m=+0.137845577 container start 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.647249811 +0000 UTC m=+0.142790026 container attach 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 05:41:59 np0005481065 wizardly_darwin[427693]: 167 167
Oct 11 05:41:59 np0005481065 systemd[1]: libpod-3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f.scope: Deactivated successfully.
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.650144912 +0000 UTC m=+0.145685127 container died 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:41:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-51405a9e87857bb08b086a81544336e8b453cc566823a7829a91c02385313093-merged.mount: Deactivated successfully.
Oct 11 05:41:59 np0005481065 podman[427659]: 2025-10-11 09:41:59.69785269 +0000 UTC m=+0.193392895 container remove 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:41:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:41:59 np0005481065 systemd[1]: libpod-conmon-3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f.scope: Deactivated successfully.
Oct 11 05:41:59 np0005481065 podman[427717]: 2025-10-11 09:41:59.920177466 +0000 UTC m=+0.056854166 container create a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:41:59 np0005481065 systemd[1]: Started libpod-conmon-a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0.scope.
Oct 11 05:41:59 np0005481065 nova_compute[260935]: 2025-10-11 09:41:59.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:41:59 np0005481065 podman[427717]: 2025-10-11 09:41:59.899698432 +0000 UTC m=+0.036375122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:41:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:42:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:00 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:00 np0005481065 podman[427717]: 2025-10-11 09:42:00.025208242 +0000 UTC m=+0.161884932 container init a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:42:00 np0005481065 podman[427717]: 2025-10-11 09:42:00.04331591 +0000 UTC m=+0.179992620 container start a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:42:00 np0005481065 podman[427717]: 2025-10-11 09:42:00.047045914 +0000 UTC m=+0.183722614 container attach a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.073 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.073 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:42:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 46K writes, 175K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.71 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3122 writes, 13K keys, 3122 commit groups, 1.0 writes per commit group, ingest: 16.94 MB, 0.03 MB/s#012Interval WAL: 3122 writes, 1172 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.730 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Creating config drive at /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config#033[00m
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.737 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdlik2tv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]: {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:    "0": [
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:        {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "devices": [
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "/dev/loop3"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            ],
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_name": "ceph_lv0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_size": "21470642176",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "name": "ceph_lv0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "tags": {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cluster_name": "ceph",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.crush_device_class": "",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.encrypted": "0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osd_id": "0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.type": "block",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.vdo": "0"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            },
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "type": "block",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "vg_name": "ceph_vg0"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:        }
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:    ],
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:    "1": [
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:        {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "devices": [
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "/dev/loop4"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            ],
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_name": "ceph_lv1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_size": "21470642176",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "name": "ceph_lv1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "tags": {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cluster_name": "ceph",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.crush_device_class": "",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.encrypted": "0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osd_id": "1",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.type": "block",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.vdo": "0"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            },
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "type": "block",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "vg_name": "ceph_vg1"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:        }
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:    ],
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:    "2": [
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:        {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "devices": [
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "/dev/loop5"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            ],
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_name": "ceph_lv2",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_size": "21470642176",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "name": "ceph_lv2",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "tags": {
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.cluster_name": "ceph",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.crush_device_class": "",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.encrypted": "0",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osd_id": "2",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.type": "block",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:                "ceph.vdo": "0"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            },
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "type": "block",
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:            "vg_name": "ceph_vg2"
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:        }
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]:    ]
Oct 11 05:42:00 np0005481065 nifty_kowalevski[427734]: }
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.794 2 DEBUG nova.network.neutron [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updated VIF entry in instance network info cache for port a65667d8-0cc9-498a-9415-8fcecf5881f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.795 2 DEBUG nova.network.neutron [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.812 2 DEBUG oslo_concurrency.lockutils [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:42:00 np0005481065 systemd[1]: libpod-a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0.scope: Deactivated successfully.
Oct 11 05:42:00 np0005481065 podman[427717]: 2025-10-11 09:42:00.822474223 +0000 UTC m=+0.959150923 container died a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:42:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c-merged.mount: Deactivated successfully.
Oct 11 05:42:00 np0005481065 podman[427717]: 2025-10-11 09:42:00.890944804 +0000 UTC m=+1.027621484 container remove a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.893 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdlik2tv" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:42:00 np0005481065 systemd[1]: libpod-conmon-a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0.scope: Deactivated successfully.
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.934 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:42:00 np0005481065 nova_compute[260935]: 2025-10-11 09:42:00.940 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.121 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.124 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deleting local config drive /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config because it was imported into RBD.#033[00m
Oct 11 05:42:01 np0005481065 kernel: tapa65667d8-0c: entered promiscuous mode
Oct 11 05:42:01 np0005481065 NetworkManager[44960]: <info>  [1760175721.1818] manager: (tapa65667d8-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Oct 11 05:42:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:01Z|01682|binding|INFO|Claiming lport a65667d8-0cc9-498a-9415-8fcecf5881f3 for this chassis.
Oct 11 05:42:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:01Z|01683|binding|INFO|a65667d8-0cc9-498a-9415-8fcecf5881f3: Claiming fa:16:3e:38:9b:b3 10.100.0.10
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.191 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.193 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 bound to our chassis#033[00m
Oct 11 05:42:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.194 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:42:01 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.195 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef577416-86d1-42e5-a1b6-66921a9df3a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:42:01 np0005481065 systemd-udevd[427886]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:42:01 np0005481065 systemd-machined[215705]: New machine qemu-172-instance-00000094.
Oct 11 05:42:01 np0005481065 NetworkManager[44960]: <info>  [1760175721.2386] device (tapa65667d8-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:42:01 np0005481065 NetworkManager[44960]: <info>  [1760175721.2397] device (tapa65667d8-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:42:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:01Z|01684|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 ovn-installed in OVS
Oct 11 05:42:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:01Z|01685|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 up in Southbound
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:01 np0005481065 systemd[1]: Started Virtual Machine qemu-172-instance-00000094.
Oct 11 05:42:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.662602567 +0000 UTC m=+0.057086272 container create 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:42:01 np0005481065 systemd[1]: Started libpod-conmon-2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60.scope.
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.642184494 +0000 UTC m=+0.036668189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:42:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.769442814 +0000 UTC m=+0.163926489 container init 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.782933142 +0000 UTC m=+0.177416847 container start 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.789452235 +0000 UTC m=+0.183935910 container attach 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:42:01 np0005481065 dreamy_tharp[427974]: 167 167
Oct 11 05:42:01 np0005481065 systemd[1]: libpod-2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60.scope: Deactivated successfully.
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.792708526 +0000 UTC m=+0.187192221 container died 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:42:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ceee0bdfbaa1b4efabc20aab84948bcb36412fa9f8414095e00553d4e1ddb9d7-merged.mount: Deactivated successfully.
Oct 11 05:42:01 np0005481065 podman[427958]: 2025-10-11 09:42:01.839754796 +0000 UTC m=+0.234238461 container remove 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:42:01 np0005481065 systemd[1]: libpod-conmon-2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60.scope: Deactivated successfully.
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.887 2 DEBUG nova.compute.manager [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.889 2 DEBUG oslo_concurrency.lockutils [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.889 2 DEBUG oslo_concurrency.lockutils [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.890 2 DEBUG oslo_concurrency.lockutils [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:01 np0005481065 nova_compute[260935]: 2025-10-11 09:42:01.890 2 DEBUG nova.compute.manager [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Processing event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:42:02 np0005481065 podman[428039]: 2025-10-11 09:42:02.130682554 +0000 UTC m=+0.078277296 container create 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:42:02 np0005481065 systemd[1]: Started libpod-conmon-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope.
Oct 11 05:42:02 np0005481065 podman[428039]: 2025-10-11 09:42:02.097949456 +0000 UTC m=+0.045544258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:42:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:42:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:42:02 np0005481065 podman[428039]: 2025-10-11 09:42:02.225899335 +0000 UTC m=+0.173494047 container init 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:42:02 np0005481065 podman[428039]: 2025-10-11 09:42:02.23785743 +0000 UTC m=+0.185452152 container start 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 05:42:02 np0005481065 podman[428039]: 2025-10-11 09:42:02.242809919 +0000 UTC m=+0.190417602 container attach 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.439 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.441 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175722.439016, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.442 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Started (Lifecycle Event)#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.447 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.453 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance spawned successfully.#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.454 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.463 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.468 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.482 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.483 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.484 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.484 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.485 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.486 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.494 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.494 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175722.4393353, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.495 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.526 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.531 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175722.446989, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.532 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.557 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.564 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.570 2 INFO nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 7.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.571 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.589 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.639 2 INFO nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 8.47 seconds to build instance.#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.707 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:02.773 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:02 np0005481065 nova_compute[260935]: 2025-10-11 09:42:02.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:02.775 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]: {
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "osd_id": 2,
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "type": "bluestore"
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:    },
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "osd_id": 0,
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "type": "bluestore"
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:    },
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "osd_id": 1,
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:        "type": "bluestore"
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]:    }
Oct 11 05:42:03 np0005481065 pedantic_tharp[428056]: }
Oct 11 05:42:03 np0005481065 systemd[1]: libpod-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope: Deactivated successfully.
Oct 11 05:42:03 np0005481065 systemd[1]: libpod-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope: Consumed 1.000s CPU time.
Oct 11 05:42:03 np0005481065 conmon[428056]: conmon 44623b47d2b5fdb018a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope/container/memory.events
Oct 11 05:42:03 np0005481065 podman[428039]: 2025-10-11 09:42:03.25218555 +0000 UTC m=+1.199780262 container died 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:42:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9-merged.mount: Deactivated successfully.
Oct 11 05:42:03 np0005481065 podman[428039]: 2025-10-11 09:42:03.310090124 +0000 UTC m=+1.257684836 container remove 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:42:03 np0005481065 systemd[1]: libpod-conmon-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope: Deactivated successfully.
Oct 11 05:42:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:42:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:42:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:42:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:42:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e256c990-9bc8-458f-9fde-17aef5a01017 does not exist
Oct 11 05:42:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev afa43252-7147-415e-9dd9-c0371a91c89e does not exist
Oct 11 05:42:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:42:03 np0005481065 nova_compute[260935]: 2025-10-11 09:42:03.972 2 DEBUG nova.compute.manager [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:03 np0005481065 nova_compute[260935]: 2025-10-11 09:42:03.973 2 DEBUG oslo_concurrency.lockutils [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:03 np0005481065 nova_compute[260935]: 2025-10-11 09:42:03.973 2 DEBUG oslo_concurrency.lockutils [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:03 np0005481065 nova_compute[260935]: 2025-10-11 09:42:03.974 2 DEBUG oslo_concurrency.lockutils [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:03 np0005481065 nova_compute[260935]: 2025-10-11 09:42:03.974 2 DEBUG nova.compute.manager [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:03 np0005481065 nova_compute[260935]: 2025-10-11 09:42:03.974 2 WARNING nova.compute.manager [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:42:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:42:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.671 2 DEBUG nova.objects.instance [None req-5aa95985-9947-4d14-b787-44d2c9d016d5 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.693 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175724.6935637, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.694 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:42:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.715 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.723 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.752 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 11 05:42:04 np0005481065 nova_compute[260935]: 2025-10-11 09:42:04.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:05 np0005481065 kernel: tapa65667d8-0c (unregistering): left promiscuous mode
Oct 11 05:42:05 np0005481065 NetworkManager[44960]: <info>  [1760175725.2450] device (tapa65667d8-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:42:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:05Z|01686|binding|INFO|Releasing lport a65667d8-0cc9-498a-9415-8fcecf5881f3 from this chassis (sb_readonly=0)
Oct 11 05:42:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:05Z|01687|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 down in Southbound
Oct 11 05:42:05 np0005481065 nova_compute[260935]: 2025-10-11 09:42:05.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:05Z|01688|binding|INFO|Removing iface tapa65667d8-0c ovn-installed in OVS
Oct 11 05:42:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.269 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.271 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 unbound from our chassis#033[00m
Oct 11 05:42:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.273 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:42:05 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[305e2b54-9248-4397-a529-cc457f42ff49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:42:05 np0005481065 nova_compute[260935]: 2025-10-11 09:42:05.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:05 np0005481065 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 11 05:42:05 np0005481065 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d00000094.scope: Consumed 3.522s CPU time.
Oct 11 05:42:05 np0005481065 systemd-machined[215705]: Machine qemu-172-instance-00000094 terminated.
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:42:05 np0005481065 nova_compute[260935]: 2025-10-11 09:42:05.452 2 DEBUG nova.compute.manager [None req-5aa95985-9947-4d14-b787-44d2c9d016d5 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:42:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.066 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.066 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.067 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.067 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.068 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.068 2 WARNING nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.069 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.069 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.070 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.070 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.070 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:06 np0005481065 nova_compute[260935]: 2025-10-11 09:42:06.071 2 WARNING nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:42:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:42:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.3 total, 600.0 interval#012Cumulative writes: 35K writes, 140K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.75 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2305 writes, 8726 keys, 2305 commit groups, 1.0 writes per commit group, ingest: 10.50 MB, 0.02 MB/s#012Interval WAL: 2305 writes, 962 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 05:42:07 np0005481065 nova_compute[260935]: 2025-10-11 09:42:07.071 2 INFO nova.compute.manager [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Resuming#033[00m
Oct 11 05:42:07 np0005481065 nova_compute[260935]: 2025-10-11 09:42:07.073 2 DEBUG nova.objects.instance [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'flavor' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:07 np0005481065 nova_compute[260935]: 2025-10-11 09:42:07.117 2 DEBUG oslo_concurrency.lockutils [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:42:07 np0005481065 nova_compute[260935]: 2025-10-11 09:42:07.118 2 DEBUG oslo_concurrency.lockutils [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:42:07 np0005481065 nova_compute[260935]: 2025-10-11 09:42:07.118 2 DEBUG nova.network.neutron [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:42:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 11 05:42:07 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:08 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:08.778 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:08 np0005481065 podman[428171]: 2025-10-11 09:42:08.823831221 +0000 UTC m=+0.108348459 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.943 2 DEBUG nova.network.neutron [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.966 2 DEBUG oslo_concurrency.lockutils [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.973 2 DEBUG nova.virt.libvirt.vif [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:42:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:42:05Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.974 2 DEBUG nova.network.os_vif_util [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.975 2 DEBUG nova.network.os_vif_util [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.976 2 DEBUG os_vif [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa65667d8-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa65667d8-0c, col_values=(('external_ids', {'iface-id': 'a65667d8-0cc9-498a-9415-8fcecf5881f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9b:b3', 'vm-uuid': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.985 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:42:08 np0005481065 nova_compute[260935]: 2025-10-11 09:42:08.985 2 INFO os_vif [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.011 2 DEBUG nova.objects.instance [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:09 np0005481065 kernel: tapa65667d8-0c: entered promiscuous mode
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:09Z|01689|binding|INFO|Claiming lport a65667d8-0cc9-498a-9415-8fcecf5881f3 for this chassis.
Oct 11 05:42:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:09Z|01690|binding|INFO|a65667d8-0cc9-498a-9415-8fcecf5881f3: Claiming fa:16:3e:38:9b:b3 10.100.0.10
Oct 11 05:42:09 np0005481065 NetworkManager[44960]: <info>  [1760175729.1180] manager: (tapa65667d8-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Oct 11 05:42:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.122 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.124 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 bound to our chassis#033[00m
Oct 11 05:42:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.125 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:42:09 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.126 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[626243bd-71b5-4257-a429-86dcacc85739]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:42:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:09Z|01691|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 up in Southbound
Oct 11 05:42:09 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:09Z|01692|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 ovn-installed in OVS
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:09 np0005481065 systemd-machined[215705]: New machine qemu-173-instance-00000094.
Oct 11 05:42:09 np0005481065 systemd[1]: Started Virtual Machine qemu-173-instance-00000094.
Oct 11 05:42:09 np0005481065 systemd-udevd[428205]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:42:09 np0005481065 NetworkManager[44960]: <info>  [1760175729.2089] device (tapa65667d8-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:42:09 np0005481065 NetworkManager[44960]: <info>  [1760175729.2112] device (tapa65667d8-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.292 2 DEBUG nova.compute.manager [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.293 2 DEBUG oslo_concurrency.lockutils [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.294 2 DEBUG oslo_concurrency.lockutils [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.294 2 DEBUG oslo_concurrency.lockutils [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.294 2 DEBUG nova.compute.manager [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.294 2 WARNING nova.compute.manager [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 11 05:42:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:09 np0005481065 nova_compute[260935]: 2025-10-11 09:42:09.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.824 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.824 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175730.8237205, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.824 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Started (Lifecycle Event)#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.848 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.861 2 DEBUG nova.compute.manager [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.861 2 DEBUG nova.objects.instance [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.866 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.888 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance running successfully.#033[00m
Oct 11 05:42:10 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.890 2 DEBUG nova.virt.libvirt.guest [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.891 2 DEBUG nova.compute.manager [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.891 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.892 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175730.8285172, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.892 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.920 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.923 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:10 np0005481065 nova_compute[260935]: 2025-10-11 09:42:10.953 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:42:11 np0005481065 nova_compute[260935]: 2025-10-11 09:42:11.374 2 DEBUG nova.compute.manager [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:11 np0005481065 nova_compute[260935]: 2025-10-11 09:42:11.374 2 DEBUG oslo_concurrency.lockutils [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:11 np0005481065 nova_compute[260935]: 2025-10-11 09:42:11.375 2 DEBUG oslo_concurrency.lockutils [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:11 np0005481065 nova_compute[260935]: 2025-10-11 09:42:11.375 2 DEBUG oslo_concurrency.lockutils [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:11 np0005481065 nova_compute[260935]: 2025-10-11 09:42:11.375 2 DEBUG nova.compute.manager [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:11 np0005481065 nova_compute[260935]: 2025-10-11 09:42:11.375 2 WARNING nova.compute.manager [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:42:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.086 2 DEBUG nova.objects.instance [None req-afc008e0-ff13-4fae-805f-b002ec8810ad a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.105 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175733.104935, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.131 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.135 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.156 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 11 05:42:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Oct 11 05:42:13 np0005481065 kernel: tapa65667d8-0c (unregistering): left promiscuous mode
Oct 11 05:42:13 np0005481065 NetworkManager[44960]: <info>  [1760175733.6167] device (tapa65667d8-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:42:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:13Z|01693|binding|INFO|Releasing lport a65667d8-0cc9-498a-9415-8fcecf5881f3 from this chassis (sb_readonly=0)
Oct 11 05:42:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:13Z|01694|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 down in Southbound
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:13 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:13Z|01695|binding|INFO|Removing iface tapa65667d8-0c ovn-installed in OVS
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:13 np0005481065 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 11 05:42:13 np0005481065 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d00000094.scope: Consumed 3.918s CPU time.
Oct 11 05:42:13 np0005481065 systemd-machined[215705]: Machine qemu-173-instance-00000094 terminated.
Oct 11 05:42:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.714 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.715 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 unbound from our chassis#033[00m
Oct 11 05:42:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.716 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:42:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.717 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f19fe65-7e50-4e7a-be67-64ae6e5f4ce0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.796 2 DEBUG nova.compute.manager [None req-afc008e0-ff13-4fae-805f-b002ec8810ad a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.949 2 DEBUG nova.compute.manager [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.949 2 DEBUG oslo_concurrency.lockutils [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.949 2 DEBUG oslo_concurrency.lockutils [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.950 2 DEBUG oslo_concurrency.lockutils [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.950 2 DEBUG nova.compute.manager [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:13 np0005481065 nova_compute[260935]: 2025-10-11 09:42:13.950 2 WARNING nova.compute.manager [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state suspending.#033[00m
Oct 11 05:42:14 np0005481065 nova_compute[260935]: 2025-10-11 09:42:14.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:14 np0005481065 nova_compute[260935]: 2025-10-11 09:42:14.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:15.238 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:15.239 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 05:42:15 np0005481065 podman[428276]: 2025-10-11 09:42:15.804876191 +0000 UTC m=+0.099876612 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.049 2 DEBUG nova.compute.manager [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.050 2 DEBUG oslo_concurrency.lockutils [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.050 2 DEBUG oslo_concurrency.lockutils [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.051 2 DEBUG oslo_concurrency.lockutils [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.051 2 DEBUG nova.compute.manager [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.052 2 WARNING nova.compute.manager [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state None.#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.138 2 INFO nova.compute.manager [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Resuming#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.139 2 DEBUG nova.objects.instance [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'flavor' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.183 2 DEBUG oslo_concurrency.lockutils [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.183 2 DEBUG oslo_concurrency.lockutils [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:42:16 np0005481065 nova_compute[260935]: 2025-10-11 09:42:16.184 2 DEBUG nova.network.neutron [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:42:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.070 2 DEBUG nova.network.neutron [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.088 2 DEBUG oslo_concurrency.lockutils [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.096 2 DEBUG nova.virt.libvirt.vif [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:42:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:42:13Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.098 2 DEBUG nova.network.os_vif_util [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.099 2 DEBUG nova.network.os_vif_util [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.099 2 DEBUG os_vif [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa65667d8-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa65667d8-0c, col_values=(('external_ids', {'iface-id': 'a65667d8-0cc9-498a-9415-8fcecf5881f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9b:b3', 'vm-uuid': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.108 2 INFO os_vif [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.132 2 DEBUG nova.objects.instance [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:18 np0005481065 kernel: tapa65667d8-0c: entered promiscuous mode
Oct 11 05:42:18 np0005481065 NetworkManager[44960]: <info>  [1760175738.2144] manager: (tapa65667d8-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Oct 11 05:42:18 np0005481065 systemd-udevd[428307]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:42:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:18Z|01696|binding|INFO|Claiming lport a65667d8-0cc9-498a-9415-8fcecf5881f3 for this chassis.
Oct 11 05:42:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:18Z|01697|binding|INFO|a65667d8-0cc9-498a-9415-8fcecf5881f3: Claiming fa:16:3e:38:9b:b3 10.100.0.10
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.256 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '7', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.258 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 bound to our chassis#033[00m
Oct 11 05:42:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.259 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:42:18 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.260 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61b85472-6ac2-4b91-907a-15b6e3f2df99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:42:18 np0005481065 NetworkManager[44960]: <info>  [1760175738.2736] device (tapa65667d8-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:42:18 np0005481065 NetworkManager[44960]: <info>  [1760175738.2762] device (tapa65667d8-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:42:18 np0005481065 systemd-machined[215705]: New machine qemu-174-instance-00000094.
Oct 11 05:42:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:18Z|01698|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 ovn-installed in OVS
Oct 11 05:42:18 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:18Z|01699|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 up in Southbound
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:18 np0005481065 systemd[1]: Started Virtual Machine qemu-174-instance-00000094.
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.914 2 DEBUG nova.compute.manager [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.915 2 DEBUG oslo_concurrency.lockutils [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.916 2 DEBUG oslo_concurrency.lockutils [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.916 2 DEBUG oslo_concurrency.lockutils [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.916 2 DEBUG nova.compute.manager [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:18 np0005481065 nova_compute[260935]: 2025-10-11 09:42:18.917 2 WARNING nova.compute.manager [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 11 05:42:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 526 KiB/s rd, 22 op/s
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.645 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.647 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175739.6445706, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.647 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Started (Lifecycle Event)#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.673 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.680 2 DEBUG nova.compute.manager [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.681 2 DEBUG nova.objects.instance [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.691 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.705 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance running successfully.#033[00m
Oct 11 05:42:19 np0005481065 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.711 2 DEBUG nova.virt.libvirt.guest [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.712 2 DEBUG nova.compute.manager [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.714 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.715 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175739.6523895, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.715 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.748 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.752 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.783 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 11 05:42:19 np0005481065 podman[428361]: 2025-10-11 09:42:19.795382205 +0000 UTC m=+0.090512609 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 05:42:19 np0005481065 podman[428362]: 2025-10-11 09:42:19.798970036 +0000 UTC m=+0.087544087 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 11 05:42:19 np0005481065 nova_compute[260935]: 2025-10-11 09:42:19.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.607530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740607563, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1911, "num_deletes": 251, "total_data_size": 3139384, "memory_usage": 3193424, "flush_reason": "Manual Compaction"}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740625710, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3075291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61413, "largest_seqno": 63323, "table_properties": {"data_size": 3066526, "index_size": 5450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17837, "raw_average_key_size": 20, "raw_value_size": 3049132, "raw_average_value_size": 3441, "num_data_blocks": 242, "num_entries": 886, "num_filter_entries": 886, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175535, "oldest_key_time": 1760175535, "file_creation_time": 1760175740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 18218 microseconds, and 5878 cpu microseconds.
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.625749) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3075291 bytes OK
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.625768) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.627821) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.627845) EVENT_LOG_v1 {"time_micros": 1760175740627842, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.627862) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3131276, prev total WAL file size 3131276, number of live WAL files 2.
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.628612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3003KB)], [146(9533KB)]
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740628641, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12837997, "oldest_snapshot_seqno": -1}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8170 keys, 11163371 bytes, temperature: kUnknown
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740679372, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11163371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11109275, "index_size": 32543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 213575, "raw_average_key_size": 26, "raw_value_size": 10964101, "raw_average_value_size": 1341, "num_data_blocks": 1268, "num_entries": 8170, "num_filter_entries": 8170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.679592) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11163371 bytes
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.680811) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.7 rd, 219.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.3 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 8684, records dropped: 514 output_compression: NoCompression
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.680840) EVENT_LOG_v1 {"time_micros": 1760175740680822, "job": 90, "event": "compaction_finished", "compaction_time_micros": 50798, "compaction_time_cpu_micros": 23533, "output_level": 6, "num_output_files": 1, "total_output_size": 11163371, "num_input_records": 8684, "num_output_records": 8170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740681334, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740682949, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.628537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:42:20 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:42:21 np0005481065 nova_compute[260935]: 2025-10-11 09:42:21.009 2 DEBUG nova.compute.manager [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:21 np0005481065 nova_compute[260935]: 2025-10-11 09:42:21.010 2 DEBUG oslo_concurrency.lockutils [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:21 np0005481065 nova_compute[260935]: 2025-10-11 09:42:21.010 2 DEBUG oslo_concurrency.lockutils [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:21 np0005481065 nova_compute[260935]: 2025-10-11 09:42:21.011 2 DEBUG oslo_concurrency.lockutils [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:21 np0005481065 nova_compute[260935]: 2025-10-11 09:42:21.011 2 DEBUG nova.compute.manager [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:21 np0005481065 nova_compute[260935]: 2025-10-11 09:42:21.012 2 WARNING nova.compute.manager [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:42:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.315 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.316 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.316 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.316 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.317 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.318 2 INFO nova.compute.manager [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Terminating instance#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.319 2 DEBUG nova.compute.manager [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:42:22 np0005481065 kernel: tapa65667d8-0c (unregistering): left promiscuous mode
Oct 11 05:42:22 np0005481065 NetworkManager[44960]: <info>  [1760175742.3632] device (tapa65667d8-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:22Z|01700|binding|INFO|Releasing lport a65667d8-0cc9-498a-9415-8fcecf5881f3 from this chassis (sb_readonly=0)
Oct 11 05:42:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:22Z|01701|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 down in Southbound
Oct 11 05:42:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:22Z|01702|binding|INFO|Removing iface tapa65667d8-0c ovn-installed in OVS
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.385 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:42:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.386 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 unbound from our chassis#033[00m
Oct 11 05:42:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.386 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:42:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc723e8-d568-47bb-9ef4-200b29c3488f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:22 np0005481065 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 11 05:42:22 np0005481065 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000094.scope: Consumed 3.912s CPU time.
Oct 11 05:42:22 np0005481065 systemd-machined[215705]: Machine qemu-174-instance-00000094 terminated.
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.562 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance destroyed successfully.#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.563 2 DEBUG nova.objects.instance [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'resources' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.579 2 DEBUG nova.virt.libvirt.vif [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:42:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:42:19Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.579 2 DEBUG nova.network.os_vif_util [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.580 2 DEBUG nova.network.os_vif_util [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.580 2 DEBUG os_vif [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa65667d8-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:22 np0005481065 nova_compute[260935]: 2025-10-11 09:42:22.587 2 INFO os_vif [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.057 2 INFO nova.virt.libvirt.driver [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deleting instance files /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_del#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.058 2 INFO nova.virt.libvirt.driver [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deletion of /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_del complete#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.366 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.367 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.367 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.367 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.368 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.368 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.369 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.369 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.369 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.370 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.370 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.370 2 WARNING nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:42:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 9 op/s
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.444 2 INFO nova.compute.manager [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.445 2 DEBUG oslo.service.loopingcall [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.445 2 DEBUG nova.compute.manager [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:42:23 np0005481065 nova_compute[260935]: 2025-10-11 09:42:23.446 2 DEBUG nova.network.neutron [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.294 2 DEBUG nova.network.neutron [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.312 2 INFO nova.compute.manager [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.361 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.362 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.469 2 DEBUG oslo_concurrency.processutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:42:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:42:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:42:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:42:24 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2217349222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.950 2 DEBUG oslo_concurrency.processutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.959 2 DEBUG nova.compute.provider_tree [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.980 2 DEBUG nova.scheduler.client.report [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:42:24 np0005481065 nova_compute[260935]: 2025-10-11 09:42:24.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:25 np0005481065 nova_compute[260935]: 2025-10-11 09:42:25.009 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:25 np0005481065 nova_compute[260935]: 2025-10-11 09:42:25.035 2 INFO nova.scheduler.client.report [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Deleted allocations for instance 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1#033[00m
Oct 11 05:42:25 np0005481065 nova_compute[260935]: 2025-10-11 09:42:25.108 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s rd, 4 op/s
Oct 11 05:42:25 np0005481065 nova_compute[260935]: 2025-10-11 09:42:25.478 2 DEBUG nova.compute.manager [req-a54cf32c-c35c-4524-a603-479e7211bf62 req-b6184f53-2926-4a77-9632-389b80798845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-deleted-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:42:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:26Z|01703|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:42:26 np0005481065 ovn_controller[152945]: 2025-10-11T09:42:26Z|01704|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:42:26 np0005481065 nova_compute[260935]: 2025-10-11 09:42:26.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:42:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/393553716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:42:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:42:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/393553716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:42:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 11 05:42:27 np0005481065 nova_compute[260935]: 2025-10-11 09:42:27.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 05:42:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:30 np0005481065 nova_compute[260935]: 2025-10-11 09:42:30.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 05:42:31 np0005481065 nova_compute[260935]: 2025-10-11 09:42:31.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:32 np0005481065 nova_compute[260935]: 2025-10-11 09:42:32.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 05:42:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:35 np0005481065 nova_compute[260935]: 2025-10-11 09:42:35.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 05:42:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 05:42:37 np0005481065 nova_compute[260935]: 2025-10-11 09:42:37.562 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175742.5586293, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:42:37 np0005481065 nova_compute[260935]: 2025-10-11 09:42:37.564 2 INFO nova.compute.manager [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:42:37 np0005481065 nova_compute[260935]: 2025-10-11 09:42:37.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:37 np0005481065 nova_compute[260935]: 2025-10-11 09:42:37.599 2 DEBUG nova.compute.manager [None req-bbd4cac3-062e-4082-b035-f84bca38eadd - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:42:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 11 05:42:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:39 np0005481065 podman[428466]: 2025-10-11 09:42:39.841455001 +0000 UTC m=+0.128955238 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:42:40 np0005481065 nova_compute[260935]: 2025-10-11 09:42:40.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:42 np0005481065 nova_compute[260935]: 2025-10-11 09:42:42.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:45 np0005481065 nova_compute[260935]: 2025-10-11 09:42:45.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:46 np0005481065 podman[428485]: 2025-10-11 09:42:46.813297264 +0000 UTC m=+0.112308741 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:42:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:47 np0005481065 nova_compute[260935]: 2025-10-11 09:42:47.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:50 np0005481065 nova_compute[260935]: 2025-10-11 09:42:50.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:50 np0005481065 nova_compute[260935]: 2025-10-11 09:42:50.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:50 np0005481065 nova_compute[260935]: 2025-10-11 09:42:50.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:42:50 np0005481065 podman[428506]: 2025-10-11 09:42:50.779810324 +0000 UTC m=+0.084009697 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:42:50 np0005481065 podman[428507]: 2025-10-11 09:42:50.818965203 +0000 UTC m=+0.110077029 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:42:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:52 np0005481065 nova_compute[260935]: 2025-10-11 09:42:52.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:52 np0005481065 nova_compute[260935]: 2025-10-11 09:42:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:52 np0005481065 nova_compute[260935]: 2025-10-11 09:42:52.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:42:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:53 np0005481065 nova_compute[260935]: 2025-10-11 09:42:53.819 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:42:53 np0005481065 nova_compute[260935]: 2025-10-11 09:42:53.820 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:42:53 np0005481065 nova_compute[260935]: 2025-10-11 09:42:53.820 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:42:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:42:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:42:55
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'backups']
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.076 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.094 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.095 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.095 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.096 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.122 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.124 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.124 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.124 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.125 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:42:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:42:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:42:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257212598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.668 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.771 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.772 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.772 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.775 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.775 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.779 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:55 np0005481065 nova_compute[260935]: 2025-10-11 09:42:55.779 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.048 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.049 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2766MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.049 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.050 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.135 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.233 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:42:56 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:42:56 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089073776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.746 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.752 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.770 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.804 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:42:56 np0005481065 nova_compute[260935]: 2025-10-11 09:42:56.805 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:42:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:57 np0005481065 nova_compute[260935]: 2025-10-11 09:42:57.412 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:42:57 np0005481065 nova_compute[260935]: 2025-10-11 09:42:57.413 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:42:57 np0005481065 nova_compute[260935]: 2025-10-11 09:42:57.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:42:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.494 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.495 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.516 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.581 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.582 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.591 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.592 2 INFO nova.compute.claims [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:42:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:42:59 np0005481065 nova_compute[260935]: 2025-10-11 09:42:59.737 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:43:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:43:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853917308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.238 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.248 2 DEBUG nova.compute.provider_tree [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.267 2 DEBUG nova.scheduler.client.report [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.299 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.300 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.365 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.368 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.394 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.415 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.508 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.510 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.511 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Creating image(s)
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.546 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.580 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.609 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.614 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.706 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.706 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.707 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.707 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.734 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.739 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f03604e-0a25-4f22-807b-11f2e654be90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:43:00 np0005481065 nova_compute[260935]: 2025-10-11 09:43:00.845 2 DEBUG nova.policy [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b3a6de4bc924868b4e73b0a23a89090', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd599eac75aee4193a971c2c157a326a8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.033 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f03604e-0a25-4f22-807b-11f2e654be90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.116 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] resizing rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.208 2 DEBUG nova.objects.instance [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f03604e-0a25-4f22-807b-11f2e654be90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.225 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.225 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Ensure instance console log exists: /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.226 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.226 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.227 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:43:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:43:01 np0005481065 nova_compute[260935]: 2025-10-11 09:43:01.953 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Successfully created port: 0079b745-efa5-4641-a180-fee85fb30a5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 05:43:02 np0005481065 nova_compute[260935]: 2025-10-11 09:43:02.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:43:02 np0005481065 nova_compute[260935]: 2025-10-11 09:43:02.671 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Successfully updated port: 0079b745-efa5-4641-a180-fee85fb30a5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 05:43:02 np0005481065 nova_compute[260935]: 2025-10-11 09:43:02.693 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:43:02 np0005481065 nova_compute[260935]: 2025-10-11 09:43:02.693 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:43:02 np0005481065 nova_compute[260935]: 2025-10-11 09:43:02.693 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 05:43:03 np0005481065 nova_compute[260935]: 2025-10-11 09:43:03.057 2 DEBUG nova.compute.manager [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 05:43:03 np0005481065 nova_compute[260935]: 2025-10-11 09:43:03.058 2 DEBUG nova.compute.manager [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing instance network info cache due to event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 05:43:03 np0005481065 nova_compute[260935]: 2025-10-11 09:43:03.059 2 DEBUG oslo_concurrency.lockutils [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 05:43:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:43:03 np0005481065 nova_compute[260935]: 2025-10-11 09:43:03.822 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 05:43:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.116 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.142 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.142 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance network_info: |[{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.143 2 DEBUG oslo_concurrency.lockutils [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.143 2 DEBUG nova.network.neutron [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.148 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start _get_guest_xml network_info=[{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.156 2 WARNING nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.167 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.168 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.174 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.175 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.176 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.176 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.177 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.177 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.178 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.178 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.178 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.179 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.179 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.179 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.180 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.180 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.185 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.241695463 +0000 UTC m=+0.073916294 container create 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.208503143 +0000 UTC m=+0.040724064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:05 np0005481065 systemd[1]: Started libpod-conmon-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope.
Oct 11 05:43:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.381116434 +0000 UTC m=+0.213337335 container init 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.392591366 +0000 UTC m=+0.224812237 container start 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.396785443 +0000 UTC m=+0.229006374 container attach 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 05:43:05 np0005481065 goofy_bardeen[429078]: 167 167
Oct 11 05:43:05 np0005481065 systemd[1]: libpod-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope: Deactivated successfully.
Oct 11 05:43:05 np0005481065 conmon[429078]: conmon 1a092983c130eef7c454 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope/container/memory.events
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.403868982 +0000 UTC m=+0.236089883 container died 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 05:43:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3993dc1e7829b57b12dbac5a3e95fbd1c455d0de97d3381740ffb53c7bee289c-merged.mount: Deactivated successfully.
Oct 11 05:43:05 np0005481065 podman[429060]: 2025-10-11 09:43:05.463614728 +0000 UTC m=+0.295835579 container remove 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:43:05 np0005481065 systemd[1]: libpod-conmon-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope: Deactivated successfully.
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002973310725564889 of space, bias 1.0, pg target 0.8919932176694666 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:43:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:43:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:43:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733831700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.662 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.694 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:05 np0005481065 nova_compute[260935]: 2025-10-11 09:43:05.699 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:05 np0005481065 podman[429119]: 2025-10-11 09:43:05.700238525 +0000 UTC m=+0.063015219 container create bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:43:05 np0005481065 systemd[1]: Started libpod-conmon-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope.
Oct 11 05:43:05 np0005481065 podman[429119]: 2025-10-11 09:43:05.670764718 +0000 UTC m=+0.033541502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:05 np0005481065 podman[429119]: 2025-10-11 09:43:05.791268968 +0000 UTC m=+0.154045662 container init bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:43:05 np0005481065 podman[429119]: 2025-10-11 09:43:05.798876141 +0000 UTC m=+0.161652835 container start bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:43:05 np0005481065 podman[429119]: 2025-10-11 09:43:05.802187824 +0000 UTC m=+0.164964548 container attach bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:43:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:43:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707526989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.143 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.145 2 DEBUG nova.virt.libvirt.vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1990672178',display_name='tempest-TestSnapshotPattern-server-1990672178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1990672178',id=149,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-eoa49vly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:00Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=1f03604e-0a25-4f22-807b-11f2e654be90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.145 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.146 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.147 2 DEBUG nova.objects.instance [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f03604e-0a25-4f22-807b-11f2e654be90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.162 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <uuid>1f03604e-0a25-4f22-807b-11f2e654be90</uuid>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <name>instance-00000095</name>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestSnapshotPattern-server-1990672178</nova:name>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:43:05</nova:creationTime>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:user uuid="2b3a6de4bc924868b4e73b0a23a89090">tempest-TestSnapshotPattern-1818755654-project-member</nova:user>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:project uuid="d599eac75aee4193a971c2c157a326a8">tempest-TestSnapshotPattern-1818755654</nova:project>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <nova:port uuid="0079b745-efa5-4641-a180-fee85fb30a5d">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <entry name="serial">1f03604e-0a25-4f22-807b-11f2e654be90</entry>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <entry name="uuid">1f03604e-0a25-4f22-807b-11f2e654be90</entry>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1f03604e-0a25-4f22-807b-11f2e654be90_disk">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/1f03604e-0a25-4f22-807b-11f2e654be90_disk.config">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:88:b5:6b"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <target dev="tap0079b745-ef"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/console.log" append="off"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:43:06 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:43:06 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:43:06 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:43:06 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.162 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Preparing to wait for external event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.162 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.163 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.163 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.164 2 DEBUG nova.virt.libvirt.vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1990672178',display_name='tempest-TestSnapshotPattern-server-1990672178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1990672178',id=149,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-eoa49vly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:00Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=1f03604e-0a25-4f22-807b-11f2e654be90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.164 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.165 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.166 2 DEBUG os_vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0079b745-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0079b745-ef, col_values=(('external_ids', {'iface-id': '0079b745-efa5-4641-a180-fee85fb30a5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:b5:6b', 'vm-uuid': '1f03604e-0a25-4f22-807b-11f2e654be90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:06 np0005481065 NetworkManager[44960]: <info>  [1760175786.1769] manager: (tap0079b745-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.186 2 INFO os_vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef')#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.243 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.244 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.244 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No VIF found with MAC fa:16:3e:88:b5:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.245 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Using config drive#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.274 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.513 2 DEBUG nova.network.neutron [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updated VIF entry in instance network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.515 2 DEBUG nova.network.neutron [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.543 2 DEBUG oslo_concurrency.lockutils [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.642 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Creating config drive at /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.653 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp47vqskgy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.820 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp47vqskgy" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.868 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:06 np0005481065 nova_compute[260935]: 2025-10-11 09:43:06.873 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.083 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.085 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deleting local config drive /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config because it was imported into RBD.#033[00m
Oct 11 05:43:07 np0005481065 kernel: tap0079b745-ef: entered promiscuous mode
Oct 11 05:43:07 np0005481065 NetworkManager[44960]: <info>  [1760175787.1631] manager: (tap0079b745-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Oct 11 05:43:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:07Z|01705|binding|INFO|Claiming lport 0079b745-efa5-4641-a180-fee85fb30a5d for this chassis.
Oct 11 05:43:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:07Z|01706|binding|INFO|0079b745-efa5-4641-a180-fee85fb30a5d: Claiming fa:16:3e:88:b5:6b 10.100.0.3
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.187 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:b5:6b 10.100.0.3'], port_security=['fa:16:3e:88:b5:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f03604e-0a25-4f22-807b-11f2e654be90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0079b745-efa5-4641-a180-fee85fb30a5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.190 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0079b745-efa5-4641-a180-fee85fb30a5d in datapath 424246b3-55aa-428f-9446-55ed3d626b5d bound to our chassis#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.195 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 424246b3-55aa-428f-9446-55ed3d626b5d#033[00m
Oct 11 05:43:07 np0005481065 systemd-udevd[429973]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.216 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6963919c-7c66-4325-ba66-f0fbd2d5fc4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.217 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap424246b3-51 in ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.221 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap424246b3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.222 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c928771b-2295-4196-b783-9d27e1aaa233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70c32c64-beea-4c21-b844-170eedf12942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 systemd-machined[215705]: New machine qemu-175-instance-00000095.
Oct 11 05:43:07 np0005481065 NetworkManager[44960]: <info>  [1760175787.2352] device (tap0079b745-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:43:07 np0005481065 NetworkManager[44960]: <info>  [1760175787.2366] device (tap0079b745-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:43:07 np0005481065 systemd[1]: Started Virtual Machine qemu-175-instance-00000095.
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.252 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[42c78a17-afcf-4582-9b8c-77dbf16147c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:07Z|01707|binding|INFO|Setting lport 0079b745-efa5-4641-a180-fee85fb30a5d ovn-installed in OVS
Oct 11 05:43:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:07Z|01708|binding|INFO|Setting lport 0079b745-efa5-4641-a180-fee85fb30a5d up in Southbound
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4cedb6cf-e0f8-467b-bd15-8780791537bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.316 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[885109f9-9601-498e-ab90-110d82570d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 systemd-udevd[430004]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:43:07 np0005481065 NetworkManager[44960]: <info>  [1760175787.3249] manager: (tap424246b3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/643)
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.324 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfd30c1-c493-468e-928c-01113f9e9586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.365 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd6d0ad-cbac-46df-b522-cde97882a3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.369 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8bccf3ff-b7e6-4b90-8f0f-bfd0c25fa0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 NetworkManager[44960]: <info>  [1760175787.3977] device (tap424246b3-50): carrier: link connected
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.408 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3d05cff4-1253-4895-8fef-5f69d198690f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.433 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9a4072-55a6-48eb-a109-679309355bb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 21658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430551, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18538dc6-d6ad-45e4-a636-c7d88d1c73d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:e02b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763210, 'tstamp': 763210}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430652, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.481 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92c39f9a-2abd-4dcc-a4a0-e7051c53ef55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 21658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 430753, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.520 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37c305d5-e431-439a-9768-e792c449625b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.594 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc490561-7ebd-451b-83e7-f98b5408c4d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.596 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.596 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.597 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424246b3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:07 np0005481065 NetworkManager[44960]: <info>  [1760175787.6007] manager: (tap424246b3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct 11 05:43:07 np0005481065 kernel: tap424246b3-50: entered promiscuous mode
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap424246b3-50, col_values=(('external_ids', {'iface-id': 'e23ac8d5-df18-43b3-bb12-29f5cd9ce802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:07 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:07Z|01709|binding|INFO|Releasing lport e23ac8d5-df18-43b3-bb12-29f5cd9ce802 from this chassis (sb_readonly=0)
Oct 11 05:43:07 np0005481065 nova_compute[260935]: 2025-10-11 09:43:07.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.629 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/424246b3-55aa-428f-9446-55ed3d626b5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/424246b3-55aa-428f-9446-55ed3d626b5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.630 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f6812a6f-c315-41d5-9694-687166900cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.631 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: global
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    log         /dev/log local0 debug
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    log-tag     haproxy-metadata-proxy-424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    user        root
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    group       root
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    maxconn     1024
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    pidfile     /var/lib/neutron/external/pids/424246b3-55aa-428f-9446-55ed3d626b5d.pid.haproxy
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    daemon
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: defaults
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    log global
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    mode http
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    option httplog
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    option dontlognull
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    option http-server-close
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    option forwardfor
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    retries                 3
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    timeout http-request    30s
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    timeout connect         30s
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    timeout client          32s
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    timeout server          32s
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    timeout http-keep-alive 30s
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: listen listener
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    bind 169.254.169.254:80
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]:    http-request add-header X-OVN-Network-ID 424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 05:43:07 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.633 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'env', 'PROCESS_TAG=haproxy-424246b3-55aa-428f-9446-55ed3d626b5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/424246b3-55aa-428f-9446-55ed3d626b5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]: [
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:    {
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "available": false,
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "ceph_device": false,
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "lsm_data": {},
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "lvs": [],
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "path": "/dev/sr0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "rejected_reasons": [
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "Has a FileSystem",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "Insufficient space (<5GB)"
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        ],
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        "sys_api": {
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "actuators": null,
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "device_nodes": "sr0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "devname": "sr0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "human_readable_size": "482.00 KB",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "id_bus": "ata",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "model": "QEMU DVD-ROM",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "nr_requests": "2",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "parent": "/dev/sr0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "partitions": {},
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "path": "/dev/sr0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "removable": "1",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "rev": "2.5+",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "ro": "0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "rotational": "0",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "sas_address": "",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "sas_device_handle": "",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "scheduler_mode": "mq-deadline",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "sectors": 0,
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "sectorsize": "2048",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "size": 493568.0,
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "support_discard": "2048",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "type": "disk",
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:            "vendor": "QEMU"
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:        }
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]:    }
Oct 11 05:43:07 np0005481065 bold_lovelace[429157]: ]
Oct 11 05:43:07 np0005481065 systemd[1]: libpod-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope: Deactivated successfully.
Oct 11 05:43:07 np0005481065 systemd[1]: libpod-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope: Consumed 1.927s CPU time.
Oct 11 05:43:07 np0005481065 podman[431664]: 2025-10-11 09:43:07.798463514 +0000 UTC m=+0.046495525 container died bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44-merged.mount: Deactivated successfully.
Oct 11 05:43:07 np0005481065 podman[431664]: 2025-10-11 09:43:07.873437997 +0000 UTC m=+0.121469918 container remove bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:43:07 np0005481065 systemd[1]: libpod-conmon-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope: Deactivated successfully.
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:07 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 10b8bcda-9811-49d5-9e54-b1bc8282ec99 does not exist
Oct 11 05:43:07 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f599ce44-41e0-4c9e-901e-80f6b44f82a5 does not exist
Oct 11 05:43:07 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9028ebff-1ccd-437f-9946-491a52ce5396 does not exist
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:43:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.010 2 DEBUG nova.compute.manager [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.010 2 DEBUG oslo_concurrency.lockutils [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.011 2 DEBUG oslo_concurrency.lockutils [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.011 2 DEBUG oslo_concurrency.lockutils [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.012 2 DEBUG nova.compute.manager [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Processing event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:43:08 np0005481065 podman[431719]: 2025-10-11 09:43:08.073953331 +0000 UTC m=+0.073864853 container create f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:43:08 np0005481065 systemd[1]: Started libpod-conmon-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157.scope.
Oct 11 05:43:08 np0005481065 podman[431719]: 2025-10-11 09:43:08.043315202 +0000 UTC m=+0.043226734 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 05:43:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86c02cc2cbc8761b2102a73fcb1e50eef8c8d1227adb1efe533f1256a4c9e7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:08 np0005481065 podman[431719]: 2025-10-11 09:43:08.187523506 +0000 UTC m=+0.187435098 container init f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:43:08 np0005481065 podman[431719]: 2025-10-11 09:43:08.195350616 +0000 UTC m=+0.195262168 container start f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:43:08 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : New worker (431836) forked
Oct 11 05:43:08 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : Loading success.
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.699 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175788.698853, 1f03604e-0a25-4f22-807b-11f2e654be90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.701 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Started (Lifecycle Event)#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.704 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.710 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.715 2 INFO nova.virt.libvirt.driver [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance spawned successfully.#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.716 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.727 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.734 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.745789454 +0000 UTC m=+0.073442641 container create 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.750 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.750 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.751 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.751 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.752 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.753 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.761 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175788.7011256, 1f03604e-0a25-4f22-807b-11f2e654be90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.762 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:43:08 np0005481065 systemd[1]: Started libpod-conmon-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope.
Oct 11 05:43:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.726831433 +0000 UTC m=+0.054484640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.826 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.831 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175788.7081282, 1f03604e-0a25-4f22-807b-11f2e654be90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.831 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.836644173 +0000 UTC m=+0.164297350 container init 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.842950539 +0000 UTC m=+0.170603746 container start 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.846749116 +0000 UTC m=+0.174402313 container attach 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 05:43:08 np0005481065 systemd[1]: libpod-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope: Deactivated successfully.
Oct 11 05:43:08 np0005481065 inspiring_lumiere[431928]: 167 167
Oct 11 05:43:08 np0005481065 conmon[431928]: conmon 2e956781cf9e473375f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope/container/memory.events
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.852880778 +0000 UTC m=+0.180533995 container died 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.874 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.878 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.885 2 INFO nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 8.38 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.885 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0bbfe8e7f00aa1f6d64a8783ddcc5e6fa727c90ce97881b2165f3f3e3b82bb0a-merged.mount: Deactivated successfully.
Oct 11 05:43:08 np0005481065 podman[431912]: 2025-10-11 09:43:08.90108796 +0000 UTC m=+0.228741147 container remove 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.902 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:43:08 np0005481065 systemd[1]: libpod-conmon-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope: Deactivated successfully.
Oct 11 05:43:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:43:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:08 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:43:08 np0005481065 nova_compute[260935]: 2025-10-11 09:43:08.978 2 INFO nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 9.42 seconds to build instance.#033[00m
Oct 11 05:43:09 np0005481065 nova_compute[260935]: 2025-10-11 09:43:09.003 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:09 np0005481065 podman[431954]: 2025-10-11 09:43:09.16634835 +0000 UTC m=+0.057300508 container create 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:43:09 np0005481065 podman[431954]: 2025-10-11 09:43:09.139655941 +0000 UTC m=+0.030608129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:09 np0005481065 systemd[1]: Started libpod-conmon-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope.
Oct 11 05:43:09 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:09 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:09 np0005481065 podman[431954]: 2025-10-11 09:43:09.297147309 +0000 UTC m=+0.188099477 container init 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:43:09 np0005481065 podman[431954]: 2025-10-11 09:43:09.3053991 +0000 UTC m=+0.196351258 container start 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:09 np0005481065 podman[431954]: 2025-10-11 09:43:09.308999201 +0000 UTC m=+0.199951349 container attach 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 05:43:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.083 2 DEBUG nova.compute.manager [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG oslo_concurrency.lockutils [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG oslo_concurrency.lockutils [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG oslo_concurrency.lockutils [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG nova.compute.manager [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] No waiting events found dispatching network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:43:10 np0005481065 nova_compute[260935]: 2025-10-11 09:43:10.085 2 WARNING nova.compute.manager [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received unexpected event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d for instance with vm_state active and task_state None.#033[00m
Oct 11 05:43:10 np0005481065 vigilant_shamir[431971]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:43:10 np0005481065 vigilant_shamir[431971]: --> relative data size: 1.0
Oct 11 05:43:10 np0005481065 vigilant_shamir[431971]: --> All data devices are unavailable
Oct 11 05:43:10 np0005481065 systemd[1]: libpod-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope: Deactivated successfully.
Oct 11 05:43:10 np0005481065 systemd[1]: libpod-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope: Consumed 1.123s CPU time.
Oct 11 05:43:10 np0005481065 podman[432001]: 2025-10-11 09:43:10.636546615 +0000 UTC m=+0.049948272 container died 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:43:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f-merged.mount: Deactivated successfully.
Oct 11 05:43:10 np0005481065 podman[432000]: 2025-10-11 09:43:10.687457983 +0000 UTC m=+0.089996286 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 05:43:10 np0005481065 podman[432001]: 2025-10-11 09:43:10.718487753 +0000 UTC m=+0.131889370 container remove 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:43:10 np0005481065 systemd[1]: libpod-conmon-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope: Deactivated successfully.
Oct 11 05:43:11 np0005481065 nova_compute[260935]: 2025-10-11 09:43:11.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.650704469 +0000 UTC m=+0.065260141 container create ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:43:11 np0005481065 systemd[1]: Started libpod-conmon-ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000.scope.
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.617785596 +0000 UTC m=+0.032341258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:11 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.760433277 +0000 UTC m=+0.174988999 container init ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.77480471 +0000 UTC m=+0.189360382 container start ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.779880352 +0000 UTC m=+0.194436114 container attach ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:11 np0005481065 hardcore_poitras[432189]: 167 167
Oct 11 05:43:11 np0005481065 systemd[1]: libpod-ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000.scope: Deactivated successfully.
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.785573052 +0000 UTC m=+0.200128724 container died ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:43:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-99c4704daf49d622146bffd0c72f808c2245a73f001cb4f011b0c8139cb6dc79-merged.mount: Deactivated successfully.
Oct 11 05:43:11 np0005481065 podman[432174]: 2025-10-11 09:43:11.83648287 +0000 UTC m=+0.251038542 container remove ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:43:11 np0005481065 systemd[1]: libpod-conmon-ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000.scope: Deactivated successfully.
Oct 11 05:43:12 np0005481065 podman[432213]: 2025-10-11 09:43:12.116753761 +0000 UTC m=+0.074554352 container create 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:43:12 np0005481065 podman[432213]: 2025-10-11 09:43:12.086468192 +0000 UTC m=+0.044268843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:12 np0005481065 systemd[1]: Started libpod-conmon-1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274.scope.
Oct 11 05:43:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:12 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:12 np0005481065 podman[432213]: 2025-10-11 09:43:12.254020621 +0000 UTC m=+0.211821272 container init 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:43:12 np0005481065 podman[432213]: 2025-10-11 09:43:12.268374684 +0000 UTC m=+0.226175275 container start 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:43:12 np0005481065 podman[432213]: 2025-10-11 09:43:12.27324811 +0000 UTC m=+0.231048701 container attach 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:43:12 np0005481065 NetworkManager[44960]: <info>  [1760175792.3549] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Oct 11 05:43:12 np0005481065 nova_compute[260935]: 2025-10-11 09:43:12.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:12 np0005481065 NetworkManager[44960]: <info>  [1760175792.3576] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Oct 11 05:43:12 np0005481065 nova_compute[260935]: 2025-10-11 09:43:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:12Z|01710|binding|INFO|Releasing lport e23ac8d5-df18-43b3-bb12-29f5cd9ce802 from this chassis (sb_readonly=0)
Oct 11 05:43:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:12Z|01711|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:43:12 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:12Z|01712|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:43:12 np0005481065 nova_compute[260935]: 2025-10-11 09:43:12.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]: {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:    "0": [
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:        {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "devices": [
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "/dev/loop3"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            ],
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_name": "ceph_lv0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_size": "21470642176",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "name": "ceph_lv0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "tags": {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cluster_name": "ceph",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.crush_device_class": "",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.encrypted": "0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osd_id": "0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.type": "block",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.vdo": "0"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            },
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "type": "block",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "vg_name": "ceph_vg0"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:        }
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:    ],
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:    "1": [
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:        {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "devices": [
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "/dev/loop4"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            ],
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_name": "ceph_lv1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_size": "21470642176",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "name": "ceph_lv1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "tags": {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cluster_name": "ceph",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.crush_device_class": "",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.encrypted": "0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osd_id": "1",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.type": "block",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.vdo": "0"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            },
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "type": "block",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "vg_name": "ceph_vg1"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:        }
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:    ],
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:    "2": [
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:        {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "devices": [
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "/dev/loop5"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            ],
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_name": "ceph_lv2",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_size": "21470642176",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "name": "ceph_lv2",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "tags": {
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.cluster_name": "ceph",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.crush_device_class": "",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.encrypted": "0",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osd_id": "2",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.type": "block",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:                "ceph.vdo": "0"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            },
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "type": "block",
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:            "vg_name": "ceph_vg2"
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:        }
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]:    ]
Oct 11 05:43:13 np0005481065 admiring_mcnulty[432230]: }
Oct 11 05:43:13 np0005481065 nova_compute[260935]: 2025-10-11 09:43:13.036 2 DEBUG nova.compute.manager [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:13 np0005481065 nova_compute[260935]: 2025-10-11 09:43:13.036 2 DEBUG nova.compute.manager [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing instance network info cache due to event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:43:13 np0005481065 nova_compute[260935]: 2025-10-11 09:43:13.036 2 DEBUG oslo_concurrency.lockutils [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:43:13 np0005481065 nova_compute[260935]: 2025-10-11 09:43:13.037 2 DEBUG oslo_concurrency.lockutils [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:43:13 np0005481065 nova_compute[260935]: 2025-10-11 09:43:13.037 2 DEBUG nova.network.neutron [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:43:13 np0005481065 systemd[1]: libpod-1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274.scope: Deactivated successfully.
Oct 11 05:43:13 np0005481065 podman[432213]: 2025-10-11 09:43:13.060302625 +0000 UTC m=+1.018103186 container died 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:43:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:13.092 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:43:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:13.094 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:43:13 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b-merged.mount: Deactivated successfully.
Oct 11 05:43:13 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:13.095 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:13 np0005481065 nova_compute[260935]: 2025-10-11 09:43:13.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:13 np0005481065 podman[432213]: 2025-10-11 09:43:13.129057654 +0000 UTC m=+1.086858205 container remove 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:43:13 np0005481065 systemd[1]: libpod-conmon-1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274.scope: Deactivated successfully.
Oct 11 05:43:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 05:43:13 np0005481065 podman[432392]: 2025-10-11 09:43:13.912387403 +0000 UTC m=+0.063367638 container create 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:13 np0005481065 systemd[1]: Started libpod-conmon-2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c.scope.
Oct 11 05:43:13 np0005481065 podman[432392]: 2025-10-11 09:43:13.889361167 +0000 UTC m=+0.040341482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:14 np0005481065 podman[432392]: 2025-10-11 09:43:14.015251268 +0000 UTC m=+0.166231553 container init 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:43:14 np0005481065 podman[432392]: 2025-10-11 09:43:14.026385001 +0000 UTC m=+0.177365276 container start 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:43:14 np0005481065 podman[432392]: 2025-10-11 09:43:14.03099767 +0000 UTC m=+0.181977915 container attach 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:43:14 np0005481065 elegant_thompson[432408]: 167 167
Oct 11 05:43:14 np0005481065 systemd[1]: libpod-2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c.scope: Deactivated successfully.
Oct 11 05:43:14 np0005481065 podman[432392]: 2025-10-11 09:43:14.035181117 +0000 UTC m=+0.186161392 container died 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 11 05:43:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5a82d96b790c37660197fd8b534c81a85ea3e1934b77b4d529cbc19a196aa98a-merged.mount: Deactivated successfully.
Oct 11 05:43:14 np0005481065 podman[432392]: 2025-10-11 09:43:14.094642575 +0000 UTC m=+0.245622840 container remove 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:43:14 np0005481065 systemd[1]: libpod-conmon-2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c.scope: Deactivated successfully.
Oct 11 05:43:14 np0005481065 podman[432431]: 2025-10-11 09:43:14.430756072 +0000 UTC m=+0.080299383 container create 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:43:14 np0005481065 podman[432431]: 2025-10-11 09:43:14.399273249 +0000 UTC m=+0.048816570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:43:14 np0005481065 systemd[1]: Started libpod-conmon-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope.
Oct 11 05:43:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:43:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:43:14 np0005481065 podman[432431]: 2025-10-11 09:43:14.552845917 +0000 UTC m=+0.202389258 container init 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:43:14 np0005481065 podman[432431]: 2025-10-11 09:43:14.567846097 +0000 UTC m=+0.217389368 container start 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:14 np0005481065 podman[432431]: 2025-10-11 09:43:14.571596903 +0000 UTC m=+0.221140254 container attach 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.723738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794723810, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 687, "num_deletes": 250, "total_data_size": 828326, "memory_usage": 841472, "flush_reason": "Manual Compaction"}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794730320, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 544444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63324, "largest_seqno": 64010, "table_properties": {"data_size": 541336, "index_size": 1015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8325, "raw_average_key_size": 20, "raw_value_size": 534815, "raw_average_value_size": 1327, "num_data_blocks": 45, "num_entries": 403, "num_filter_entries": 403, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175741, "oldest_key_time": 1760175741, "file_creation_time": 1760175794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 6611 microseconds, and 2374 cpu microseconds.
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.730360) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 544444 bytes OK
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.730374) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732108) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732120) EVENT_LOG_v1 {"time_micros": 1760175794732116, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 824731, prev total WAL file size 824731, number of live WAL files 2.
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353034' seq:72057594037927935, type:22 .. '6D6772737461740032373535' seq:0, type:0; will stop at (end)
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(531KB)], [149(10MB)]
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794732785, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11707815, "oldest_snapshot_seqno": -1}
Oct 11 05:43:14 np0005481065 nova_compute[260935]: 2025-10-11 09:43:14.782 2 DEBUG nova.network.neutron [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updated VIF entry in instance network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:43:14 np0005481065 nova_compute[260935]: 2025-10-11 09:43:14.784 2 DEBUG nova.network.neutron [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8083 keys, 8684197 bytes, temperature: kUnknown
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794793235, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8684197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8634871, "index_size": 28029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 211951, "raw_average_key_size": 26, "raw_value_size": 8495318, "raw_average_value_size": 1051, "num_data_blocks": 1081, "num_entries": 8083, "num_filter_entries": 8083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.793505) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8684197 bytes
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.795649) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.5 rd, 143.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.6 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(37.5) write-amplify(16.0) OK, records in: 8573, records dropped: 490 output_compression: NoCompression
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.795670) EVENT_LOG_v1 {"time_micros": 1760175794795660, "job": 92, "event": "compaction_finished", "compaction_time_micros": 60512, "compaction_time_cpu_micros": 25503, "output_level": 6, "num_output_files": 1, "total_output_size": 8684197, "num_input_records": 8573, "num_output_records": 8083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794796059, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794798241, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:43:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:43:14 np0005481065 nova_compute[260935]: 2025-10-11 09:43:14.810 2 DEBUG oslo_concurrency.lockutils [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:43:15 np0005481065 nova_compute[260935]: 2025-10-11 09:43:15.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:15.239 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:43:15 np0005481065 tender_saha[432448]: {
Oct 11 05:43:15 np0005481065 tender_saha[432448]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "osd_id": 2,
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "type": "bluestore"
Oct 11 05:43:15 np0005481065 tender_saha[432448]:    },
Oct 11 05:43:15 np0005481065 tender_saha[432448]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "osd_id": 0,
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "type": "bluestore"
Oct 11 05:43:15 np0005481065 tender_saha[432448]:    },
Oct 11 05:43:15 np0005481065 tender_saha[432448]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "osd_id": 1,
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:43:15 np0005481065 tender_saha[432448]:        "type": "bluestore"
Oct 11 05:43:15 np0005481065 tender_saha[432448]:    }
Oct 11 05:43:15 np0005481065 tender_saha[432448]: }
Oct 11 05:43:15 np0005481065 systemd[1]: libpod-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope: Deactivated successfully.
Oct 11 05:43:15 np0005481065 systemd[1]: libpod-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope: Consumed 1.134s CPU time.
Oct 11 05:43:15 np0005481065 podman[432481]: 2025-10-11 09:43:15.764689766 +0000 UTC m=+0.043500521 container died 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff-merged.mount: Deactivated successfully.
Oct 11 05:43:15 np0005481065 podman[432481]: 2025-10-11 09:43:15.846326286 +0000 UTC m=+0.125137001 container remove 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:43:15 np0005481065 systemd[1]: libpod-conmon-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope: Deactivated successfully.
Oct 11 05:43:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:43:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:43:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:15 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f0365db6-4b73-4cb9-83a0-a06a18263a26 does not exist
Oct 11 05:43:15 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d8ba6715-907b-4456-982f-218a246e2468 does not exist
Oct 11 05:43:16 np0005481065 nova_compute[260935]: 2025-10-11 09:43:16.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:16 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:43:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 05:43:17 np0005481065 podman[432547]: 2025-10-11 09:43:17.812983025 +0000 UTC m=+0.102621929 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:43:19 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 11 05:43:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 11 05:43:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:20 np0005481065 nova_compute[260935]: 2025-10-11 09:43:20.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:21Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:b5:6b 10.100.0.3
Oct 11 05:43:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:21Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:b5:6b 10.100.0.3
Oct 11 05:43:21 np0005481065 nova_compute[260935]: 2025-10-11 09:43:21.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:21 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 11 05:43:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 05:43:21 np0005481065 podman[432567]: 2025-10-11 09:43:21.81139762 +0000 UTC m=+0.110262123 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 11 05:43:21 np0005481065 podman[432568]: 2025-10-11 09:43:21.846516285 +0000 UTC m=+0.134136363 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:43:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 11 05:43:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:43:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:43:25 np0005481065 nova_compute[260935]: 2025-10-11 09:43:25.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 05:43:26 np0005481065 nova_compute[260935]: 2025-10-11 09:43:26.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:43:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2699164002' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:43:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:43:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2699164002' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:43:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:43:29 np0005481065 nova_compute[260935]: 2025-10-11 09:43:29.367 2 DEBUG nova.compute.manager [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:29 np0005481065 nova_compute[260935]: 2025-10-11 09:43:29.420 2 INFO nova.compute.manager [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] instance snapshotting#033[00m
Oct 11 05:43:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 05:43:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:30 np0005481065 nova_compute[260935]: 2025-10-11 09:43:30.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:30 np0005481065 nova_compute[260935]: 2025-10-11 09:43:30.096 2 INFO nova.virt.libvirt.driver [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Beginning live snapshot process#033[00m
Oct 11 05:43:30 np0005481065 nova_compute[260935]: 2025-10-11 09:43:30.320 2 DEBUG nova.virt.libvirt.imagebackend [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 11 05:43:30 np0005481065 nova_compute[260935]: 2025-10-11 09:43:30.539 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(32aeeb857bab45b484f9af7c8d20feb0) on rbd image(1f03604e-0a25-4f22-807b-11f2e654be90_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:43:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 11 05:43:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 11 05:43:31 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 11 05:43:31 np0005481065 nova_compute[260935]: 2025-10-11 09:43:31.079 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] cloning vms/1f03604e-0a25-4f22-807b-11f2e654be90_disk@32aeeb857bab45b484f9af7c8d20feb0 to images/72225c6c-cee7-4404-9d0d-8915e5edba0a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:43:31 np0005481065 nova_compute[260935]: 2025-10-11 09:43:31.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:31 np0005481065 nova_compute[260935]: 2025-10-11 09:43:31.259 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] flattening images/72225c6c-cee7-4404-9d0d-8915e5edba0a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 05:43:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct 11 05:43:31 np0005481065 nova_compute[260935]: 2025-10-11 09:43:31.846 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] removing snapshot(32aeeb857bab45b484f9af7c8d20feb0) on rbd image(1f03604e-0a25-4f22-807b-11f2e654be90_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 05:43:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 11 05:43:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 11 05:43:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 11 05:43:32 np0005481065 nova_compute[260935]: 2025-10-11 09:43:32.057 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(snap) on rbd image(72225c6c-cee7-4404-9d0d-8915e5edba0a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:43:32 np0005481065 nova_compute[260935]: 2025-10-11 09:43:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 11 05:43:33 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 11 05:43:33 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 11 05:43:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 227 op/s
Oct 11 05:43:34 np0005481065 nova_compute[260935]: 2025-10-11 09:43:34.589 2 INFO nova.virt.libvirt.driver [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Snapshot image upload complete#033[00m
Oct 11 05:43:34 np0005481065 nova_compute[260935]: 2025-10-11 09:43:34.589 2 INFO nova.compute.manager [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 5.16 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 05:43:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:35 np0005481065 nova_compute[260935]: 2025-10-11 09:43:35.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 226 op/s
Oct 11 05:43:36 np0005481065 nova_compute[260935]: 2025-10-11 09:43:36.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.3 MiB/s wr, 255 op/s
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.127 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.127 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.157 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.353 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.354 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.364 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.365 2 INFO nova.compute.claims [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 05:43:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 226 op/s
Oct 11 05:43:39 np0005481065 nova_compute[260935]: 2025-10-11 09:43:39.524 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 11 05:43:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 11 05:43:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 11 05:43:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:43:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983641431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.040 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.049 2 DEBUG nova.compute.provider_tree [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.073 2 DEBUG nova.scheduler.client.report [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.103 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.104 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.174 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.175 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.203 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.234 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.376 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.378 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.379 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Creating image(s)#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.416 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.457 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.492 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.496 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "76189c210ef45b7e7189903b8d72d52e9a464e92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.497 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "76189c210ef45b7e7189903b8d72d52e9a464e92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.764 2 DEBUG nova.virt.libvirt.imagebackend [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/72225c6c-cee7-4404-9d0d-8915e5edba0a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/72225c6c-cee7-4404-9d0d-8915e5edba0a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.816 2 DEBUG nova.virt.libvirt.imagebackend [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/72225c6c-cee7-4404-9d0d-8915e5edba0a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.817 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] cloning images/72225c6c-cee7-4404-9d0d-8915e5edba0a@snap to None/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.933 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "76189c210ef45b7e7189903b8d72d52e9a464e92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:40 np0005481065 nova_compute[260935]: 2025-10-11 09:43:40.989 2 DEBUG nova.policy [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b3a6de4bc924868b4e73b0a23a89090', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd599eac75aee4193a971c2c157a326a8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.117 2 DEBUG nova.objects.instance [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.144 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.144 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Ensure instance console log exists: /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.145 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.146 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.146 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:41 np0005481065 nova_compute[260935]: 2025-10-11 09:43:41.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Oct 11 05:43:41 np0005481065 podman[432961]: 2025-10-11 09:43:41.794866448 +0000 UTC m=+0.084687956 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 05:43:42 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:42Z|01713|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 11 05:43:42 np0005481065 nova_compute[260935]: 2025-10-11 09:43:42.950 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Successfully created port: 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 05:43:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 77 op/s
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.705 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Successfully updated port: 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.720 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.720 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.720 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.834 2 DEBUG nova.compute.manager [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.835 2 DEBUG nova.compute.manager [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing instance network info cache due to event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.836 2 DEBUG oslo_concurrency.lockutils [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:43:43 np0005481065 nova_compute[260935]: 2025-10-11 09:43:43.855 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.644 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.668 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.668 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance network_info: |[{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.669 2 DEBUG oslo_concurrency.lockutils [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.670 2 DEBUG nova.network.neutron [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.675 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start _get_guest_xml network_info=[{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:43:29Z,direct_url=<?>,disk_format='raw',id=72225c6c-cee7-4404-9d0d-8915e5edba0a,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-322378170',owner='d599eac75aee4193a971c2c157a326a8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:43:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '72225c6c-cee7-4404-9d0d-8915e5edba0a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.681 2 WARNING nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.691 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.692 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.707 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.708 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.709 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.709 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:43:29Z,direct_url=<?>,disk_format='raw',id=72225c6c-cee7-4404-9d0d-8915e5edba0a,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-322378170',owner='d599eac75aee4193a971c2c157a326a8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:43:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.710 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.710 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.712 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.712 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.712 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.713 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:43:44 np0005481065 nova_compute[260935]: 2025-10-11 09:43:44.717 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:43:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748329194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.217 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.249 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.254 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 77 op/s
Oct 11 05:43:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:43:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703938094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.702 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.704 2 DEBUG nova.virt.libvirt.vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:43:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-937149156',display_name='tempest-TestSnapshotPattern-server-937149156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-937149156',id=150,image_ref='72225c6c-cee7-4404-9d0d-8915e5edba0a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-v3mdlfxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1f03604e-0a25-4f22-807b-11f2e654be90',image_min_disk='1',image_min_ram='0',image_owner_id='d599eac75aee4193a971c2c157a326a8',image_owner_project_name='tempest-TestSnapshotPattern-1818755654',image_owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member',image_user_id='2b3a6de4bc924868b4e73b0a23a89090',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:40Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=84e65bfe-6918-4c8f-8b2a-3b5262c6e44e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.705 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.706 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.707 2 DEBUG nova.objects.instance [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.730 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <uuid>84e65bfe-6918-4c8f-8b2a-3b5262c6e44e</uuid>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <name>instance-00000096</name>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:name>tempest-TestSnapshotPattern-server-937149156</nova:name>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:43:44</nova:creationTime>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:user uuid="2b3a6de4bc924868b4e73b0a23a89090">tempest-TestSnapshotPattern-1818755654-project-member</nova:user>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:project uuid="d599eac75aee4193a971c2c157a326a8">tempest-TestSnapshotPattern-1818755654</nova:project>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="72225c6c-cee7-4404-9d0d-8915e5edba0a"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <nova:ports>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <nova:port uuid="007ca8a8-8ce1-4661-9b04-c8764bd3b523">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        </nova:port>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </nova:ports>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <entry name="serial">84e65bfe-6918-4c8f-8b2a-3b5262c6e44e</entry>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <entry name="uuid">84e65bfe-6918-4c8f-8b2a-3b5262c6e44e</entry>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <interface type="ethernet">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <mac address="fa:16:3e:c1:4b:38"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <mtu size="1442"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <target dev="tap007ca8a8-8c"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </interface>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/console.log" append="off"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <input type="keyboard" bus="usb"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:43:45 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:43:45 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:43:45 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:43:45 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.731 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Preparing to wait for external event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.731 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.731 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.732 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.733 2 DEBUG nova.virt.libvirt.vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:43:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-937149156',display_name='tempest-TestSnapshotPattern-server-937149156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-937149156',id=150,image_ref='72225c6c-cee7-4404-9d0d-8915e5edba0a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-v3mdlfxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1f03604e-0a25-4f22-807b-11f2e654be90',image_min_disk='1',image_min_ram='0',image_owner_id='d599eac75aee4193a971c2c157a326a8',image_owner_project_name='tempest-TestSnapshotPattern-1818755654',image_owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member',image_user_id='2b3a6de4bc924868b4e73b0a23a89090',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:40Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=84e65bfe-6918-4c8f-8b2a-3b5262c6e44e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.733 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.734 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.734 2 DEBUG os_vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap007ca8a8-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap007ca8a8-8c, col_values=(('external_ids', {'iface-id': '007ca8a8-8ce1-4661-9b04-c8764bd3b523', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:4b:38', 'vm-uuid': '84e65bfe-6918-4c8f-8b2a-3b5262c6e44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:45 np0005481065 NetworkManager[44960]: <info>  [1760175825.7444] manager: (tap007ca8a8-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.757 2 INFO os_vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c')#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.825 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.825 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.825 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No VIF found with MAC fa:16:3e:c1:4b:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.825 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Using config drive#033[00m
Oct 11 05:43:45 np0005481065 nova_compute[260935]: 2025-10-11 09:43:45.846 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.238 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Creating config drive at /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.249 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp48vo7y9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.378 2 DEBUG nova.network.neutron [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updated VIF entry in instance network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.380 2 DEBUG nova.network.neutron [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.404 2 DEBUG oslo_concurrency.lockutils [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.423 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp48vo7y9" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.466 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.471 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.678 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.679 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deleting local config drive /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config because it was imported into RBD.#033[00m
Oct 11 05:43:46 np0005481065 kernel: tap007ca8a8-8c: entered promiscuous mode
Oct 11 05:43:46 np0005481065 NetworkManager[44960]: <info>  [1760175826.7462] manager: (tap007ca8a8-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct 11 05:43:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:46Z|01714|binding|INFO|Claiming lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 for this chassis.
Oct 11 05:43:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:46Z|01715|binding|INFO|007ca8a8-8ce1-4661-9b04-c8764bd3b523: Claiming fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.757 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:4b:38 10.100.0.7'], port_security=['fa:16:3e:c1:4b:38 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84e65bfe-6918-4c8f-8b2a-3b5262c6e44e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=007ca8a8-8ce1-4661-9b04-c8764bd3b523) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.759 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 in datapath 424246b3-55aa-428f-9446-55ed3d626b5d bound to our chassis#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.761 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 424246b3-55aa-428f-9446-55ed3d626b5d#033[00m
Oct 11 05:43:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:46Z|01716|binding|INFO|Setting lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 ovn-installed in OVS
Oct 11 05:43:46 np0005481065 ovn_controller[152945]: 2025-10-11T09:43:46Z|01717|binding|INFO|Setting lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 up in Southbound
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.794 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06df2a90-988d-4ac5-b2b6-7afd37b59b7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:46 np0005481065 systemd-machined[215705]: New machine qemu-176-instance-00000096.
Oct 11 05:43:46 np0005481065 systemd[1]: Started Virtual Machine qemu-176-instance-00000096.
Oct 11 05:43:46 np0005481065 systemd-udevd[433118]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 05:43:46 np0005481065 NetworkManager[44960]: <info>  [1760175826.8551] device (tap007ca8a8-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.852 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[694b253b-8fa6-46d0-a0e1-e2547b7b4102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:46 np0005481065 NetworkManager[44960]: <info>  [1760175826.8582] device (tap007ca8a8-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.859 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ce885850-6fd8-45c4-9f20-af04c95a9d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.909 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3d43a426-42c5-44eb-b862-bb440e893eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.931 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27fcfd95-f363-430b-92bd-fd6361bd333e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 32262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 433128, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e90a6f2-4a20-4440-b453-2b317f3e6010]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763225, 'tstamp': 763225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433130, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763229, 'tstamp': 763229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433130, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:46 np0005481065 nova_compute[260935]: 2025-10-11 09:43:46.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424246b3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.973 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.974 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap424246b3-50, col_values=(('external_ids', {'iface-id': 'e23ac8d5-df18-43b3-bb12-29f5cd9ce802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:43:46 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.975 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.035 2 DEBUG nova.compute.manager [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.036 2 DEBUG oslo_concurrency.lockutils [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.036 2 DEBUG oslo_concurrency.lockutils [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.037 2 DEBUG oslo_concurrency.lockutils [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.038 2 DEBUG nova.compute.manager [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Processing event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 05:43:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 53 op/s
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.814 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.815 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175827.8142624, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.815 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Started (Lifecycle Event)#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.821 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.824 2 INFO nova.virt.libvirt.driver [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance spawned successfully.#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.825 2 INFO nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 7.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.825 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.873 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.873 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175827.8175523, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.874 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Paused (Lifecycle Event)#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.906 2 INFO nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 8.58 seconds to build instance.#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.910 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175827.8197222, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.910 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.931 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.937 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:43:47 np0005481065 nova_compute[260935]: 2025-10-11 09:43:47.943 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:43:48 np0005481065 podman[433173]: 2025-10-11 09:43:48.768797092 +0000 UTC m=+0.071364452 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:49 np0005481065 nova_compute[260935]: 2025-10-11 09:43:49.144 2 DEBUG nova.compute.manager [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:49 np0005481065 nova_compute[260935]: 2025-10-11 09:43:49.145 2 DEBUG oslo_concurrency.lockutils [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:49 np0005481065 nova_compute[260935]: 2025-10-11 09:43:49.146 2 DEBUG oslo_concurrency.lockutils [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:49 np0005481065 nova_compute[260935]: 2025-10-11 09:43:49.146 2 DEBUG oslo_concurrency.lockutils [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:49 np0005481065 nova_compute[260935]: 2025-10-11 09:43:49.147 2 DEBUG nova.compute.manager [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] No waiting events found dispatching network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:43:49 np0005481065 nova_compute[260935]: 2025-10-11 09:43:49.147 2 WARNING nova.compute.manager [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received unexpected event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 for instance with vm_state active and task_state None.#033[00m
Oct 11 05:43:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 46 op/s
Oct 11 05:43:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:50 np0005481065 nova_compute[260935]: 2025-10-11 09:43:50.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:50 np0005481065 nova_compute[260935]: 2025-10-11 09:43:50.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:50 np0005481065 nova_compute[260935]: 2025-10-11 09:43:50.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:43:50 np0005481065 nova_compute[260935]: 2025-10-11 09:43:50.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 14 KiB/s wr, 39 op/s
Oct 11 05:43:52 np0005481065 nova_compute[260935]: 2025-10-11 09:43:52.308 2 DEBUG nova.compute.manager [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:43:52 np0005481065 nova_compute[260935]: 2025-10-11 09:43:52.310 2 DEBUG nova.compute.manager [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing instance network info cache due to event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:43:52 np0005481065 nova_compute[260935]: 2025-10-11 09:43:52.311 2 DEBUG oslo_concurrency.lockutils [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:43:52 np0005481065 nova_compute[260935]: 2025-10-11 09:43:52.311 2 DEBUG oslo_concurrency.lockutils [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:43:52 np0005481065 nova_compute[260935]: 2025-10-11 09:43:52.312 2 DEBUG nova.network.neutron [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:43:52 np0005481065 podman[433194]: 2025-10-11 09:43:52.838751004 +0000 UTC m=+0.129010359 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 11 05:43:52 np0005481065 podman[433195]: 2025-10-11 09:43:52.846108951 +0000 UTC m=+0.131864190 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:43:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 103 op/s
Oct 11 05:43:53 np0005481065 nova_compute[260935]: 2025-10-11 09:43:53.596 2 DEBUG nova.network.neutron [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updated VIF entry in instance network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:43:53 np0005481065 nova_compute[260935]: 2025-10-11 09:43:53.597 2 DEBUG nova.network.neutron [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:43:53 np0005481065 nova_compute[260935]: 2025-10-11 09:43:53.618 2 DEBUG oslo_concurrency.lockutils [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:43:53 np0005481065 nova_compute[260935]: 2025-10-11 09:43:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:53 np0005481065 nova_compute[260935]: 2025-10-11 09:43:53.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:43:54 np0005481065 nova_compute[260935]: 2025-10-11 09:43:54.438 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:43:54 np0005481065 nova_compute[260935]: 2025-10-11 09:43:54.439 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:43:54 np0005481065 nova_compute[260935]: 2025-10-11 09:43:54.439 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:43:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:43:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:43:55
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:43:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.657 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.683 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.684 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.685 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:55 np0005481065 nova_compute[260935]: 2025-10-11 09:43:55.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.728 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.730 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:43:56 np0005481065 nova_compute[260935]: 2025-10-11 09:43:56.730 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:57 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:43:57 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890214284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.249 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.359 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.366 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.366 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.372 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.373 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:43:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.627 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.628 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2429MB free_disk=59.7849006652832GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.629 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.629 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.715 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1f03604e-0a25-4f22-807b-11f2e654be90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:43:57 np0005481065 nova_compute[260935]: 2025-10-11 09:43:57.825 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:43:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:43:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517727397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:43:58 np0005481065 nova_compute[260935]: 2025-10-11 09:43:58.308 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:43:58 np0005481065 nova_compute[260935]: 2025-10-11 09:43:58.318 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:43:58 np0005481065 nova_compute[260935]: 2025-10-11 09:43:58.352 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:43:58 np0005481065 nova_compute[260935]: 2025-10-11 09:43:58.380 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:43:58 np0005481065 nova_compute[260935]: 2025-10-11 09:43:58.380 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:43:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 73 op/s
Oct 11 05:43:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:00 np0005481065 nova_compute[260935]: 2025-10-11 09:44:00.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:00 np0005481065 nova_compute[260935]: 2025-10-11 09:44:00.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:01Z|00204|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.7
Oct 11 05:44:01 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:01Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 05:44:01 np0005481065 nova_compute[260935]: 2025-10-11 09:44:01.381 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:01 np0005481065 nova_compute[260935]: 2025-10-11 09:44:01.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 11 05:44:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 501 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 499 KiB/s wr, 117 op/s
Oct 11 05:44:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:05 np0005481065 nova_compute[260935]: 2025-10-11 09:44:05.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 501 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 52 op/s
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003485025151396757 of space, bias 1.0, pg target 1.0455075454190272 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001424050310933125 of space, bias 1.0, pg target 0.4257910429690044 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:44:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 05:44:05 np0005481065 nova_compute[260935]: 2025-10-11 09:44:05.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:05Z|00206|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.7
Oct 11 05:44:05 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:05Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 05:44:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:06Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 05:44:06 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:06Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 05:44:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 11 05:44:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 55 op/s
Oct 11 05:44:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:10 np0005481065 nova_compute[260935]: 2025-10-11 09:44:10.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:10 np0005481065 nova_compute[260935]: 2025-10-11 09:44:10.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 55 op/s
Oct 11 05:44:11 np0005481065 nova_compute[260935]: 2025-10-11 09:44:11.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:12 np0005481065 podman[433285]: 2025-10-11 09:44:12.797951866 +0000 UTC m=+0.089045209 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:44:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 549 KiB/s wr, 55 op/s
Oct 11 05:44:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:15 np0005481065 nova_compute[260935]: 2025-10-11 09:44:15.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 11 05:44:15 np0005481065 nova_compute[260935]: 2025-10-11 09:44:15.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:16 np0005481065 nova_compute[260935]: 2025-10-11 09:44:16.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:16 np0005481065 nova_compute[260935]: 2025-10-11 09:44:16.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:44:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b385e893-bbb3-4e28-8ace-dbefe2ceb71e does not exist
Oct 11 05:44:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7e0dc3d6-eacf-4458-bbed-4d0dc25fd143 does not exist
Oct 11 05:44:17 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 29df011c-cf12-4ff4-873c-1bcdcf97f111 does not exist
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:44:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:44:17 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.150229163 +0000 UTC m=+0.079205512 container create b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.117065023 +0000 UTC m=+0.046041432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:44:18 np0005481065 systemd[1]: Started libpod-conmon-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope.
Oct 11 05:44:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.27203766 +0000 UTC m=+0.201014039 container init b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.28772901 +0000 UTC m=+0.216705339 container start b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.291612169 +0000 UTC m=+0.220588578 container attach b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:44:18 np0005481065 funny_meninsky[433595]: 167 167
Oct 11 05:44:18 np0005481065 systemd[1]: libpod-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope: Deactivated successfully.
Oct 11 05:44:18 np0005481065 conmon[433595]: conmon b4c3768bf963a05c86a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope/container/memory.events
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.300286532 +0000 UTC m=+0.229262861 container died b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:44:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-08ac0945cea309b18c0a599d8a794f60a3f100ed34ff9fbf993c19b98cea9030-merged.mount: Deactivated successfully.
Oct 11 05:44:18 np0005481065 podman[433578]: 2025-10-11 09:44:18.356445957 +0000 UTC m=+0.285422296 container remove b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 11 05:44:18 np0005481065 systemd[1]: libpod-conmon-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope: Deactivated successfully.
Oct 11 05:44:18 np0005481065 podman[433618]: 2025-10-11 09:44:18.610993297 +0000 UTC m=+0.072902366 container create eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:44:18 np0005481065 systemd[1]: Started libpod-conmon-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope.
Oct 11 05:44:18 np0005481065 podman[433618]: 2025-10-11 09:44:18.581746596 +0000 UTC m=+0.043655715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:44:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:44:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:18 np0005481065 podman[433618]: 2025-10-11 09:44:18.725266372 +0000 UTC m=+0.187175481 container init eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:44:18 np0005481065 podman[433618]: 2025-10-11 09:44:18.740438247 +0000 UTC m=+0.202347316 container start eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:44:18 np0005481065 podman[433618]: 2025-10-11 09:44:18.745234922 +0000 UTC m=+0.207143951 container attach eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:44:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 05:44:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:19 np0005481065 podman[433654]: 2025-10-11 09:44:19.778797841 +0000 UTC m=+0.083291957 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 05:44:19 np0005481065 funny_greider[433635]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:44:19 np0005481065 funny_greider[433635]: --> relative data size: 1.0
Oct 11 05:44:19 np0005481065 funny_greider[433635]: --> All data devices are unavailable
Oct 11 05:44:19 np0005481065 systemd[1]: libpod-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope: Deactivated successfully.
Oct 11 05:44:19 np0005481065 systemd[1]: libpod-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope: Consumed 1.097s CPU time.
Oct 11 05:44:19 np0005481065 podman[433683]: 2025-10-11 09:44:19.968291956 +0000 UTC m=+0.029975112 container died eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:44:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749-merged.mount: Deactivated successfully.
Oct 11 05:44:20 np0005481065 podman[433683]: 2025-10-11 09:44:20.040028378 +0000 UTC m=+0.101711464 container remove eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:44:20 np0005481065 systemd[1]: libpod-conmon-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope: Deactivated successfully.
Oct 11 05:44:20 np0005481065 nova_compute[260935]: 2025-10-11 09:44:20.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:20 np0005481065 nova_compute[260935]: 2025-10-11 09:44:20.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:20 np0005481065 podman[433838]: 2025-10-11 09:44:20.951135672 +0000 UTC m=+0.103003100 container create 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:44:20 np0005481065 podman[433838]: 2025-10-11 09:44:20.893841825 +0000 UTC m=+0.045709343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:44:21 np0005481065 systemd[1]: Started libpod-conmon-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope.
Oct 11 05:44:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:44:21 np0005481065 podman[433838]: 2025-10-11 09:44:21.065798368 +0000 UTC m=+0.217665816 container init 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:44:21 np0005481065 podman[433838]: 2025-10-11 09:44:21.079985906 +0000 UTC m=+0.231853374 container start 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:44:21 np0005481065 sweet_visvesvaraya[433855]: 167 167
Oct 11 05:44:21 np0005481065 systemd[1]: libpod-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope: Deactivated successfully.
Oct 11 05:44:21 np0005481065 podman[433838]: 2025-10-11 09:44:21.089309208 +0000 UTC m=+0.241176666 container attach 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 05:44:21 np0005481065 conmon[433855]: conmon 603c7b9a69b630301901 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope/container/memory.events
Oct 11 05:44:21 np0005481065 podman[433838]: 2025-10-11 09:44:21.091323494 +0000 UTC m=+0.243190932 container died 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:44:21 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4c8aa8c0a6b958d11df2df4a76d972747c16c5d719052a0aa5d15945d1ea6d40-merged.mount: Deactivated successfully.
Oct 11 05:44:21 np0005481065 podman[433838]: 2025-10-11 09:44:21.146412519 +0000 UTC m=+0.298279957 container remove 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 11 05:44:21 np0005481065 systemd[1]: libpod-conmon-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope: Deactivated successfully.
Oct 11 05:44:21 np0005481065 podman[433880]: 2025-10-11 09:44:21.356027678 +0000 UTC m=+0.044609672 container create deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:44:21 np0005481065 systemd[1]: Started libpod-conmon-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope.
Oct 11 05:44:21 np0005481065 podman[433880]: 2025-10-11 09:44:21.33898897 +0000 UTC m=+0.027570984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:44:21 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:44:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:21 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 05:44:21 np0005481065 podman[433880]: 2025-10-11 09:44:21.457523605 +0000 UTC m=+0.146105689 container init deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 05:44:21 np0005481065 podman[433880]: 2025-10-11 09:44:21.467250088 +0000 UTC m=+0.155832112 container start deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:44:21 np0005481065 podman[433880]: 2025-10-11 09:44:21.471725083 +0000 UTC m=+0.160307157 container attach deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:44:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:22Z|01718|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]: {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:    "0": [
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:        {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "devices": [
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "/dev/loop3"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            ],
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_name": "ceph_lv0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_size": "21470642176",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "name": "ceph_lv0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "tags": {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cluster_name": "ceph",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.crush_device_class": "",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.encrypted": "0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osd_id": "0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.type": "block",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.vdo": "0"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            },
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "type": "block",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "vg_name": "ceph_vg0"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:        }
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:    ],
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:    "1": [
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:        {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "devices": [
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "/dev/loop4"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            ],
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_name": "ceph_lv1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_size": "21470642176",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "name": "ceph_lv1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "tags": {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cluster_name": "ceph",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.crush_device_class": "",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.encrypted": "0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osd_id": "1",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.type": "block",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.vdo": "0"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            },
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "type": "block",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "vg_name": "ceph_vg1"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:        }
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:    ],
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:    "2": [
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:        {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "devices": [
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "/dev/loop5"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            ],
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_name": "ceph_lv2",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_size": "21470642176",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "name": "ceph_lv2",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "tags": {
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.cluster_name": "ceph",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.crush_device_class": "",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.encrypted": "0",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osd_id": "2",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.type": "block",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:                "ceph.vdo": "0"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            },
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "type": "block",
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:            "vg_name": "ceph_vg2"
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:        }
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]:    ]
Oct 11 05:44:22 np0005481065 elastic_yalow[433897]: }
Oct 11 05:44:22 np0005481065 systemd[1]: libpod-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope: Deactivated successfully.
Oct 11 05:44:22 np0005481065 conmon[433897]: conmon deba285630076e9f2921 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope/container/memory.events
Oct 11 05:44:22 np0005481065 podman[433880]: 2025-10-11 09:44:22.347722502 +0000 UTC m=+1.036304526 container died deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:44:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615-merged.mount: Deactivated successfully.
Oct 11 05:44:22 np0005481065 podman[433880]: 2025-10-11 09:44:22.412805118 +0000 UTC m=+1.101387112 container remove deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:44:22 np0005481065 systemd[1]: libpod-conmon-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope: Deactivated successfully.
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.217694563 +0000 UTC m=+0.057411471 container create 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:44:23 np0005481065 systemd[1]: Started libpod-conmon-9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0.scope.
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.189955055 +0000 UTC m=+0.029672043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:44:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.307989436 +0000 UTC m=+0.147706434 container init 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.320089425 +0000 UTC m=+0.159806333 container start 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.323782618 +0000 UTC m=+0.163499606 container attach 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:44:23 np0005481065 happy_roentgen[434075]: 167 167
Oct 11 05:44:23 np0005481065 systemd[1]: libpod-9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0.scope: Deactivated successfully.
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.328985124 +0000 UTC m=+0.168702032 container died 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:44:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ec6ca4389312919d5f5aa80856929b6bb2c45e45b9423241066a7b854f22549e-merged.mount: Deactivated successfully.
Oct 11 05:44:23 np0005481065 podman[434057]: 2025-10-11 09:44:23.368955745 +0000 UTC m=+0.208672653 container remove 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:44:23 np0005481065 podman[434071]: 2025-10-11 09:44:23.374532522 +0000 UTC m=+0.108392291 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 05:44:23 np0005481065 systemd[1]: libpod-conmon-9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0.scope: Deactivated successfully.
Oct 11 05:44:23 np0005481065 podman[434074]: 2025-10-11 09:44:23.396390085 +0000 UTC m=+0.128417143 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:44:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 0 op/s
Oct 11 05:44:23 np0005481065 podman[434140]: 2025-10-11 09:44:23.664075043 +0000 UTC m=+0.057022870 container create 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:44:23 np0005481065 systemd[1]: Started libpod-conmon-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope.
Oct 11 05:44:23 np0005481065 podman[434140]: 2025-10-11 09:44:23.64757398 +0000 UTC m=+0.040521837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:44:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:44:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:44:23 np0005481065 podman[434140]: 2025-10-11 09:44:23.766880046 +0000 UTC m=+0.159827893 container init 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 05:44:23 np0005481065 podman[434140]: 2025-10-11 09:44:23.780118428 +0000 UTC m=+0.173066265 container start 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:44:23 np0005481065 podman[434140]: 2025-10-11 09:44:23.784234073 +0000 UTC m=+0.177181900 container attach 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:44:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:44:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]: {
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "osd_id": 2,
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "type": "bluestore"
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:    },
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "osd_id": 0,
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "type": "bluestore"
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:    },
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "osd_id": 1,
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:        "type": "bluestore"
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]:    }
Oct 11 05:44:24 np0005481065 dreamy_wu[434156]: }
Oct 11 05:44:25 np0005481065 systemd[1]: libpod-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope: Deactivated successfully.
Oct 11 05:44:25 np0005481065 podman[434140]: 2025-10-11 09:44:25.017464372 +0000 UTC m=+1.410412239 container died 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:44:25 np0005481065 systemd[1]: libpod-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope: Consumed 1.217s CPU time.
Oct 11 05:44:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf-merged.mount: Deactivated successfully.
Oct 11 05:44:25 np0005481065 nova_compute[260935]: 2025-10-11 09:44:25.059 2 DEBUG nova.compute.manager [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:44:25 np0005481065 nova_compute[260935]: 2025-10-11 09:44:25.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:25 np0005481065 podman[434140]: 2025-10-11 09:44:25.10298493 +0000 UTC m=+1.495932777 container remove 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:44:25 np0005481065 nova_compute[260935]: 2025-10-11 09:44:25.121 2 INFO nova.compute.manager [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] instance snapshotting#033[00m
Oct 11 05:44:25 np0005481065 systemd[1]: libpod-conmon-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope: Deactivated successfully.
Oct 11 05:44:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:44:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:44:25 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:44:25 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:44:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev edf3ebd4-10c5-4493-b22b-18c72411344b does not exist
Oct 11 05:44:25 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a70ba05e-bac3-4525-9b63-faa6577857ca does not exist
Oct 11 05:44:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:44:25 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:44:25 np0005481065 nova_compute[260935]: 2025-10-11 09:44:25.402 2 INFO nova.virt.libvirt.driver [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Beginning live snapshot process#033[00m
Oct 11 05:44:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 05:44:25 np0005481065 nova_compute[260935]: 2025-10-11 09:44:25.670 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(1cfadc1d5df44930ad898c4bbb06e9cc) on rbd image(84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:44:25 np0005481065 nova_compute[260935]: 2025-10-11 09:44:25.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 11 05:44:26 np0005481065 nova_compute[260935]: 2025-10-11 09:44:26.354 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] cloning vms/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk@1cfadc1d5df44930ad898c4bbb06e9cc to images/661f6683-31d1-46ed-ae81-1e493e8ff006 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 11 05:44:26 np0005481065 nova_compute[260935]: 2025-10-11 09:44:26.462 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] flattening images/661f6683-31d1-46ed-ae81-1e493e8ff006 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629907740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:44:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629907740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:44:27 np0005481065 nova_compute[260935]: 2025-10-11 09:44:27.231 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] removing snapshot(1cfadc1d5df44930ad898c4bbb06e9cc) on rbd image(84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 11 05:44:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 11 05:44:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 11 05:44:27 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 11 05:44:27 np0005481065 nova_compute[260935]: 2025-10-11 09:44:27.330 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(snap) on rbd image(661f6683-31d1-46ed-ae81-1e493e8ff006) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 11 05:44:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 577 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 77 op/s
Oct 11 05:44:27 np0005481065 nova_compute[260935]: 2025-10-11 09:44:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 11 05:44:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 11 05:44:28 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 11 05:44:29 np0005481065 nova_compute[260935]: 2025-10-11 09:44:29.409 2 INFO nova.virt.libvirt.driver [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Snapshot image upload complete#033[00m
Oct 11 05:44:29 np0005481065 nova_compute[260935]: 2025-10-11 09:44:29.409 2 INFO nova.compute.manager [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 4.28 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 11 05:44:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 605 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 145 op/s
Oct 11 05:44:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:30 np0005481065 nova_compute[260935]: 2025-10-11 09:44:30.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 11 05:44:30 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 11 05:44:30 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 11 05:44:30 np0005481065 nova_compute[260935]: 2025-10-11 09:44:30.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 605 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 17 MiB/s wr, 168 op/s
Oct 11 05:44:31 np0005481065 nova_compute[260935]: 2025-10-11 09:44:31.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:31 np0005481065 nova_compute[260935]: 2025-10-11 09:44:31.720 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:44:31 np0005481065 nova_compute[260935]: 2025-10-11 09:44:31.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:44:32 np0005481065 nova_compute[260935]: 2025-10-11 09:44:32.723 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.063 2 DEBUG nova.compute.manager [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.064 2 DEBUG nova.compute.manager [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing instance network info cache due to event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.065 2 DEBUG oslo_concurrency.lockutils [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.065 2 DEBUG oslo_concurrency.lockutils [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.066 2 DEBUG nova.network.neutron [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.148 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.149 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.149 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.150 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.150 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.152 2 INFO nova.compute.manager [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Terminating instance#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.154 2 DEBUG nova.compute.manager [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:44:33 np0005481065 kernel: tap007ca8a8-8c (unregistering): left promiscuous mode
Oct 11 05:44:33 np0005481065 NetworkManager[44960]: <info>  [1760175873.4209] device (tap007ca8a8-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:44:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:33Z|01719|binding|INFO|Releasing lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 from this chassis (sb_readonly=0)
Oct 11 05:44:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:33Z|01720|binding|INFO|Setting lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 down in Southbound
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:33 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:33Z|01721|binding|INFO|Removing iface tap007ca8a8-8c ovn-installed in OVS
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.444 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:4b:38 10.100.0.7'], port_security=['fa:16:3e:c1:4b:38 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84e65bfe-6918-4c8f-8b2a-3b5262c6e44e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=007ca8a8-8ce1-4661-9b04-c8764bd3b523) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.446 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 in datapath 424246b3-55aa-428f-9446-55ed3d626b5d unbound from our chassis#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.448 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 424246b3-55aa-428f-9446-55ed3d626b5d#033[00m
Oct 11 05:44:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 508 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 146 op/s
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.479 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdb9598-36dd-4b1c-9400-4481aff46bbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:33 np0005481065 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 11 05:44:33 np0005481065 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000096.scope: Consumed 15.382s CPU time.
Oct 11 05:44:33 np0005481065 systemd-machined[215705]: Machine qemu-176-instance-00000096 terminated.
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.529 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3395237e-deeb-4f42-a01f-90823a94c094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.532 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[44c49996-1fef-4ec5-8462-5daa87e0c994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.577 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3e8e4-11c3-4597-80d0-f13e446ff23b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.601 2 INFO nova.virt.libvirt.driver [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance destroyed successfully.#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.601 2 DEBUG nova.objects.instance [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'resources' on Instance uuid 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.601 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69983b83-e782-4af9-b014-3c3cafb811e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 32262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 434411, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.614 2 DEBUG nova.virt.libvirt.vif [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:43:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-937149156',display_name='tempest-TestSnapshotPattern-server-937149156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-937149156',id=150,image_ref='72225c6c-cee7-4404-9d0d-8915e5edba0a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:43:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-v3mdlfxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1f03604e-0a25-4f22-807b-11f2e654be90',image_min_disk='1',image_min_ram='0',image_owner_id='d599eac75aee4193a971c2c157a326a8',image_owner_project_name='tempest-TestSnapshotPattern-1818755654',image_owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member',image_user_id='2b3a6de4bc924868b4e73b0a23a89090',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:44:29Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=84e65bfe-6918-4c8f-8b2a-3b5262c6e44e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.615 2 DEBUG nova.network.os_vif_util [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.615 2 DEBUG nova.network.os_vif_util [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.616 2 DEBUG os_vif [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap007ca8a8-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.663 2 INFO os_vif [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c')#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.665 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3877039d-b3cc-455d-8ede-cba97691a4e3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763225, 'tstamp': 763225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 434419, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763229, 'tstamp': 763229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 434419, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.666 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424246b3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap424246b3-50, col_values=(('external_ids', {'iface-id': 'e23ac8d5-df18-43b3-bb12-29f5cd9ce802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:33 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.672 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 05:44:33 np0005481065 nova_compute[260935]: 2025-10-11 09:44:33.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 11 05:44:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 11 05:44:34 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 11 05:44:34 np0005481065 nova_compute[260935]: 2025-10-11 09:44:34.809 2 INFO nova.virt.libvirt.driver [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deleting instance files /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_del#033[00m
Oct 11 05:44:34 np0005481065 nova_compute[260935]: 2025-10-11 09:44:34.809 2 INFO nova.virt.libvirt.driver [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deletion of /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_del complete#033[00m
Oct 11 05:44:34 np0005481065 nova_compute[260935]: 2025-10-11 09:44:34.872 2 INFO nova.compute.manager [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:44:34 np0005481065 nova_compute[260935]: 2025-10-11 09:44:34.873 2 DEBUG oslo.service.loopingcall [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:44:34 np0005481065 nova_compute[260935]: 2025-10-11 09:44:34.873 2 DEBUG nova.compute.manager [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:44:34 np0005481065 nova_compute[260935]: 2025-10-11 09:44:34.873 2 DEBUG nova.network.neutron [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.162 2 DEBUG nova.network.neutron [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updated VIF entry in instance network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.163 2 DEBUG nova.network.neutron [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.183 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-unplugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.184 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.185 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.185 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.186 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] No waiting events found dispatching network-vif-unplugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.186 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-unplugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.186 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] No waiting events found dispatching network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.188 2 WARNING nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received unexpected event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:44:35 np0005481065 nova_compute[260935]: 2025-10-11 09:44:35.190 2 DEBUG oslo_concurrency.lockutils [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:44:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 508 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 321 KiB/s wr, 90 op/s
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.179 2 DEBUG nova.network.neutron [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.203 2 INFO nova.compute.manager [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 1.33 seconds to deallocate network for instance.#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.258 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.258 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.417 2 DEBUG oslo_concurrency.processutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:44:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:44:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571979900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.923 2 DEBUG oslo_concurrency.processutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.932 2 DEBUG nova.compute.provider_tree [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.961 2 DEBUG nova.scheduler.client.report [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:44:36 np0005481065 nova_compute[260935]: 2025-10-11 09:44:36.988 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:37 np0005481065 nova_compute[260935]: 2025-10-11 09:44:37.027 2 INFO nova.scheduler.client.report [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Deleted allocations for instance 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e#033[00m
Oct 11 05:44:37 np0005481065 nova_compute[260935]: 2025-10-11 09:44:37.119 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:37 np0005481065 nova_compute[260935]: 2025-10-11 09:44:37.328 2 DEBUG nova.compute.manager [req-f2358d54-0136-4fb5-b3d2-145b305533ae req-3fb464ba-2809-4640-a0f8-0dbc3040beaf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-deleted-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 491 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 289 KiB/s wr, 107 op/s
Oct 11 05:44:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 11 05:44:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 11 05:44:37 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 11 05:44:38 np0005481065 nova_compute[260935]: 2025-10-11 09:44:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 290 KiB/s wr, 130 op/s
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.507 2 DEBUG nova.compute.manager [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.507 2 DEBUG nova.compute.manager [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing instance network info cache due to event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.508 2 DEBUG oslo_concurrency.lockutils [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.509 2 DEBUG oslo_concurrency.lockutils [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.509 2 DEBUG nova.network.neutron [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.570 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.571 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.572 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.572 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.573 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.575 2 INFO nova.compute.manager [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Terminating instance#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.577 2 DEBUG nova.compute.manager [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.591 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.593 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:44:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:39 np0005481065 kernel: tap0079b745-ef (unregistering): left promiscuous mode
Oct 11 05:44:39 np0005481065 NetworkManager[44960]: <info>  [1760175879.9166] device (tap0079b745-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:44:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:39Z|01722|binding|INFO|Releasing lport 0079b745-efa5-4641-a180-fee85fb30a5d from this chassis (sb_readonly=0)
Oct 11 05:44:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:39Z|01723|binding|INFO|Setting lport 0079b745-efa5-4641-a180-fee85fb30a5d down in Southbound
Oct 11 05:44:39 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:39Z|01724|binding|INFO|Removing iface tap0079b745-ef ovn-installed in OVS
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.931 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:b5:6b 10.100.0.3'], port_security=['fa:16:3e:88:b5:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f03604e-0a25-4f22-807b-11f2e654be90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0079b745-efa5-4641-a180-fee85fb30a5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.933 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0079b745-efa5-4641-a180-fee85fb30a5d in datapath 424246b3-55aa-428f-9446-55ed3d626b5d unbound from our chassis#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.935 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 424246b3-55aa-428f-9446-55ed3d626b5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.935 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14399662-6246-4ff2-a2d3-71241878046e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:39 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.936 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d namespace which is not needed anymore#033[00m
Oct 11 05:44:39 np0005481065 nova_compute[260935]: 2025-10-11 09:44:39.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:39 np0005481065 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 11 05:44:39 np0005481065 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000095.scope: Consumed 17.652s CPU time.
Oct 11 05:44:39 np0005481065 systemd-machined[215705]: Machine qemu-175-instance-00000095 terminated.
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.035 2 INFO nova.virt.libvirt.driver [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance destroyed successfully.#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.035 2 DEBUG nova.objects.instance [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'resources' on Instance uuid 1f03604e-0a25-4f22-807b-11f2e654be90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.054 2 DEBUG nova.virt.libvirt.vif [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1990672178',display_name='tempest-TestSnapshotPattern-server-1990672178',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1990672178',id=149,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:43:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-eoa49vly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:43:34Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=1f03604e-0a25-4f22-807b-11f2e654be90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.055 2 DEBUG nova.network.os_vif_util [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.055 2 DEBUG nova.network.os_vif_util [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.056 2 DEBUG os_vif [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0079b745-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.102 2 INFO os_vif [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef')#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : haproxy version is 2.8.14-c23fe91
Oct 11 05:44:40 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : path to executable is /usr/sbin/haproxy
Oct 11 05:44:40 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [WARNING]  (431828) : Exiting Master process...
Oct 11 05:44:40 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [ALERT]    (431828) : Current worker (431836) exited with code 143 (Terminated)
Oct 11 05:44:40 np0005481065 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [WARNING]  (431828) : All workers exited. Exiting... (0)
Oct 11 05:44:40 np0005481065 systemd[1]: libpod-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157.scope: Deactivated successfully.
Oct 11 05:44:40 np0005481065 podman[434493]: 2025-10-11 09:44:40.289103803 +0000 UTC m=+0.187148111 container died f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 05:44:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157-userdata-shm.mount: Deactivated successfully.
Oct 11 05:44:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a86c02cc2cbc8761b2102a73fcb1e50eef8c8d1227adb1efe533f1256a4c9e7f-merged.mount: Deactivated successfully.
Oct 11 05:44:40 np0005481065 podman[434493]: 2025-10-11 09:44:40.642034152 +0000 UTC m=+0.540078470 container cleanup f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:44:40 np0005481065 systemd[1]: libpod-conmon-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157.scope: Deactivated successfully.
Oct 11 05:44:40 np0005481065 podman[434542]: 2025-10-11 09:44:40.765039312 +0000 UTC m=+0.082127665 container remove f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.775 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[880fd4d5-b801-4564-9e01-69c297662d3f]: (4, ('Sat Oct 11 09:44:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d (f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157)\nf3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157\nSat Oct 11 09:44:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d (f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157)\nf3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a35a966e-7b17-4929-b4df-f5afca654491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.781 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 kernel: tap424246b3-50: left promiscuous mode
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.808 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5f5379-ed00-475d-ab84-12402c204340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:40 np0005481065 nova_compute[260935]: 2025-10-11 09:44:40.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.834 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e205d993-2aef-405e-9cb7-f51152d1fd4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.836 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cb009d-8a3b-47bf-bdbf-2de5cf3e9479]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.863 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d04ffa96-74c0-4ab8-b04c-b5f08a87e862]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763201, 'reachable_time': 27525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 434558, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:40 np0005481065 systemd[1]: run-netns-ovnmeta\x2d424246b3\x2d55aa\x2d428f\x2d9446\x2d55ed3d626b5d.mount: Deactivated successfully.
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.872 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:44:40 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.872 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[023f4c90-69a3-424a-b95e-58791ff68222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.036 2 INFO nova.virt.libvirt.driver [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deleting instance files /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90_del#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.037 2 INFO nova.virt.libvirt.driver [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deletion of /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90_del complete#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.144 2 DEBUG nova.network.neutron [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updated VIF entry in instance network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.145 2 DEBUG nova.network.neutron [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.147 2 INFO nova.compute.manager [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.148 2 DEBUG oslo.service.loopingcall [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.148 2 DEBUG nova.compute.manager [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.149 2 DEBUG nova.network.neutron [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.172 2 DEBUG oslo_concurrency.lockutils [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:44:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.6 KiB/s wr, 50 op/s
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.607 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-unplugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.608 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.608 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.609 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.609 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] No waiting events found dispatching network-vif-unplugged-0079b745-efa5-4641-a180-fee85fb30a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.609 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-unplugged-0079b745-efa5-4641-a180-fee85fb30a5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.610 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.610 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.610 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.611 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.611 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] No waiting events found dispatching network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.611 2 WARNING nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received unexpected event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d for instance with vm_state active and task_state deleting.#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.873 2 DEBUG nova.network.neutron [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.900 2 INFO nova.compute.manager [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 0.75 seconds to deallocate network for instance.#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.963 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:41 np0005481065 nova_compute[260935]: 2025-10-11 09:44:41.964 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.105 2 DEBUG oslo_concurrency.processutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:44:42 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:44:42 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107757616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.577 2 DEBUG oslo_concurrency.processutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.587 2 DEBUG nova.compute.provider_tree [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.628 2 DEBUG nova.scheduler.client.report [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.661 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.711 2 INFO nova.scheduler.client.report [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Deleted allocations for instance 1f03604e-0a25-4f22-807b-11f2e654be90#033[00m
Oct 11 05:44:42 np0005481065 nova_compute[260935]: 2025-10-11 09:44:42.787 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 5.1 KiB/s wr, 107 op/s
Oct 11 05:44:43 np0005481065 nova_compute[260935]: 2025-10-11 09:44:43.722 2 DEBUG nova.compute.manager [req-48c59ed9-cb11-40ee-85d8-2a9673f6a9e4 req-1a311ce8-c300-46cf-81ee-c73f1c333969 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-deleted-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:44:43 np0005481065 podman[434581]: 2025-10-11 09:44:43.807774832 +0000 UTC m=+0.097400293 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:44:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:44:44.596 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:44:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 11 05:44:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 11 05:44:44 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 11 05:44:45 np0005481065 nova_compute[260935]: 2025-10-11 09:44:45.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:45 np0005481065 nova_compute[260935]: 2025-10-11 09:44:45.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.7 KiB/s wr, 90 op/s
Oct 11 05:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:47Z|01725|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:47Z|01726|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:44:47 np0005481065 nova_compute[260935]: 2025-10-11 09:44:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:47Z|01727|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 05:44:47 np0005481065 ovn_controller[152945]: 2025-10-11T09:44:47Z|01728|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 05:44:47 np0005481065 nova_compute[260935]: 2025-10-11 09:44:47.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Oct 11 05:44:48 np0005481065 nova_compute[260935]: 2025-10-11 09:44:48.600 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175873.5994537, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:44:48 np0005481065 nova_compute[260935]: 2025-10-11 09:44:48.601 2 INFO nova.compute.manager [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:44:48 np0005481065 nova_compute[260935]: 2025-10-11 09:44:48.631 2 DEBUG nova.compute.manager [None req-abe8ca32-68d9-44d6-a57f-633c053de132 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:44:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 11 05:44:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:50 np0005481065 nova_compute[260935]: 2025-10-11 09:44:50.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:50 np0005481065 nova_compute[260935]: 2025-10-11 09:44:50.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:50 np0005481065 podman[434604]: 2025-10-11 09:44:50.558675749 +0000 UTC m=+0.089225754 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:44:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 11 05:44:51 np0005481065 nova_compute[260935]: 2025-10-11 09:44:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:51 np0005481065 nova_compute[260935]: 2025-10-11 09:44:51.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:44:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:44:53 np0005481065 podman[434625]: 2025-10-11 09:44:53.803299484 +0000 UTC m=+0.099777028 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd)
Oct 11 05:44:53 np0005481065 podman[434626]: 2025-10-11 09:44:53.914212285 +0000 UTC m=+0.205204255 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Oct 11 05:44:54 np0005481065 nova_compute[260935]: 2025-10-11 09:44:54.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:44:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:44:55
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', '.rgw.root', 'volumes', 'images', 'default.rgw.log', 'backups', 'default.rgw.control']
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.034 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175880.0333538, 1f03604e-0a25-4f22-807b-11f2e654be90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.035 2 INFO nova.compute.manager [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.062 2 DEBUG nova.compute.manager [None req-94cac424-3afb-46e6-ad13-620ae14841c7 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:44:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:44:55 np0005481065 nova_compute[260935]: 2025-10-11 09:44:55.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:44:56 np0005481065 nova_compute[260935]: 2025-10-11 09:44:56.464 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:44:56 np0005481065 nova_compute[260935]: 2025-10-11 09:44:56.465 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:44:56 np0005481065 nova_compute[260935]: 2025-10-11 09:44:56.466 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:44:56 np0005481065 nova_compute[260935]: 2025-10-11 09:44:56.467 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:44:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.188 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.216 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.216 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.217 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.217 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.237 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.237 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.237 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.238 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.238 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:44:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:44:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976988539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.660 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.768 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.769 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.770 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.776 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.777 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.783 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:58 np0005481065 nova_compute[260935]: 2025-10-11 09:44:58.783 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.092 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.094 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2735MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.095 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.095 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.200 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.200 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.201 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.201 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.202 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.231 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.251 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.251 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.275 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: 9222eec4-0f6c-4910-b4e6-f067acc46485 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.276 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 162 to 163 during operation: update_aggregates _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.305 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.416 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:44:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:44:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:44:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:44:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714427866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.892 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.900 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.919 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.952 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:44:59 np0005481065 nova_compute[260935]: 2025-10-11 09:44:59.953 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:45:00 np0005481065 nova_compute[260935]: 2025-10-11 09:45:00.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:45:00 np0005481065 nova_compute[260935]: 2025-10-11 09:45:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:45:00 np0005481065 nova_compute[260935]: 2025-10-11 09:45:00.949 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:45:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:01 np0005481065 nova_compute[260935]: 2025-10-11 09:45:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.395 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "dba9c194-a688-42ad-8b48-789e10f168fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.396 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.433 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.498 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.499 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.508 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.509 2 INFO nova.compute.claims [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Claim successful on node compute-0.ctlplane.example.com
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.684 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:45:02 np0005481065 nova_compute[260935]: 2025-10-11 09:45:02.737 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:45:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:45:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614766696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.147 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.155 2 DEBUG nova.compute.provider_tree [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.186 2 DEBUG nova.scheduler.client.report [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.223 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.224 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.307 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.308 2 DEBUG nova.network.neutron [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.338 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.365 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.458 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.459 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.460 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Creating image(s)
Oct 11 05:45:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.483 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.507 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.533 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.538 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.641 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.643 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.644 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.645 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.681 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.686 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dba9c194-a688-42ad-8b48-789e10f168fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.997 2 DEBUG nova.network.neutron [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 05:45:03 np0005481065 nova_compute[260935]: 2025-10-11 09:45:03.998 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.052 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dba9c194-a688-42ad-8b48-789e10f168fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.145 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] resizing rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.305 2 DEBUG nova.objects.instance [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lazy-loading 'migration_context' on Instance uuid dba9c194-a688-42ad-8b48-789e10f168fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.325 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.325 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Ensure instance console log exists: /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.326 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.327 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.327 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.330 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.337 2 WARNING nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.351 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.352 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.354 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.355 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.355 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.355 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.360 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:45:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:45:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511972821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.805 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.829 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:45:04 np0005481065 nova_compute[260935]: 2025-10-11 09:45:04.833 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 05:45:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502829476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.322 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.325 2 DEBUG nova.objects.instance [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dba9c194-a688-42ad-8b48-789e10f168fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.351 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <uuid>dba9c194-a688-42ad-8b48-789e10f168fd</uuid>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <name>instance-00000097</name>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <memory>131072</memory>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <vcpu>1</vcpu>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <metadata>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1352653575</nova:name>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:creationTime>2025-10-11 09:45:04</nova:creationTime>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:flavor name="m1.nano">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:memory>128</nova:memory>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:disk>1</nova:disk>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:swap>0</nova:swap>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:vcpus>1</nova:vcpus>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      </nova:flavor>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:owner>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:user uuid="bee3284f19da4fb389bd50d7ec71d51c">tempest-AggregatesAdminTestJSON-938594948-project-member</nova:user>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <nova:project uuid="9b754c6818c940e8b5e1beeba6733fe1">tempest-AggregatesAdminTestJSON-938594948</nova:project>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      </nova:owner>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <nova:ports/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </nova:instance>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </metadata>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <sysinfo type="smbios">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <system>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <entry name="manufacturer">RDO</entry>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <entry name="product">OpenStack Compute</entry>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <entry name="serial">dba9c194-a688-42ad-8b48-789e10f168fd</entry>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <entry name="uuid">dba9c194-a688-42ad-8b48-789e10f168fd</entry>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <entry name="family">Virtual Machine</entry>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </system>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </sysinfo>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <os>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <boot dev="hd"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <smbios mode="sysinfo"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </os>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <features>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <acpi/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <apic/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <vmcoreinfo/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </features>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <clock offset="utc">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <timer name="hpet" present="no"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </clock>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <cpu mode="host-model" match="exact">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </cpu>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  <devices>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <disk type="network" device="disk">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dba9c194-a688-42ad-8b48-789e10f168fd_disk">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <target dev="vda" bus="virtio"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <disk type="network" device="cdrom">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <driver type="raw" cache="none"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <source protocol="rbd" name="vms/dba9c194-a688-42ad-8b48-789e10f168fd_disk.config">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <host name="192.168.122.100" port="6789"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      </source>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <auth username="openstack">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:        <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      </auth>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <target dev="sda" bus="sata"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </disk>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <serial type="pty">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <log file="/var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/console.log" append="off"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </serial>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <video>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <model type="virtio"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </video>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <input type="tablet" bus="usb"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <rng model="virtio">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <backend model="random">/dev/urandom</backend>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </rng>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <controller type="usb" index="0"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    <memballoon model="virtio">
Oct 11 05:45:05 np0005481065 nova_compute[260935]:      <stats period="10"/>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:    </memballoon>
Oct 11 05:45:05 np0005481065 nova_compute[260935]:  </devices>
Oct 11 05:45:05 np0005481065 nova_compute[260935]: </domain>
Oct 11 05:45:05 np0005481065 nova_compute[260935]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.409 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.410 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.410 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Using config drive#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.434 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:45:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.879 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Creating config drive at /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config#033[00m
Oct 11 05:45:05 np0005481065 nova_compute[260935]: 2025-10-11 09:45:05.888 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq24gx5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:45:06 np0005481065 nova_compute[260935]: 2025-10-11 09:45:06.065 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq24gx5u" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:45:06 np0005481065 nova_compute[260935]: 2025-10-11 09:45:06.101 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 05:45:06 np0005481065 nova_compute[260935]: 2025-10-11 09:45:06.105 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config dba9c194-a688-42ad-8b48-789e10f168fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:45:06 np0005481065 nova_compute[260935]: 2025-10-11 09:45:06.337 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config dba9c194-a688-42ad-8b48-789e10f168fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:45:06 np0005481065 nova_compute[260935]: 2025-10-11 09:45:06.339 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deleting local config drive /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config because it was imported into RBD.#033[00m
Oct 11 05:45:06 np0005481065 systemd-machined[215705]: New machine qemu-177-instance-00000097.
Oct 11 05:45:06 np0005481065 systemd[1]: Started Virtual Machine qemu-177-instance-00000097.
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.325 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175907.3255875, dba9c194-a688-42ad-8b48-789e10f168fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] VM Resumed (Lifecycle Event)#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.330 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.330 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.335 2 INFO nova.virt.libvirt.driver [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance spawned successfully.#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.335 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.368 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.375 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.376 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.376 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.377 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.377 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.377 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.381 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.430 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.431 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175907.3263695, dba9c194-a688-42ad-8b48-789e10f168fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.431 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] VM Started (Lifecycle Event)#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.456 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.459 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.466 2 INFO nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 4.01 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.466 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:45:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 366 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 MiB/s wr, 32 op/s
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.474 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.516 2 INFO nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 5.04 seconds to build instance.#033[00m
Oct 11 05:45:07 np0005481065 nova_compute[260935]: 2025-10-11 09:45:07.530 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:45:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.972 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "dba9c194-a688-42ad-8b48-789e10f168fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.973 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.973 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "dba9c194-a688-42ad-8b48-789e10f168fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.973 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.974 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.976 2 INFO nova.compute.manager [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Terminating instance#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.977 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "refresh_cache-dba9c194-a688-42ad-8b48-789e10f168fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.978 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquired lock "refresh_cache-dba9c194-a688-42ad-8b48-789e10f168fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:45:09 np0005481065 nova_compute[260935]: 2025-10-11 09:45:09.978 2 DEBUG nova.network.neutron [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 05:45:10 np0005481065 nova_compute[260935]: 2025-10-11 09:45:10.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:10 np0005481065 nova_compute[260935]: 2025-10-11 09:45:10.171 2 DEBUG nova.network.neutron [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.032 2 DEBUG nova.network.neutron [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.053 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Releasing lock "refresh_cache-dba9c194-a688-42ad-8b48-789e10f168fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.054 2 DEBUG nova.compute.manager [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 05:45:11 np0005481065 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 11 05:45:11 np0005481065 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000097.scope: Consumed 4.652s CPU time.
Oct 11 05:45:11 np0005481065 systemd-machined[215705]: Machine qemu-177-instance-00000097 terminated.
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.284 2 INFO nova.virt.libvirt.driver [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance destroyed successfully.#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.285 2 DEBUG nova.objects.instance [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lazy-loading 'resources' on Instance uuid dba9c194-a688-42ad-8b48-789e10f168fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:45:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.713 2 INFO nova.virt.libvirt.driver [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deleting instance files /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd_del#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.715 2 INFO nova.virt.libvirt.driver [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deletion of /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd_del complete#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.772 2 INFO nova.compute.manager [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.773 2 DEBUG oslo.service.loopingcall [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.774 2 DEBUG nova.compute.manager [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 05:45:11 np0005481065 nova_compute[260935]: 2025-10-11 09:45:11.774 2 DEBUG nova.network.neutron [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 05:45:12 np0005481065 nova_compute[260935]: 2025-10-11 09:45:12.885 2 DEBUG nova.network.neutron [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:45:12 np0005481065 nova_compute[260935]: 2025-10-11 09:45:12.906 2 DEBUG nova.network.neutron [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:45:12 np0005481065 nova_compute[260935]: 2025-10-11 09:45:12.923 2 INFO nova.compute.manager [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct 11 05:45:12 np0005481065 nova_compute[260935]: 2025-10-11 09:45:12.988 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:12 np0005481065 nova_compute[260935]: 2025-10-11 09:45:12.989 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.155 2 DEBUG oslo_concurrency.processutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:45:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 05:45:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:45:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430155061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.641 2 DEBUG oslo_concurrency.processutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.647 2 DEBUG nova.compute.provider_tree [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.668 2 DEBUG nova.scheduler.client.report [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.703 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.757 2 INFO nova.scheduler.client.report [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Deleted allocations for instance dba9c194-a688-42ad-8b48-789e10f168fd#033[00m
Oct 11 05:45:13 np0005481065 nova_compute[260935]: 2025-10-11 09:45:13.827 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:14 np0005481065 podman[435128]: 2025-10-11 09:45:14.798267942 +0000 UTC m=+0.096622741 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:45:15 np0005481065 nova_compute[260935]: 2025-10-11 09:45:15.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:45:15.241 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:45:15.241 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:45:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 05:45:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 05:45:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 513 KiB/s wr, 94 op/s
Oct 11 05:45:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:20 np0005481065 nova_compute[260935]: 2025-10-11 09:45:20.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:20 np0005481065 nova_compute[260935]: 2025-10-11 09:45:20.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:20 np0005481065 nova_compute[260935]: 2025-10-11 09:45:20.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:20 np0005481065 nova_compute[260935]: 2025-10-11 09:45:20.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:20 np0005481065 nova_compute[260935]: 2025-10-11 09:45:20.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:20 np0005481065 nova_compute[260935]: 2025-10-11 09:45:20.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:20 np0005481065 podman[435149]: 2025-10-11 09:45:20.798107881 +0000 UTC m=+0.090429807 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 05:45:21 np0005481065 ovn_controller[152945]: 2025-10-11T09:45:21Z|01729|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 05:45:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct 11 05:45:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct 11 05:45:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:24 np0005481065 podman[435170]: 2025-10-11 09:45:24.799416037 +0000 UTC m=+0.096435606 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:45:24 np0005481065 podman[435171]: 2025-10-11 09:45:24.859318367 +0000 UTC m=+0.149867094 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:45:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.709 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.710 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.710 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.711 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.711 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.711 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.712 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.712 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.713 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.769 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.775 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:25 np0005481065 nova_compute[260935]: 2025-10-11 09:45:25.778 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:26 np0005481065 nova_compute[260935]: 2025-10-11 09:45:26.281 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175911.2804635, dba9c194-a688-42ad-8b48-789e10f168fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 05:45:26 np0005481065 nova_compute[260935]: 2025-10-11 09:45:26.282 2 INFO nova.compute.manager [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] VM Stopped (Lifecycle Event)#033[00m
Oct 11 05:45:26 np0005481065 nova_compute[260935]: 2025-10-11 09:45:26.303 2 DEBUG nova.compute.manager [None req-f211f4ea-1e7b-436b-bcc0-a00269ce03dd - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1176825860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:45:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1176825860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5fe9012b-8e34-4e37-a0a7-de35e1a225ab does not exist
Oct 11 05:45:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 75d961f7-5425-41e8-86ec-dcd742ee631e does not exist
Oct 11 05:45:27 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev eb31a03d-d951-4863-88b2-a7617d3a355c does not exist
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:45:27 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:45:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.120472824 +0000 UTC m=+0.079696796 container create 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.070234655 +0000 UTC m=+0.029458687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:45:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:45:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:28 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:45:28 np0005481065 systemd[1]: Started libpod-conmon-1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03.scope.
Oct 11 05:45:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.352978026 +0000 UTC m=+0.312201988 container init 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.36704924 +0000 UTC m=+0.326273202 container start 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:45:28 np0005481065 keen_goldberg[435630]: 167 167
Oct 11 05:45:28 np0005481065 systemd[1]: libpod-1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03.scope: Deactivated successfully.
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.381501916 +0000 UTC m=+0.340725878 container attach 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.381867956 +0000 UTC m=+0.341091948 container died 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:45:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cc56555f16783b858c05fa30e1936e197ef1f9dad9abb02c971fc6cf09b37574-merged.mount: Deactivated successfully.
Oct 11 05:45:28 np0005481065 podman[435614]: 2025-10-11 09:45:28.541413391 +0000 UTC m=+0.500637363 container remove 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:45:28 np0005481065 systemd[1]: libpod-conmon-1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03.scope: Deactivated successfully.
Oct 11 05:45:28 np0005481065 podman[435657]: 2025-10-11 09:45:28.829306706 +0000 UTC m=+0.076526678 container create 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:45:28 np0005481065 systemd[1]: Started libpod-conmon-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope.
Oct 11 05:45:28 np0005481065 podman[435657]: 2025-10-11 09:45:28.796399313 +0000 UTC m=+0.043619305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:45:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:45:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:28 np0005481065 podman[435657]: 2025-10-11 09:45:28.944412544 +0000 UTC m=+0.191632566 container init 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:45:28 np0005481065 podman[435657]: 2025-10-11 09:45:28.957875102 +0000 UTC m=+0.205095114 container start 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:45:28 np0005481065 podman[435657]: 2025-10-11 09:45:28.977812011 +0000 UTC m=+0.225032033 container attach 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:45:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:30 np0005481065 laughing_chaplygin[435673]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:45:30 np0005481065 laughing_chaplygin[435673]: --> relative data size: 1.0
Oct 11 05:45:30 np0005481065 laughing_chaplygin[435673]: --> All data devices are unavailable
Oct 11 05:45:30 np0005481065 systemd[1]: libpod-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope: Deactivated successfully.
Oct 11 05:45:30 np0005481065 systemd[1]: libpod-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope: Consumed 1.103s CPU time.
Oct 11 05:45:30 np0005481065 podman[435657]: 2025-10-11 09:45:30.104191442 +0000 UTC m=+1.351411454 container died 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:45:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7-merged.mount: Deactivated successfully.
Oct 11 05:45:30 np0005481065 podman[435657]: 2025-10-11 09:45:30.184775993 +0000 UTC m=+1.431996005 container remove 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:45:30 np0005481065 systemd[1]: libpod-conmon-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope: Deactivated successfully.
Oct 11 05:45:30 np0005481065 nova_compute[260935]: 2025-10-11 09:45:30.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:31.016771578 +0000 UTC m=+0.056536777 container create 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:45:31 np0005481065 systemd[1]: Started libpod-conmon-40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64.scope.
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:30.989982727 +0000 UTC m=+0.029747986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:45:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:31.11060243 +0000 UTC m=+0.150367679 container init 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:31.120800756 +0000 UTC m=+0.160565925 container start 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:45:31 np0005481065 wonderful_nightingale[435869]: 167 167
Oct 11 05:45:31 np0005481065 systemd[1]: libpod-40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64.scope: Deactivated successfully.
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:31.127977897 +0000 UTC m=+0.167743146 container attach 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:31.128379839 +0000 UTC m=+0.168145048 container died 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:45:31 np0005481065 systemd[1]: var-lib-containers-storage-overlay-7e85a5f805cc43300e1fac85dd1645fe9d11d258e82d3af7869d649d91d8382c-merged.mount: Deactivated successfully.
Oct 11 05:45:31 np0005481065 podman[435853]: 2025-10-11 09:45:31.175287694 +0000 UTC m=+0.215052903 container remove 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:45:31 np0005481065 systemd[1]: libpod-conmon-40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64.scope: Deactivated successfully.
Oct 11 05:45:31 np0005481065 podman[435893]: 2025-10-11 09:45:31.430217224 +0000 UTC m=+0.064640374 container create 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:45:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:31 np0005481065 systemd[1]: Started libpod-conmon-80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578.scope.
Oct 11 05:45:31 np0005481065 podman[435893]: 2025-10-11 09:45:31.405527332 +0000 UTC m=+0.039950522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:45:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:45:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:31 np0005481065 podman[435893]: 2025-10-11 09:45:31.562550076 +0000 UTC m=+0.196973276 container init 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 05:45:31 np0005481065 podman[435893]: 2025-10-11 09:45:31.57658241 +0000 UTC m=+0.211005580 container start 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:45:31 np0005481065 podman[435893]: 2025-10-11 09:45:31.580420887 +0000 UTC m=+0.214844107 container attach 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]: {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:    "0": [
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:        {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "devices": [
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "/dev/loop3"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            ],
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_name": "ceph_lv0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_size": "21470642176",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "name": "ceph_lv0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "tags": {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cluster_name": "ceph",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.crush_device_class": "",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.encrypted": "0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osd_id": "0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.type": "block",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.vdo": "0"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            },
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "type": "block",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "vg_name": "ceph_vg0"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:        }
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:    ],
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:    "1": [
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:        {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "devices": [
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "/dev/loop4"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            ],
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_name": "ceph_lv1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_size": "21470642176",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "name": "ceph_lv1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "tags": {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cluster_name": "ceph",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.crush_device_class": "",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.encrypted": "0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osd_id": "1",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.type": "block",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.vdo": "0"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            },
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "type": "block",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "vg_name": "ceph_vg1"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:        }
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:    ],
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:    "2": [
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:        {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "devices": [
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "/dev/loop5"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            ],
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_name": "ceph_lv2",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_size": "21470642176",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "name": "ceph_lv2",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "tags": {
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.cluster_name": "ceph",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.crush_device_class": "",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.encrypted": "0",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osd_id": "2",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.type": "block",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:                "ceph.vdo": "0"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            },
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "type": "block",
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:            "vg_name": "ceph_vg2"
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:        }
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]:    ]
Oct 11 05:45:32 np0005481065 nostalgic_kare[435910]: }
Oct 11 05:45:32 np0005481065 systemd[1]: libpod-80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578.scope: Deactivated successfully.
Oct 11 05:45:32 np0005481065 podman[435893]: 2025-10-11 09:45:32.398091751 +0000 UTC m=+1.032514921 container died 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:45:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009-merged.mount: Deactivated successfully.
Oct 11 05:45:32 np0005481065 podman[435893]: 2025-10-11 09:45:32.48469531 +0000 UTC m=+1.119118480 container remove 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:45:32 np0005481065 systemd[1]: libpod-conmon-80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578.scope: Deactivated successfully.
Oct 11 05:45:32 np0005481065 nova_compute[260935]: 2025-10-11 09:45:32.754 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.380647658 +0000 UTC m=+0.054246872 container create b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:45:33 np0005481065 systemd[1]: Started libpod-conmon-b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772.scope.
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.355510913 +0000 UTC m=+0.029110197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:45:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.470579691 +0000 UTC m=+0.144178935 container init b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.480002595 +0000 UTC m=+0.153601809 container start b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 11 05:45:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.48517106 +0000 UTC m=+0.158770314 container attach b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:45:33 np0005481065 charming_gagarin[436089]: 167 167
Oct 11 05:45:33 np0005481065 systemd[1]: libpod-b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772.scope: Deactivated successfully.
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.489691827 +0000 UTC m=+0.163291101 container died b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:45:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c8246606b99c2a9721351544cee96f3c6e33c7849defa6889295c2266df451d3-merged.mount: Deactivated successfully.
Oct 11 05:45:33 np0005481065 podman[436072]: 2025-10-11 09:45:33.53971426 +0000 UTC m=+0.213313494 container remove b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:45:33 np0005481065 systemd[1]: libpod-conmon-b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772.scope: Deactivated successfully.
Oct 11 05:45:33 np0005481065 podman[436114]: 2025-10-11 09:45:33.772864079 +0000 UTC m=+0.073129952 container create 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:45:33 np0005481065 systemd[1]: Started libpod-conmon-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope.
Oct 11 05:45:33 np0005481065 podman[436114]: 2025-10-11 09:45:33.745383698 +0000 UTC m=+0.045649631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:45:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:45:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:45:33 np0005481065 podman[436114]: 2025-10-11 09:45:33.886089235 +0000 UTC m=+0.186355128 container init 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:45:33 np0005481065 podman[436114]: 2025-10-11 09:45:33.899538712 +0000 UTC m=+0.199804585 container start 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:45:33 np0005481065 podman[436114]: 2025-10-11 09:45:33.903694489 +0000 UTC m=+0.203960392 container attach 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:45:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]: {
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "osd_id": 2,
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "type": "bluestore"
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:    },
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "osd_id": 0,
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "type": "bluestore"
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:    },
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "osd_id": 1,
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:        "type": "bluestore"
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]:    }
Oct 11 05:45:34 np0005481065 great_grothendieck[436131]: }
Oct 11 05:45:35 np0005481065 systemd[1]: libpod-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope: Deactivated successfully.
Oct 11 05:45:35 np0005481065 podman[436114]: 2025-10-11 09:45:35.013550538 +0000 UTC m=+1.313816391 container died 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:45:35 np0005481065 systemd[1]: libpod-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope: Consumed 1.117s CPU time.
Oct 11 05:45:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c-merged.mount: Deactivated successfully.
Oct 11 05:45:35 np0005481065 podman[436114]: 2025-10-11 09:45:35.082881112 +0000 UTC m=+1.383146995 container remove 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:45:35 np0005481065 systemd[1]: libpod-conmon-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope: Deactivated successfully.
Oct 11 05:45:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:45:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:35 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:45:35 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 89a4c598-f210-444e-a306-a898644536d9 does not exist
Oct 11 05:45:35 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 04d70ce9-3795-45ff-99be-5f842e78401e does not exist
Oct 11 05:45:35 np0005481065 nova_compute[260935]: 2025-10-11 09:45:35.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:35 np0005481065 nova_compute[260935]: 2025-10-11 09:45:35.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:35 np0005481065 nova_compute[260935]: 2025-10-11 09:45:35.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5029 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:35 np0005481065 nova_compute[260935]: 2025-10-11 09:45:35.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:35 np0005481065 nova_compute[260935]: 2025-10-11 09:45:35.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:35 np0005481065 nova_compute[260935]: 2025-10-11 09:45:35.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:35 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:45:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:40 np0005481065 nova_compute[260935]: 2025-10-11 09:45:40.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:40 np0005481065 nova_compute[260935]: 2025-10-11 09:45:40.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:40 np0005481065 nova_compute[260935]: 2025-10-11 09:45:40.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:40 np0005481065 nova_compute[260935]: 2025-10-11 09:45:40.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:40 np0005481065 nova_compute[260935]: 2025-10-11 09:45:40.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:40 np0005481065 nova_compute[260935]: 2025-10-11 09:45:40.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:45:44.907 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:45:44 np0005481065 nova_compute[260935]: 2025-10-11 09:45:44.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:44 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:45:44.910 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:45:45 np0005481065 nova_compute[260935]: 2025-10-11 09:45:45.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:45 np0005481065 nova_compute[260935]: 2025-10-11 09:45:45.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:45 np0005481065 podman[436227]: 2025-10-11 09:45:45.798733723 +0000 UTC m=+0.094064299 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:45:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:47 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:45:47.913 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:45:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:50 np0005481065 nova_compute[260935]: 2025-10-11 09:45:50.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:50 np0005481065 nova_compute[260935]: 2025-10-11 09:45:50.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:50 np0005481065 nova_compute[260935]: 2025-10-11 09:45:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:50 np0005481065 nova_compute[260935]: 2025-10-11 09:45:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:50 np0005481065 nova_compute[260935]: 2025-10-11 09:45:50.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:50 np0005481065 nova_compute[260935]: 2025-10-11 09:45:50.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.560651) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951560696, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1677, "num_deletes": 255, "total_data_size": 2619277, "memory_usage": 2655152, "flush_reason": "Manual Compaction"}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951579056, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2558437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64011, "largest_seqno": 65687, "table_properties": {"data_size": 2550590, "index_size": 4725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16275, "raw_average_key_size": 20, "raw_value_size": 2534873, "raw_average_value_size": 3168, "num_data_blocks": 210, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175794, "oldest_key_time": 1760175794, "file_creation_time": 1760175951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 18463 microseconds, and 11667 cpu microseconds.
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.579112) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2558437 bytes OK
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.579138) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.581267) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.581288) EVENT_LOG_v1 {"time_micros": 1760175951581281, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.581312) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2611975, prev total WAL file size 2611975, number of live WAL files 2.
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.582720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2498KB)], [152(8480KB)]
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951582764, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11242634, "oldest_snapshot_seqno": -1}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8359 keys, 9541598 bytes, temperature: kUnknown
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951647583, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9541598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9489370, "index_size": 30227, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218409, "raw_average_key_size": 26, "raw_value_size": 9343990, "raw_average_value_size": 1117, "num_data_blocks": 1168, "num_entries": 8359, "num_filter_entries": 8359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.647790) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9541598 bytes
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.649187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.3 rd, 147.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 8883, records dropped: 524 output_compression: NoCompression
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.649207) EVENT_LOG_v1 {"time_micros": 1760175951649198, "job": 94, "event": "compaction_finished", "compaction_time_micros": 64878, "compaction_time_cpu_micros": 41619, "output_level": 6, "num_output_files": 1, "total_output_size": 9541598, "num_input_records": 8883, "num_output_records": 8359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951649914, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951651952, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.582594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:45:51 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:45:51 np0005481065 podman[436247]: 2025-10-11 09:45:51.813356729 +0000 UTC m=+0.101965561 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 11 05:45:52 np0005481065 nova_compute[260935]: 2025-10-11 09:45:52.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:52 np0005481065 nova_compute[260935]: 2025-10-11 09:45:52.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:45:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:54 np0005481065 nova_compute[260935]: 2025-10-11 09:45:54.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:45:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:45:55
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta']
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:45:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:55 np0005481065 nova_compute[260935]: 2025-10-11 09:45:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:45:55 np0005481065 podman[436267]: 2025-10-11 09:45:55.821036873 +0000 UTC m=+0.117909178 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:45:55 np0005481065 podman[436268]: 2025-10-11 09:45:55.856514998 +0000 UTC m=+0.146629703 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:45:56 np0005481065 nova_compute[260935]: 2025-10-11 09:45:56.190 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:45:56 np0005481065 nova_compute[260935]: 2025-10-11 09:45:56.190 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:45:56 np0005481065 nova_compute[260935]: 2025-10-11 09:45:56.190 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:45:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:57 np0005481065 nova_compute[260935]: 2025-10-11 09:45:57.612 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:45:57 np0005481065 nova_compute[260935]: 2025-10-11 09:45:57.635 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:45:57 np0005481065 nova_compute[260935]: 2025-10-11 09:45:57.636 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:45:57 np0005481065 nova_compute[260935]: 2025-10-11 09:45:57.637 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.632 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.750 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:45:59 np0005481065 nova_compute[260935]: 2025-10-11 09:45:59.750 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:45:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:46:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1163292549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.262 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.378 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.385 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.385 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.434 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.434 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.748 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2774MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.752 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:46:00 np0005481065 nova_compute[260935]: 2025-10-11 09:46:00.753 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.113 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.113 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.113 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.114 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.114 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.427 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:46:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:46:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322287583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.923 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.933 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.960 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.996 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:46:01 np0005481065 nova_compute[260935]: 2025-10-11 09:46:01.997 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:46:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:05 np0005481065 nova_compute[260935]: 2025-10-11 09:46:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:46:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:46:05 np0005481065 nova_compute[260935]: 2025-10-11 09:46:05.998 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:06 np0005481065 nova_compute[260935]: 2025-10-11 09:46:05.999 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:10 np0005481065 nova_compute[260935]: 2025-10-11 09:46:10.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:11 np0005481065 nova_compute[260935]: 2025-10-11 09:46:11.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:15.241 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:46:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:46:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:15.243 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:46:15 np0005481065 nova_compute[260935]: 2025-10-11 09:46:15.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:16 np0005481065 podman[436357]: 2025-10-11 09:46:16.792445598 +0000 UTC m=+0.088547795 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 05:46:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 11 05:46:17 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 11 05:46:17 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 11 05:46:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.768808) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979768911, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 472, "num_deletes": 257, "total_data_size": 419207, "memory_usage": 428192, "flush_reason": "Manual Compaction"}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979775439, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 415796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65688, "largest_seqno": 66159, "table_properties": {"data_size": 413074, "index_size": 757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6349, "raw_average_key_size": 18, "raw_value_size": 407601, "raw_average_value_size": 1178, "num_data_blocks": 34, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175952, "oldest_key_time": 1760175952, "file_creation_time": 1760175979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 6668 microseconds, and 3269 cpu microseconds.
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.775507) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 415796 bytes OK
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.775534) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.777439) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.777468) EVENT_LOG_v1 {"time_micros": 1760175979777458, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.777494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 416378, prev total WAL file size 416378, number of live WAL files 2.
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.778348) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373632' seq:72057594037927935, type:22 .. '6C6F676D0033303134' seq:0, type:0; will stop at (end)
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(406KB)], [155(9317KB)]
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979778411, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 9957394, "oldest_snapshot_seqno": -1}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8180 keys, 9855243 bytes, temperature: kUnknown
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979840059, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9855243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9803172, "index_size": 30518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 215620, "raw_average_key_size": 26, "raw_value_size": 9659760, "raw_average_value_size": 1180, "num_data_blocks": 1178, "num_entries": 8180, "num_filter_entries": 8180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.840457) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9855243 bytes
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.841792) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 159.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.1 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(47.6) write-amplify(23.7) OK, records in: 8705, records dropped: 525 output_compression: NoCompression
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.841872) EVENT_LOG_v1 {"time_micros": 1760175979841854, "job": 96, "event": "compaction_finished", "compaction_time_micros": 61751, "compaction_time_cpu_micros": 45330, "output_level": 6, "num_output_files": 1, "total_output_size": 9855243, "num_input_records": 8705, "num_output_records": 8180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979842237, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979845736, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.778224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:46:19 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:46:20 np0005481065 nova_compute[260935]: 2025-10-11 09:46:20.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 11 05:46:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.216 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 533d8cc8-a3c8-4aab-9b38-4b2a82418dcd with type ""#033[00m
Oct 11 05:46:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.217 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9b:26 10.100.0.3'], port_security=['fa:16:3e:ab:9b:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b75d8ded-515b-48ff-a6b6-28df88878996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=99e74dca-1d94-446c-ac4b-bc16dc028d2b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:46:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.218 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 99e74dca-1d94-446c-ac4b-bc16dc028d2b in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f unbound from our chassis#033[00m
Oct 11 05:46:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.219 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 11 05:46:22 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14c3df98-3c96-4db9-8916-ba740a9c57a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:22Z|01730|binding|INFO|Removing iface tap99e74dca-1d ovn-installed in OVS
Oct 11 05:46:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:22Z|01731|binding|INFO|Removing lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b ovn-installed in OVS
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.360 2 DEBUG nova.compute.manager [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-deleted-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.361 2 INFO nova.compute.manager [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Neutron deleted interface 99e74dca-1d94-446c-ac4b-bc16dc028d2b; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.361 2 DEBUG nova.network.neutron [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.398 2 DEBUG nova.objects.instance [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.442 2 DEBUG nova.objects.instance [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.472 2 DEBUG nova.virt.libvirt.vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:45Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.473 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.475 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.483 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.489 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.495 2 DEBUG nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tap99e74dca-1d from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.496 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:ab:9b:26"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <target dev="tap99e74dca-1d"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:46:22 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.507 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.512 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.513 2 INFO nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tap99e74dca-1d from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the persistent domain config.#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.513 2 DEBUG nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] (1/8): Attempting to detach device tap99e74dca-1d with device alias net0 from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.514 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:ab:9b:26"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <target dev="tap99e74dca-1d"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:46:22 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 05:46:22 np0005481065 kernel: tap99e74dca-1d (unregistering): left promiscuous mode
Oct 11 05:46:22 np0005481065 NetworkManager[44960]: <info>  [1760175982.5913] device (tap99e74dca-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.604 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760175982.6040714, b75d8ded-515b-48ff-a6b6-28df88878996 => net0> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.611 2 DEBUG nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Start waiting for the detach event from libvirt for device tap99e74dca-1d with device alias net0 for instance b75d8ded-515b-48ff-a6b6-28df88878996 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.611 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.616 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.616 2 INFO nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tap99e74dca-1d from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the live domain config.#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.617 2 DEBUG nova.virt.libvirt.vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:45Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.618 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.619 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.620 2 DEBUG os_vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e74dca-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.633 2 INFO os_vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d')#033[00m
Oct 11 05:46:22 np0005481065 nova_compute[260935]: 2025-10-11 09:46:22.634 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:name>tempest-ServerRescueTestJSON-server-217285829</nova:name>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:46:22</nova:creationTime>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:user uuid="df5a3c3a5d68473aa2e2950de45ebce1">tempest-ServerRescueTestJSON-1667208638-project-member</nova:user>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:    <nova:project uuid="11b44ad9193e4e43838d52056ccf413e">tempest-ServerRescueTestJSON-1667208638</nova:project>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]:  <nova:ports/>
Oct 11 05:46:22 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:46:22 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:46:22 np0005481065 podman[436375]: 2025-10-11 09:46:22.734406906 +0000 UTC m=+0.108108593 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.186 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 95a75fdd-515e-49c3-9289-acf68824d04a with type ""#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.187 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:b5:ce 10.100.0.14'], port_security=['fa:16:3e:d3:b5:ce 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '52be16b4-343a-4fd4-9041-39069a1fde2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ba95f2514ce4fe4b00f245335eaeb01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '462a25ad-d94b-4af1-ba28-eaa1a993c459', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22e9d786-1ab0-4026-8b17-f42f91b9280f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c992d6e3-ef59-42a0-80c5-109fe0c056cd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.189 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c992d6e3-ef59-42a0-80c5-109fe0c056cd in datapath 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 unbound from our chassis#033[00m
Oct 11 05:46:23 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:23Z|01732|binding|INFO|Removing iface tapc992d6e3-ef ovn-installed in OVS
Oct 11 05:46:23 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:23Z|01733|binding|INFO|Removing lport c992d6e3-ef59-42a0-80c5-109fe0c056cd ovn-installed in OVS
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.191 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:46:23 np0005481065 nova_compute[260935]: 2025-10-11 09:46:23.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1869c0d0-2afe-411d-aea6-f86a4e16abcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.192 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 namespace which is not needed anymore#033[00m
Oct 11 05:46:23 np0005481065 nova_compute[260935]: 2025-10-11 09:46:23.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:23 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : haproxy version is 2.8.14-c23fe91
Oct 11 05:46:23 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : path to executable is /usr/sbin/haproxy
Oct 11 05:46:23 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [WARNING]  (344035) : Exiting Master process...
Oct 11 05:46:23 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [ALERT]    (344035) : Current worker (344056) exited with code 143 (Terminated)
Oct 11 05:46:23 np0005481065 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [WARNING]  (344035) : All workers exited. Exiting... (0)
Oct 11 05:46:23 np0005481065 systemd[1]: libpod-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b.scope: Deactivated successfully.
Oct 11 05:46:23 np0005481065 podman[436419]: 2025-10-11 09:46:23.389703255 +0000 UTC m=+0.066233059 container died b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 05:46:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d388700494a60a977f53213e9adaae75b3b656a3610a624cd88ae77b82267d55-merged.mount: Deactivated successfully.
Oct 11 05:46:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b-userdata-shm.mount: Deactivated successfully.
Oct 11 05:46:23 np0005481065 podman[436419]: 2025-10-11 09:46:23.444131121 +0000 UTC m=+0.120660925 container cleanup b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:46:23 np0005481065 systemd[1]: libpod-conmon-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b.scope: Deactivated successfully.
Oct 11 05:46:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 05:46:23 np0005481065 podman[436447]: 2025-10-11 09:46:23.549529297 +0000 UTC m=+0.069167291 container remove b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.559 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a03e63-eb9c-40f1-93a8-c8478135964f]: (4, ('Sat Oct 11 09:46:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 (b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b)\nb5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b\nSat Oct 11 09:46:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 (b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b)\nb5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eaeb63a9-bdac-4664-91e2-1ea85335d9c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.564 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c40ad6c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:46:23 np0005481065 nova_compute[260935]: 2025-10-11 09:46:23.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:23 np0005481065 kernel: tap7c40ad6c-60: left promiscuous mode
Oct 11 05:46:23 np0005481065 nova_compute[260935]: 2025-10-11 09:46:23.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.600 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8caf6d-f180-41e2-b33f-fc07afa44364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.636 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[612ae595-2d10-44e3-95eb-999b70860292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc6f881-76a1-4f5e-82d4-c8e5915b2c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.660 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[40f9de06-20ad-4585-9611-397c5516652c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509451, 'reachable_time': 33449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 436460, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:23 np0005481065 systemd[1]: run-netns-ovnmeta\x2d7c40ad6c\x2d6e2c\x2d4d8e\x2da70f\x2d72c8786fa745.mount: Deactivated successfully.
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.666 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:46:23 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.666 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3f3bc2-25fa-496e-b816-731c807d727d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.167 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port db04ea8e-872a-415b-b3b7-56f400f73ae3 with type ""#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.168 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:82:58 10.100.0.14'], port_security=['fa:16:3e:1e:82:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c176845c-89c0-4038-ba22-4ee79bd3ebfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2a92dd4-4e3a-46e0-b2c1-347b2512c6a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e61ae661-47c6-4317-a2c2-6e7a5b567441) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.169 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e61ae661-47c6-4317-a2c2-6e7a5b567441 in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis#033[00m
Oct 11 05:46:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:24Z|01734|binding|INFO|Removing iface tape61ae661-47 ovn-installed in OVS
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.171 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 164a664d-5e52-48b9-8b00-f73d0851a4cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 05:46:24 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:24Z|01735|binding|INFO|Removing lport e61ae661-47c6-4317-a2c2-6e7a5b567441 ovn-installed in OVS
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.172 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5245a6f2-eb72-4ebc-bf4a-b26a3e493115]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.173 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc namespace which is not needed anymore#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:24 np0005481065 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [NOTICE]   (340042) : haproxy version is 2.8.14-c23fe91
Oct 11 05:46:24 np0005481065 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [NOTICE]   (340042) : path to executable is /usr/sbin/haproxy
Oct 11 05:46:24 np0005481065 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [WARNING]  (340042) : Exiting Master process...
Oct 11 05:46:24 np0005481065 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [WARNING]  (340042) : Exiting Master process...
Oct 11 05:46:24 np0005481065 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [ALERT]    (340042) : Current worker (340059) exited with code 143 (Terminated)
Oct 11 05:46:24 np0005481065 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [WARNING]  (340042) : All workers exited. Exiting... (0)
Oct 11 05:46:24 np0005481065 systemd[1]: libpod-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829.scope: Deactivated successfully.
Oct 11 05:46:24 np0005481065 podman[436478]: 2025-10-11 09:46:24.3757281 +0000 UTC m=+0.065233060 container died 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:46:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829-userdata-shm.mount: Deactivated successfully.
Oct 11 05:46:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-88ff3d2b1d366c767d0b7303eead81accb675a1c28ccb112da1864efc018408e-merged.mount: Deactivated successfully.
Oct 11 05:46:24 np0005481065 podman[436478]: 2025-10-11 09:46:24.429728495 +0000 UTC m=+0.119233465 container cleanup 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:46:24 np0005481065 systemd[1]: libpod-conmon-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829.scope: Deactivated successfully.
Oct 11 05:46:24 np0005481065 podman[436507]: 2025-10-11 09:46:24.537258911 +0000 UTC m=+0.071220449 container remove 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.545 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12d0cf05-3d34-4d4d-a2f1-408c22e25b23]: (4, ('Sat Oct 11 09:46:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc (398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829)\n398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829\nSat Oct 11 09:46:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc (398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829)\n398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3ff83b-dae9-4418-b03e-e9967b3fbadc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.549 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:24 np0005481065 kernel: tap164a664d-50: left promiscuous mode
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.583 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[288ef417-1d70-4400-af9c-a25dd66cf52d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a592c1c3-9377-452d-ae3e-0bb4f5abc26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.612 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7259a7d5-eabe-458d-b480-c82d4a80f558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2388920a-bea2-4440-98df-7926654df254]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500896, 'reachable_time': 31085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 436524, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 systemd[1]: run-netns-ovnmeta\x2d164a664d\x2d5e52\x2d48b9\x2d8b00\x2df73d0851a4cc.mount: Deactivated successfully.
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.643 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 05:46:24 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.643 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9c98e3df-b89e-4ff8-83c5-aaa6d6284819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.703 2 DEBUG nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-vif-deleted-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.705 2 INFO nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Neutron deleted interface c992d6e3-ef59-42a0-80c5-109fe0c056cd; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.705 2 DEBUG nova.network.neutron [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.738 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:46:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.768 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:46:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 11 05:46:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.788 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.788 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.789 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.793 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.797 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.801 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tapc992d6e3-ef from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.802 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:d3:b5:ce"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <target dev="tapc992d6e3-ef"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:46:24 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.822 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.826 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.826 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tapc992d6e3-ef from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the persistent domain config.#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.827 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] (1/8): Attempting to detach device tapc992d6e3-ef with device alias net0 from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.828 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:d3:b5:ce"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]:  <target dev="tapc992d6e3-ef"/>
Oct 11 05:46:24 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:46:24 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 05:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:46:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:46:24 np0005481065 kernel: tapc992d6e3-ef (unregistering): left promiscuous mode
Oct 11 05:46:24 np0005481065 NetworkManager[44960]: <info>  [1760175984.9593] device (tapc992d6e3-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.979 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760175984.9790182, 52be16b4-343a-4fd4-9041-39069a1fde2a => net0> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.982 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Start waiting for the detach event from libvirt for device tapc992d6e3-ef with device alias net0 for instance 52be16b4-343a-4fd4-9041-39069a1fde2a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.983 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.986 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.987 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tapc992d6e3-ef from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the live domain config.#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.988 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.989 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.990 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.990 2 DEBUG os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc992d6e3-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:46:24 np0005481065 nova_compute[260935]: 2025-10-11 09:46:24.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:46:25 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.004 2 INFO os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef')#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.005 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:name>tempest-ServerActionsTestJSON-server-1975294956</nova:name>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:46:25</nova:creationTime>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:user uuid="2e604a7f01ba42f8a2f2a90bf14cafba">tempest-ServerActionsTestJSON-332398676-project-member</nova:user>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:project uuid="0ba95f2514ce4fe4b00f245335eaeb01">tempest-ServerActionsTestJSON-332398676</nova:project>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:ports/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.008 2 DEBUG nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received event network-vif-deleted-e61ae661-47c6-4317-a2c2-6e7a5b567441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.008 2 INFO nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Neutron deleted interface e61ae661-47c6-4317-a2c2-6e7a5b567441; detaching it from the instance and deleting it from the info cache#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.009 2 DEBUG nova.network.neutron [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.030 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.075 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.111 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1251385331',display_name='tempest-ServerActionsTestOtherA-server-1251385331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1251385331',id=82,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0zd9vOn1MoClsAHXDeWFPP5kO+VNofuvu89K7qYloOUWW4N93cF9QhhUyaB1pmFmHZjCIiPEyZ5cYnUAirQfqKcPyMnKcAjeovnFjGTt2K03Doe1dtzSfmqbvVdrCpkQ==',key_name='tempest-keypair-168857508',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-vuepvjtl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d211063ed874837bead2e13898b31d4',uuid=c176845c-89c0-4038-ba22-4ee79bd3ebfe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.112 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.113 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.117 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.120 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.124 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tape61ae661-47 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.125 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:1e:82:58"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <target dev="tape61ae661-47"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.133 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.136 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.137 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tape61ae661-47 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the persistent domain config.#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.137 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] (1/8): Attempting to detach device tape61ae661-47 with device alias net0 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.138 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <mac address="fa:16:3e:1e:82:58"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <model type="virtio"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <driver name="vhost" rx_queue_size="512"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <mtu size="1442"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <target dev="tape61ae661-47"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: </interface>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 05:46:25 np0005481065 kernel: tape61ae661-47 (unregistering): left promiscuous mode
Oct 11 05:46:25 np0005481065 NetworkManager[44960]: <info>  [1760175985.2465] device (tape61ae661-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.302 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760175985.30203, c176845c-89c0-4038-ba22-4ee79bd3ebfe => net0> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.305 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Start waiting for the detach event from libvirt for device tape61ae661-47 with device alias net0 for instance c176845c-89c0-4038-ba22-4ee79bd3ebfe _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.305 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.309 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.309 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tape61ae661-47 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the live domain config.#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.311 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1251385331',display_name='tempest-ServerActionsTestOtherA-server-1251385331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1251385331',id=82,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0zd9vOn1MoClsAHXDeWFPP5kO+VNofuvu89K7qYloOUWW4N93cF9QhhUyaB1pmFmHZjCIiPEyZ5cYnUAirQfqKcPyMnKcAjeovnFjGTt2K03Doe1dtzSfmqbvVdrCpkQ==',key_name='tempest-keypair-168857508',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-vuepvjtl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d211063ed874837bead2e13898b31d4',uuid=c176845c-89c0-4038-ba22-4ee79bd3ebfe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.311 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.312 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.313 2 DEBUG os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.315 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape61ae661-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.324 2 INFO os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47')#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.325 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:name>tempest-ServerActionsTestOtherA-server-1251385331</nova:name>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:creationTime>2025-10-11 09:46:25</nova:creationTime>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:flavor name="m1.nano">
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:memory>128</nova:memory>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:disk>1</nova:disk>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:swap>0</nova:swap>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:ephemeral>0</nova:ephemeral>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:vcpus>1</nova:vcpus>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  </nova:flavor>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:owner>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:user uuid="8d211063ed874837bead2e13898b31d4">tempest-ServerActionsTestOtherA-620268335-project-member</nova:user>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:    <nova:project uuid="d33b48586acf4e6c8254f2a1213b001c">tempest-ServerActionsTestOtherA-620268335</nova:project>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  </nova:owner>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]:  <nova:ports/>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: </nova:instance>
Oct 11 05:46:25 np0005481065 nova_compute[260935]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 11 05:46:25 np0005481065 nova_compute[260935]: 2025-10-11 09:46:25.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 05:46:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:46:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1058496683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:46:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:46:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1058496683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:46:26 np0005481065 podman[436533]: 2025-10-11 09:46:26.791691602 +0000 UTC m=+0.090226742 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 11 05:46:26 np0005481065 podman[436534]: 2025-10-11 09:46:26.82123191 +0000 UTC m=+0.121980242 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:46:27 np0005481065 nova_compute[260935]: 2025-10-11 09:46:27.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 11 05:46:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 11 05:46:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:30 np0005481065 nova_compute[260935]: 2025-10-11 09:46:30.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:30 np0005481065 nova_compute[260935]: 2025-10-11 09:46:30.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 11 05:46:32 np0005481065 nova_compute[260935]: 2025-10-11 09:46:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 1 op/s
Oct 11 05:46:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:35 np0005481065 nova_compute[260935]: 2025-10-11 09:46:35.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:35 np0005481065 nova_compute[260935]: 2025-10-11 09:46:35.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.7 KiB/s wr, 1 op/s
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:46:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 2bf8c8b5-cece-4da6-b498-8e9750631a51 does not exist
Oct 11 05:46:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a7174808-c809-4e36-85b8-e380f1b44930 does not exist
Oct 11 05:46:36 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 277b4765-0a57-41eb-bbd6-de53a8b84957 does not exist
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:46:36 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:46:36 np0005481065 podman[436854]: 2025-10-11 09:46:36.952065244 +0000 UTC m=+0.043281105 container create b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 05:46:36 np0005481065 systemd[1]: Started libpod-conmon-b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9.scope.
Oct 11 05:46:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:46:37 np0005481065 podman[436854]: 2025-10-11 09:46:36.931100985 +0000 UTC m=+0.022316856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:46:37 np0005481065 podman[436854]: 2025-10-11 09:46:37.034553977 +0000 UTC m=+0.125769858 container init b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:46:37 np0005481065 podman[436854]: 2025-10-11 09:46:37.049216279 +0000 UTC m=+0.140432170 container start b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:46:37 np0005481065 podman[436854]: 2025-10-11 09:46:37.053641673 +0000 UTC m=+0.144857524 container attach b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 05:46:37 np0005481065 sharp_diffie[436871]: 167 167
Oct 11 05:46:37 np0005481065 podman[436854]: 2025-10-11 09:46:37.058207051 +0000 UTC m=+0.149422942 container died b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:46:37 np0005481065 systemd[1]: libpod-b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9.scope: Deactivated successfully.
Oct 11 05:46:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-67695c25e16eeddc482418ebc79b3529d7f9d41032d57f06d5445143d22075a7-merged.mount: Deactivated successfully.
Oct 11 05:46:37 np0005481065 podman[436854]: 2025-10-11 09:46:37.120962991 +0000 UTC m=+0.212178872 container remove b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:46:37 np0005481065 systemd[1]: libpod-conmon-b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9.scope: Deactivated successfully.
Oct 11 05:46:37 np0005481065 podman[436896]: 2025-10-11 09:46:37.390264185 +0000 UTC m=+0.070943911 container create 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 05:46:37 np0005481065 systemd[1]: Started libpod-conmon-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope.
Oct 11 05:46:37 np0005481065 podman[436896]: 2025-10-11 09:46:37.362082675 +0000 UTC m=+0.042762471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:46:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:46:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:37 np0005481065 podman[436896]: 2025-10-11 09:46:37.508321327 +0000 UTC m=+0.189001043 container init 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:46:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 05:46:37 np0005481065 podman[436896]: 2025-10-11 09:46:37.516943939 +0000 UTC m=+0.197623645 container start 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:46:37 np0005481065 podman[436896]: 2025-10-11 09:46:37.521043764 +0000 UTC m=+0.201723480 container attach 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:46:38 np0005481065 hardcore_mclean[436913]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:46:38 np0005481065 hardcore_mclean[436913]: --> relative data size: 1.0
Oct 11 05:46:38 np0005481065 hardcore_mclean[436913]: --> All data devices are unavailable
Oct 11 05:46:38 np0005481065 systemd[1]: libpod-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope: Deactivated successfully.
Oct 11 05:46:38 np0005481065 systemd[1]: libpod-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope: Consumed 1.088s CPU time.
Oct 11 05:46:38 np0005481065 podman[436896]: 2025-10-11 09:46:38.662357964 +0000 UTC m=+1.343037730 container died 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:46:38 np0005481065 systemd[1]: var-lib-containers-storage-overlay-18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d-merged.mount: Deactivated successfully.
Oct 11 05:46:38 np0005481065 podman[436896]: 2025-10-11 09:46:38.738954703 +0000 UTC m=+1.419634399 container remove 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:46:38 np0005481065 systemd[1]: libpod-conmon-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope: Deactivated successfully.
Oct 11 05:46:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.51897011 +0000 UTC m=+0.049198521 container create 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:46:39 np0005481065 systemd[1]: Started libpod-conmon-2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a.scope.
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.494428242 +0000 UTC m=+0.024656683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:46:39 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.609689565 +0000 UTC m=+0.139918056 container init 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.619044337 +0000 UTC m=+0.149272768 container start 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.623037709 +0000 UTC m=+0.153266190 container attach 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:46:39 np0005481065 great_shannon[437115]: 167 167
Oct 11 05:46:39 np0005481065 systemd[1]: libpod-2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a.scope: Deactivated successfully.
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.627778882 +0000 UTC m=+0.158007323 container died 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:46:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-49f3ac8a2182a5e38f1dd3582a3caae88b7d7f02f7eb5f65f71dc65e85912bf0-merged.mount: Deactivated successfully.
Oct 11 05:46:39 np0005481065 podman[437099]: 2025-10-11 09:46:39.684996077 +0000 UTC m=+0.215224508 container remove 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:46:39 np0005481065 systemd[1]: libpod-conmon-2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a.scope: Deactivated successfully.
Oct 11 05:46:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:39 np0005481065 podman[437139]: 2025-10-11 09:46:39.938858268 +0000 UTC m=+0.058133272 container create 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:46:39 np0005481065 systemd[1]: Started libpod-conmon-19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092.scope.
Oct 11 05:46:40 np0005481065 podman[437139]: 2025-10-11 09:46:39.917116148 +0000 UTC m=+0.036391142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:46:40 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:46:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:40 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:40 np0005481065 podman[437139]: 2025-10-11 09:46:40.078809683 +0000 UTC m=+0.198084717 container init 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:46:40 np0005481065 podman[437139]: 2025-10-11 09:46:40.091275563 +0000 UTC m=+0.210550537 container start 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:46:40 np0005481065 podman[437139]: 2025-10-11 09:46:40.095239974 +0000 UTC m=+0.214515028 container attach 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 05:46:40 np0005481065 nova_compute[260935]: 2025-10-11 09:46:40.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:40 np0005481065 nova_compute[260935]: 2025-10-11 09:46:40.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:40 np0005481065 sad_cori[437156]: {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:    "0": [
Oct 11 05:46:40 np0005481065 sad_cori[437156]:        {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "devices": [
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "/dev/loop3"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            ],
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_name": "ceph_lv0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_size": "21470642176",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "name": "ceph_lv0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "tags": {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cluster_name": "ceph",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.crush_device_class": "",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.encrypted": "0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osd_id": "0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.type": "block",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.vdo": "0"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            },
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "type": "block",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "vg_name": "ceph_vg0"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:        }
Oct 11 05:46:40 np0005481065 sad_cori[437156]:    ],
Oct 11 05:46:40 np0005481065 sad_cori[437156]:    "1": [
Oct 11 05:46:40 np0005481065 sad_cori[437156]:        {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "devices": [
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "/dev/loop4"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            ],
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_name": "ceph_lv1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_size": "21470642176",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "name": "ceph_lv1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "tags": {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cluster_name": "ceph",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.crush_device_class": "",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.encrypted": "0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osd_id": "1",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.type": "block",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.vdo": "0"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            },
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "type": "block",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "vg_name": "ceph_vg1"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:        }
Oct 11 05:46:40 np0005481065 sad_cori[437156]:    ],
Oct 11 05:46:40 np0005481065 sad_cori[437156]:    "2": [
Oct 11 05:46:40 np0005481065 sad_cori[437156]:        {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "devices": [
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "/dev/loop5"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            ],
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_name": "ceph_lv2",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_size": "21470642176",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "name": "ceph_lv2",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "tags": {
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.cluster_name": "ceph",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.crush_device_class": "",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.encrypted": "0",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osd_id": "2",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.type": "block",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:                "ceph.vdo": "0"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            },
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "type": "block",
Oct 11 05:46:40 np0005481065 sad_cori[437156]:            "vg_name": "ceph_vg2"
Oct 11 05:46:40 np0005481065 sad_cori[437156]:        }
Oct 11 05:46:40 np0005481065 sad_cori[437156]:    ]
Oct 11 05:46:40 np0005481065 sad_cori[437156]: }
Oct 11 05:46:40 np0005481065 systemd[1]: libpod-19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092.scope: Deactivated successfully.
Oct 11 05:46:40 np0005481065 podman[437139]: 2025-10-11 09:46:40.941019036 +0000 UTC m=+1.060294030 container died 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:46:40 np0005481065 systemd[1]: var-lib-containers-storage-overlay-df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69-merged.mount: Deactivated successfully.
Oct 11 05:46:41 np0005481065 podman[437139]: 2025-10-11 09:46:41.015142135 +0000 UTC m=+1.134417119 container remove 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:46:41 np0005481065 systemd[1]: libpod-conmon-19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092.scope: Deactivated successfully.
Oct 11 05:46:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 05:46:41 np0005481065 podman[437317]: 2025-10-11 09:46:41.915277662 +0000 UTC m=+0.066770654 container create d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:46:41 np0005481065 systemd[1]: Started libpod-conmon-d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d.scope.
Oct 11 05:46:41 np0005481065 podman[437317]: 2025-10-11 09:46:41.890778064 +0000 UTC m=+0.042271126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:46:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:46:42 np0005481065 podman[437317]: 2025-10-11 09:46:42.018662981 +0000 UTC m=+0.170156003 container init d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:46:42 np0005481065 podman[437317]: 2025-10-11 09:46:42.02859677 +0000 UTC m=+0.180089802 container start d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:46:42 np0005481065 podman[437317]: 2025-10-11 09:46:42.033307292 +0000 UTC m=+0.184800314 container attach d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:46:42 np0005481065 loving_heyrovsky[437334]: 167 167
Oct 11 05:46:42 np0005481065 systemd[1]: libpod-d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d.scope: Deactivated successfully.
Oct 11 05:46:42 np0005481065 podman[437317]: 2025-10-11 09:46:42.037279903 +0000 UTC m=+0.188772935 container died d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:46:42 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a9471d5348d0606aa7f8b8e6a3058dc0c7bc98801064754eb2b649a19b829a70-merged.mount: Deactivated successfully.
Oct 11 05:46:42 np0005481065 podman[437317]: 2025-10-11 09:46:42.095449645 +0000 UTC m=+0.246942667 container remove d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 11 05:46:42 np0005481065 systemd[1]: libpod-conmon-d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d.scope: Deactivated successfully.
Oct 11 05:46:42 np0005481065 podman[437359]: 2025-10-11 09:46:42.348939195 +0000 UTC m=+0.059944723 container create 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:46:42 np0005481065 systemd[1]: Started libpod-conmon-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope.
Oct 11 05:46:42 np0005481065 podman[437359]: 2025-10-11 09:46:42.316047952 +0000 UTC m=+0.027053550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:46:42 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:46:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:42 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:46:42 np0005481065 podman[437359]: 2025-10-11 09:46:42.459321321 +0000 UTC m=+0.170326909 container init 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:46:42 np0005481065 podman[437359]: 2025-10-11 09:46:42.472365666 +0000 UTC m=+0.183371224 container start 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:46:42 np0005481065 podman[437359]: 2025-10-11 09:46:42.477091929 +0000 UTC m=+0.188097527 container attach 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:46:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]: {
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "osd_id": 2,
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "type": "bluestore"
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:    },
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "osd_id": 0,
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "type": "bluestore"
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:    },
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "osd_id": 1,
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:        "type": "bluestore"
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]:    }
Oct 11 05:46:43 np0005481065 silly_antonelli[437376]: }
Oct 11 05:46:43 np0005481065 podman[437359]: 2025-10-11 09:46:43.586448684 +0000 UTC m=+1.297454202 container died 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:46:43 np0005481065 systemd[1]: libpod-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope: Deactivated successfully.
Oct 11 05:46:43 np0005481065 systemd[1]: libpod-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope: Consumed 1.125s CPU time.
Oct 11 05:46:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad-merged.mount: Deactivated successfully.
Oct 11 05:46:43 np0005481065 podman[437359]: 2025-10-11 09:46:43.648974418 +0000 UTC m=+1.359979936 container remove 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:46:43 np0005481065 systemd[1]: libpod-conmon-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope: Deactivated successfully.
Oct 11 05:46:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:46:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:46:43 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:46:43 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:46:43 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b71dc2a2-8a7a-46a3-b4c7-770aa84c0218 does not exist
Oct 11 05:46:43 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6b41411a-8f62-4364-8ae2-f05c247bd3ce does not exist
Oct 11 05:46:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:46:44 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:46:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:45 np0005481065 nova_compute[260935]: 2025-10-11 09:46:45.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:45 np0005481065 nova_compute[260935]: 2025-10-11 09:46:45.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:47 np0005481065 podman[437470]: 2025-10-11 09:46:47.849540683 +0000 UTC m=+0.131330056 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:46:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:50 np0005481065 nova_compute[260935]: 2025-10-11 09:46:50.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:50 np0005481065 nova_compute[260935]: 2025-10-11 09:46:50.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:53 np0005481065 podman[437489]: 2025-10-11 09:46:53.800137312 +0000 UTC m=+0.097597499 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:46:54 np0005481065 nova_compute[260935]: 2025-10-11 09:46:54.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:54 np0005481065 nova_compute[260935]: 2025-10-11 09:46:54.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:46:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:46:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:46:55
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:46:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:55 np0005481065 nova_compute[260935]: 2025-10-11 09:46:55.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 11 05:46:57 np0005481065 nova_compute[260935]: 2025-10-11 09:46:57.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:57 np0005481065 nova_compute[260935]: 2025-10-11 09:46:57.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:46:57 np0005481065 nova_compute[260935]: 2025-10-11 09:46:57.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:46:57 np0005481065 podman[437509]: 2025-10-11 09:46:57.81215967 +0000 UTC m=+0.110406228 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 05:46:57 np0005481065 podman[437510]: 2025-10-11 09:46:57.855896987 +0000 UTC m=+0.147945691 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.211 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.212 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.213 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.338 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:46:58 np0005481065 ovn_controller[152945]: 2025-10-11T09:46:58Z|01736|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.618 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.638 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:46:58 np0005481065 nova_compute[260935]: 2025-10-11 09:46:58.638 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:46:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 05:46:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:00 np0005481065 nova_compute[260935]: 2025-10-11 09:47:00.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:00 np0005481065 nova_compute[260935]: 2025-10-11 09:47:00.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:00 np0005481065 nova_compute[260935]: 2025-10-11 09:47:00.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5028 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:47:00 np0005481065 nova_compute[260935]: 2025-10-11 09:47:00.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:47:00 np0005481065 nova_compute[260935]: 2025-10-11 09:47:00.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:47:00 np0005481065 nova_compute[260935]: 2025-10-11 09:47:00.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 05:47:01 np0005481065 nova_compute[260935]: 2025-10-11 09:47:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:01 np0005481065 nova_compute[260935]: 2025-10-11 09:47:01.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:47:01 np0005481065 nova_compute[260935]: 2025-10-11 09:47:01.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:47:01 np0005481065 nova_compute[260935]: 2025-10-11 09:47:01.744 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:47:01 np0005481065 nova_compute[260935]: 2025-10-11 09:47:01.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:47:01 np0005481065 nova_compute[260935]: 2025-10-11 09:47:01.745 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:47:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:47:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662046581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.214 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.285 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.285 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.286 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.290 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.290 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.294 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.294 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.543 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.544 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2765MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.544 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.545 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:47:02 np0005481065 nova_compute[260935]: 2025-10-11 09:47:02.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:47:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:47:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045184369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:47:03 np0005481065 nova_compute[260935]: 2025-10-11 09:47:03.245 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:47:03 np0005481065 nova_compute[260935]: 2025-10-11 09:47:03.256 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:47:03 np0005481065 nova_compute[260935]: 2025-10-11 09:47:03.300 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:47:03 np0005481065 nova_compute[260935]: 2025-10-11 09:47:03.302 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:47:03 np0005481065 nova_compute[260935]: 2025-10-11 09:47:03.303 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:47:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 05:47:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:05 np0005481065 nova_compute[260935]: 2025-10-11 09:47:05.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:47:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:47:06 np0005481065 nova_compute[260935]: 2025-10-11 09:47:06.303 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:06 np0005481065 nova_compute[260935]: 2025-10-11 09:47:06.304 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:06 np0005481065 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 11 05:47:06 np0005481065 systemd[1]: virtsecretd.service: Consumed 1.386s CPU time.
Oct 11 05:47:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 11 05:47:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 05:47:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:10 np0005481065 nova_compute[260935]: 2025-10-11 09:47:10.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:10 np0005481065 nova_compute[260935]: 2025-10-11 09:47:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:10 np0005481065 nova_compute[260935]: 2025-10-11 09:47:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:47:10 np0005481065 nova_compute[260935]: 2025-10-11 09:47:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:47:10 np0005481065 nova_compute[260935]: 2025-10-11 09:47:10.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:10 np0005481065 nova_compute[260935]: 2025-10-11 09:47:10.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:47:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 05:47:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 05:47:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:47:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:47:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:47:15.243 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:47:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:47:15.243 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:47:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 05:47:15 np0005481065 nova_compute[260935]: 2025-10-11 09:47:15.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:15 np0005481065 nova_compute[260935]: 2025-10-11 09:47:15.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 05:47:18 np0005481065 podman[437601]: 2025-10-11 09:47:18.780234064 +0000 UTC m=+0.081310651 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 05:47:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:20 np0005481065 nova_compute[260935]: 2025-10-11 09:47:20.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:24 np0005481065 podman[437620]: 2025-10-11 09:47:24.783090991 +0000 UTC m=+0.081773165 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 05:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:47:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:47:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:25 np0005481065 nova_compute[260935]: 2025-10-11 09:47:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:47:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2384124883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:47:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:47:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2384124883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:47:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:28 np0005481065 podman[437645]: 2025-10-11 09:47:28.812312581 +0000 UTC m=+0.094919503 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:47:28 np0005481065 podman[437646]: 2025-10-11 09:47:28.835624595 +0000 UTC m=+0.120422829 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:47:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:30 np0005481065 nova_compute[260935]: 2025-10-11 09:47:30.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:33 np0005481065 nova_compute[260935]: 2025-10-11 09:47:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:35 np0005481065 nova_compute[260935]: 2025-10-11 09:47:35.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:35 np0005481065 nova_compute[260935]: 2025-10-11 09:47:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:35 np0005481065 nova_compute[260935]: 2025-10-11 09:47:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:47:35 np0005481065 nova_compute[260935]: 2025-10-11 09:47:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:47:35 np0005481065 nova_compute[260935]: 2025-10-11 09:47:35.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:47:35 np0005481065 nova_compute[260935]: 2025-10-11 09:47:35.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:40 np0005481065 nova_compute[260935]: 2025-10-11 09:47:40.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:44 np0005481065 podman[437863]: 2025-10-11 09:47:44.84874492 +0000 UTC m=+0.097658610 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:47:44 np0005481065 podman[437863]: 2025-10-11 09:47:44.964045644 +0000 UTC m=+0.212959304 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:45 np0005481065 nova_compute[260935]: 2025-10-11 09:47:45.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:47:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:47:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:45 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:47:45 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev fe7e7c9e-0970-4619-8359-5ac2e777196b does not exist
Oct 11 05:47:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5cfb91e5-022b-43dc-adfc-7f3b941bcbfb does not exist
Oct 11 05:47:46 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4138dc19-54b7-4e8c-9809-6b162e203465 does not exist
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:46 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:47:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.805055296 +0000 UTC m=+0.075828157 container create 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.771292069 +0000 UTC m=+0.042064980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:47:47 np0005481065 systemd[1]: Started libpod-conmon-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope.
Oct 11 05:47:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.918317663 +0000 UTC m=+0.189090524 container init 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.932420819 +0000 UTC m=+0.203193670 container start 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.938650743 +0000 UTC m=+0.209423614 container attach 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:47:47 np0005481065 angry_khorana[438316]: 167 167
Oct 11 05:47:47 np0005481065 systemd[1]: libpod-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope: Deactivated successfully.
Oct 11 05:47:47 np0005481065 conmon[438316]: conmon 4bf15292fa01a7e67cbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope/container/memory.events
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.943053767 +0000 UTC m=+0.213826658 container died 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:47:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8f3279222ff3b7df6fc3a3bc660079d759f09165b6559b9369e912f1ad678f46-merged.mount: Deactivated successfully.
Oct 11 05:47:47 np0005481065 podman[438299]: 2025-10-11 09:47:47.996400713 +0000 UTC m=+0.267173574 container remove 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:47:48 np0005481065 systemd[1]: libpod-conmon-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope: Deactivated successfully.
Oct 11 05:47:48 np0005481065 podman[438340]: 2025-10-11 09:47:48.24372157 +0000 UTC m=+0.060163888 container create a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:47:48 np0005481065 systemd[1]: Started libpod-conmon-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope.
Oct 11 05:47:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:47:48 np0005481065 podman[438340]: 2025-10-11 09:47:48.219566853 +0000 UTC m=+0.036009231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:47:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:48 np0005481065 podman[438340]: 2025-10-11 09:47:48.33142366 +0000 UTC m=+0.147866018 container init a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:47:48 np0005481065 podman[438340]: 2025-10-11 09:47:48.343673243 +0000 UTC m=+0.160115801 container start a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:48 np0005481065 podman[438340]: 2025-10-11 09:47:48.347082989 +0000 UTC m=+0.163525397 container attach a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:49 np0005481065 clever_gagarin[438356]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:47:49 np0005481065 clever_gagarin[438356]: --> relative data size: 1.0
Oct 11 05:47:49 np0005481065 clever_gagarin[438356]: --> All data devices are unavailable
Oct 11 05:47:49 np0005481065 systemd[1]: libpod-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope: Deactivated successfully.
Oct 11 05:47:49 np0005481065 systemd[1]: libpod-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope: Consumed 1.109s CPU time.
Oct 11 05:47:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:49 np0005481065 podman[438388]: 2025-10-11 09:47:49.604093075 +0000 UTC m=+0.043710408 container died a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:47:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2-merged.mount: Deactivated successfully.
Oct 11 05:47:49 np0005481065 podman[438388]: 2025-10-11 09:47:49.661782182 +0000 UTC m=+0.101399515 container remove a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:47:49 np0005481065 systemd[1]: libpod-conmon-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope: Deactivated successfully.
Oct 11 05:47:49 np0005481065 podman[438387]: 2025-10-11 09:47:49.68095894 +0000 UTC m=+0.108858504 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 05:47:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.412335944 +0000 UTC m=+0.037673628 container create a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:47:50 np0005481065 systemd[1]: Started libpod-conmon-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope.
Oct 11 05:47:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.394856694 +0000 UTC m=+0.020194378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.493617524 +0000 UTC m=+0.118955218 container init a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.504724125 +0000 UTC m=+0.130061789 container start a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.507553894 +0000 UTC m=+0.132891638 container attach a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:47:50 np0005481065 gallant_wilson[438581]: 167 167
Oct 11 05:47:50 np0005481065 systemd[1]: libpod-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope: Deactivated successfully.
Oct 11 05:47:50 np0005481065 conmon[438581]: conmon a95f60e8fe5ea0903027 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope/container/memory.events
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.510510287 +0000 UTC m=+0.135847991 container died a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:47:50 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6a7ad59697e0a71c537b0aa0ad721c54e1ee4b1616a8b85f869a16d10fd2c2b6-merged.mount: Deactivated successfully.
Oct 11 05:47:50 np0005481065 nova_compute[260935]: 2025-10-11 09:47:50.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:50 np0005481065 podman[438564]: 2025-10-11 09:47:50.565656944 +0000 UTC m=+0.190994648 container remove a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:50 np0005481065 systemd[1]: libpod-conmon-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope: Deactivated successfully.
Oct 11 05:47:50 np0005481065 podman[438605]: 2025-10-11 09:47:50.785433908 +0000 UTC m=+0.066143396 container create 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:47:50 np0005481065 systemd[1]: Started libpod-conmon-360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea.scope.
Oct 11 05:47:50 np0005481065 podman[438605]: 2025-10-11 09:47:50.759028148 +0000 UTC m=+0.039737686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:47:50 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:47:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:50 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:50 np0005481065 podman[438605]: 2025-10-11 09:47:50.888711625 +0000 UTC m=+0.169421143 container init 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:47:50 np0005481065 podman[438605]: 2025-10-11 09:47:50.901922136 +0000 UTC m=+0.182631624 container start 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:47:50 np0005481065 podman[438605]: 2025-10-11 09:47:50.905786044 +0000 UTC m=+0.186495532 container attach 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:47:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:51 np0005481065 clever_snyder[438623]: {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:    "0": [
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:        {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "devices": [
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "/dev/loop3"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            ],
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_name": "ceph_lv0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_size": "21470642176",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "name": "ceph_lv0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "tags": {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cluster_name": "ceph",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.crush_device_class": "",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.encrypted": "0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osd_id": "0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.type": "block",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.vdo": "0"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            },
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "type": "block",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "vg_name": "ceph_vg0"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:        }
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:    ],
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:    "1": [
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:        {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "devices": [
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "/dev/loop4"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            ],
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_name": "ceph_lv1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_size": "21470642176",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "name": "ceph_lv1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "tags": {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cluster_name": "ceph",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.crush_device_class": "",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.encrypted": "0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osd_id": "1",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.type": "block",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.vdo": "0"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            },
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "type": "block",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "vg_name": "ceph_vg1"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:        }
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:    ],
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:    "2": [
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:        {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "devices": [
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "/dev/loop5"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            ],
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_name": "ceph_lv2",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_size": "21470642176",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "name": "ceph_lv2",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "tags": {
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.cluster_name": "ceph",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.crush_device_class": "",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.encrypted": "0",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osd_id": "2",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.type": "block",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:                "ceph.vdo": "0"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            },
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "type": "block",
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:            "vg_name": "ceph_vg2"
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:        }
Oct 11 05:47:51 np0005481065 clever_snyder[438623]:    ]
Oct 11 05:47:51 np0005481065 clever_snyder[438623]: }
Oct 11 05:47:51 np0005481065 systemd[1]: libpod-360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea.scope: Deactivated successfully.
Oct 11 05:47:51 np0005481065 podman[438632]: 2025-10-11 09:47:51.742231414 +0000 UTC m=+0.032886623 container died 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:47:51 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19-merged.mount: Deactivated successfully.
Oct 11 05:47:51 np0005481065 podman[438632]: 2025-10-11 09:47:51.818264847 +0000 UTC m=+0.108920026 container remove 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:47:51 np0005481065 systemd[1]: libpod-conmon-360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea.scope: Deactivated successfully.
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.817004868 +0000 UTC m=+0.066491246 container create 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 05:47:52 np0005481065 systemd[1]: Started libpod-conmon-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope.
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.790042752 +0000 UTC m=+0.039529170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:47:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.926769567 +0000 UTC m=+0.176255985 container init 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.93402044 +0000 UTC m=+0.183506818 container start 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.937983351 +0000 UTC m=+0.187469719 container attach 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:47:52 np0005481065 recursing_jepsen[438805]: 167 167
Oct 11 05:47:52 np0005481065 systemd[1]: libpod-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope: Deactivated successfully.
Oct 11 05:47:52 np0005481065 conmon[438805]: conmon 0d56f72f7f03ff68af78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope/container/memory.events
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.943725482 +0000 UTC m=+0.193211860 container died 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 05:47:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-79fe385ba3167c9400915b7711839e5960e1f42f8ba0a9bdd00746dbe582d8f6-merged.mount: Deactivated successfully.
Oct 11 05:47:52 np0005481065 podman[438788]: 2025-10-11 09:47:52.995359721 +0000 UTC m=+0.244846099 container remove 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:47:53 np0005481065 systemd[1]: libpod-conmon-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope: Deactivated successfully.
Oct 11 05:47:53 np0005481065 podman[438829]: 2025-10-11 09:47:53.238073608 +0000 UTC m=+0.064297764 container create 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:47:53 np0005481065 systemd[1]: Started libpod-conmon-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope.
Oct 11 05:47:53 np0005481065 podman[438829]: 2025-10-11 09:47:53.211874313 +0000 UTC m=+0.038098479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:47:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:47:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:47:53 np0005481065 podman[438829]: 2025-10-11 09:47:53.354582746 +0000 UTC m=+0.180806932 container init 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:47:53 np0005481065 podman[438829]: 2025-10-11 09:47:53.363164467 +0000 UTC m=+0.189388583 container start 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:47:53 np0005481065 podman[438829]: 2025-10-11 09:47:53.365893953 +0000 UTC m=+0.192118109 container attach 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 05:47:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:54 np0005481065 funny_noyce[438846]: {
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "osd_id": 2,
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "type": "bluestore"
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:    },
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "osd_id": 0,
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "type": "bluestore"
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:    },
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "osd_id": 1,
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:        "type": "bluestore"
Oct 11 05:47:54 np0005481065 funny_noyce[438846]:    }
Oct 11 05:47:54 np0005481065 funny_noyce[438846]: }
Oct 11 05:47:54 np0005481065 systemd[1]: libpod-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope: Deactivated successfully.
Oct 11 05:47:54 np0005481065 systemd[1]: libpod-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope: Consumed 1.006s CPU time.
Oct 11 05:47:54 np0005481065 podman[438829]: 2025-10-11 09:47:54.365250023 +0000 UTC m=+1.191474159 container died 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:47:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914-merged.mount: Deactivated successfully.
Oct 11 05:47:54 np0005481065 podman[438829]: 2025-10-11 09:47:54.451209724 +0000 UTC m=+1.277433880 container remove 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:47:54 np0005481065 systemd[1]: libpod-conmon-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope: Deactivated successfully.
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d8478b80-f34a-4e77-9c89-b771c8fbedb0 does not exist
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 886d8e6f-3129-4ceb-b6e7-2ca19fec5e6b does not exist
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:47:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:54 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:47:55
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', '.mgr']
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:55 np0005481065 nova_compute[260935]: 2025-10-11 09:47:55.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:47:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:47:55 np0005481065 nova_compute[260935]: 2025-10-11 09:47:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:55 np0005481065 nova_compute[260935]: 2025-10-11 09:47:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:55 np0005481065 nova_compute[260935]: 2025-10-11 09:47:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:47:55 np0005481065 podman[438943]: 2025-10-11 09:47:55.815505709 +0000 UTC m=+0.105955693 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 11 05:47:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:57 np0005481065 nova_compute[260935]: 2025-10-11 09:47:57.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:57 np0005481065 nova_compute[260935]: 2025-10-11 09:47:57.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:57 np0005481065 nova_compute[260935]: 2025-10-11 09:47:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:47:57 np0005481065 nova_compute[260935]: 2025-10-11 09:47:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.186 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.187 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.187 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.187 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.434 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.815 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.836 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.837 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:47:58 np0005481065 nova_compute[260935]: 2025-10-11 09:47:58.837 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:47:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:47:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:47:59 np0005481065 podman[438963]: 2025-10-11 09:47:59.814918933 +0000 UTC m=+0.100717286 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:47:59 np0005481065 podman[438964]: 2025-10-11 09:47:59.846388856 +0000 UTC m=+0.130930434 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:48:00 np0005481065 nova_compute[260935]: 2025-10-11 09:48:00.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:48:00 np0005481065 nova_compute[260935]: 2025-10-11 09:48:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:48:00 np0005481065 nova_compute[260935]: 2025-10-11 09:48:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:48:00 np0005481065 nova_compute[260935]: 2025-10-11 09:48:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:48:00 np0005481065 nova_compute[260935]: 2025-10-11 09:48:00.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:00 np0005481065 nova_compute[260935]: 2025-10-11 09:48:00.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:48:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:01 np0005481065 nova_compute[260935]: 2025-10-11 09:48:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:01 np0005481065 nova_compute[260935]: 2025-10-11 09:48:01.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:48:01 np0005481065 nova_compute[260935]: 2025-10-11 09:48:01.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:48:01 np0005481065 nova_compute[260935]: 2025-10-11 09:48:01.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:48:01 np0005481065 nova_compute[260935]: 2025-10-11 09:48:01.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:48:01 np0005481065 nova_compute[260935]: 2025-10-11 09:48:01.751 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:48:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:48:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572227953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.277 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.396 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.411 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.412 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.646 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.648 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2742MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.649 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.649 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.767 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:48:02 np0005481065 nova_compute[260935]: 2025-10-11 09:48:02.855 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:48:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:48:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017514983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:48:03 np0005481065 nova_compute[260935]: 2025-10-11 09:48:03.332 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:48:03 np0005481065 nova_compute[260935]: 2025-10-11 09:48:03.343 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:48:03 np0005481065 nova_compute[260935]: 2025-10-11 09:48:03.369 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:48:03 np0005481065 nova_compute[260935]: 2025-10-11 09:48:03.372 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:48:03 np0005481065 nova_compute[260935]: 2025-10-11 09:48:03.372 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:48:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:05 np0005481065 nova_compute[260935]: 2025-10-11 09:48:05.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:48:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:48:06 np0005481065 nova_compute[260935]: 2025-10-11 09:48:06.374 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:06 np0005481065 nova_compute[260935]: 2025-10-11 09:48:06.375 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:10 np0005481065 nova_compute[260935]: 2025-10-11 09:48:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:48:10 np0005481065 nova_compute[260935]: 2025-10-11 09:48:10.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:48:10 np0005481065 nova_compute[260935]: 2025-10-11 09:48:10.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:48:10 np0005481065 nova_compute[260935]: 2025-10-11 09:48:10.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:48:10 np0005481065 nova_compute[260935]: 2025-10-11 09:48:10.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:10 np0005481065 nova_compute[260935]: 2025-10-11 09:48:10.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:48:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:48:15.244 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:48:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:48:15.244 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:48:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:48:15.245 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:48:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:15 np0005481065 nova_compute[260935]: 2025-10-11 09:48:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:15 np0005481065 nova_compute[260935]: 2025-10-11 09:48:15.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:15 np0005481065 nova_compute[260935]: 2025-10-11 09:48:15.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:20 np0005481065 nova_compute[260935]: 2025-10-11 09:48:20.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:20 np0005481065 podman[439051]: 2025-10-11 09:48:20.783759496 +0000 UTC m=+0.080722665 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:48:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 05:48:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:48:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:48:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 05:48:25 np0005481065 nova_compute[260935]: 2025-10-11 09:48:25.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:48:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332784152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:48:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:48:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332784152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:48:26 np0005481065 podman[439073]: 2025-10-11 09:48:26.786056256 +0000 UTC m=+0.075308923 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:48:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 05:48:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 05:48:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:30 np0005481065 nova_compute[260935]: 2025-10-11 09:48:30.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:30 np0005481065 podman[439091]: 2025-10-11 09:48:30.792087658 +0000 UTC m=+0.086031644 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 05:48:30 np0005481065 podman[439092]: 2025-10-11 09:48:30.863762029 +0000 UTC m=+0.145678347 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:48:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 05:48:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 11 05:48:32 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 11 05:48:32 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 11 05:48:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 315 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 11 05:48:34 np0005481065 nova_compute[260935]: 2025-10-11 09:48:34.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 315 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 11 05:48:35 np0005481065 nova_compute[260935]: 2025-10-11 09:48:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 11 05:48:36 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 11 05:48:36 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 11 05:48:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Oct 11 05:48:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 11 05:48:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 11 05:48:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 11 05:48:39 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 11 05:48:40 np0005481065 nova_compute[260935]: 2025-10-11 09:48:40.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 11 05:48:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 11 05:48:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 11 05:48:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 11 05:48:44 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 11 05:48:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 638 B/s wr, 7 op/s
Oct 11 05:48:45 np0005481065 nova_compute[260935]: 2025-10-11 09:48:45.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:50 np0005481065 nova_compute[260935]: 2025-10-11 09:48:50.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:48:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:48:51 np0005481065 podman[439136]: 2025-10-11 09:48:51.811398379 +0000 UTC m=+0.094104160 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:48:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 11 05:48:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 11 05:48:51 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 11 05:48:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 11 05:48:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:48:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:48:55
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups']
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 05:48:55 np0005481065 nova_compute[260935]: 2025-10-11 09:48:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 994e123e-deaa-4124-9cd5-1d62f4091a52 does not exist
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ebd2b4d8-54d3-4b1a-af0a-24a9e0e69fb8 does not exist
Oct 11 05:48:55 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 19dbc0f7-65d7-401a-87d2-d0225e2f712a does not exist
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:48:55 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.655463413 +0000 UTC m=+0.075545860 container create e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:48:56 np0005481065 nova_compute[260935]: 2025-10-11 09:48:56.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:56 np0005481065 systemd[1]: Started libpod-conmon-e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4.scope.
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.623566268 +0000 UTC m=+0.043648725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:48:56 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.780663764 +0000 UTC m=+0.200746271 container init e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.79372016 +0000 UTC m=+0.213802607 container start e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.798193076 +0000 UTC m=+0.218275593 container attach e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 05:48:56 np0005481065 serene_carson[439444]: 167 167
Oct 11 05:48:56 np0005481065 systemd[1]: libpod-e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4.scope: Deactivated successfully.
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.803140535 +0000 UTC m=+0.223222952 container died e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:48:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6228eb02266966157ab3f44384072aa03a79e3fac4f746e0818473eaa277baef-merged.mount: Deactivated successfully.
Oct 11 05:48:56 np0005481065 podman[439428]: 2025-10-11 09:48:56.862006636 +0000 UTC m=+0.282089053 container remove e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:48:56 np0005481065 systemd[1]: libpod-conmon-e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4.scope: Deactivated successfully.
Oct 11 05:48:56 np0005481065 podman[439449]: 2025-10-11 09:48:56.935757284 +0000 UTC m=+0.092806544 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:48:57 np0005481065 podman[439484]: 2025-10-11 09:48:57.115372082 +0000 UTC m=+0.073567064 container create daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:48:57 np0005481065 systemd[1]: Started libpod-conmon-daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907.scope.
Oct 11 05:48:57 np0005481065 podman[439484]: 2025-10-11 09:48:57.082887191 +0000 UTC m=+0.041082253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:48:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:48:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:57 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:57 np0005481065 podman[439484]: 2025-10-11 09:48:57.235746777 +0000 UTC m=+0.193941819 container init daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:48:57 np0005481065 podman[439484]: 2025-10-11 09:48:57.251584221 +0000 UTC m=+0.209779223 container start daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:48:57 np0005481065 podman[439484]: 2025-10-11 09:48:57.255855821 +0000 UTC m=+0.214050833 container attach daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:48:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 05:48:57 np0005481065 nova_compute[260935]: 2025-10-11 09:48:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:57 np0005481065 nova_compute[260935]: 2025-10-11 09:48:57.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:48:58 np0005481065 recursing_nash[439500]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:48:58 np0005481065 recursing_nash[439500]: --> relative data size: 1.0
Oct 11 05:48:58 np0005481065 recursing_nash[439500]: --> All data devices are unavailable
Oct 11 05:48:58 np0005481065 systemd[1]: libpod-daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907.scope: Deactivated successfully.
Oct 11 05:48:58 np0005481065 podman[439529]: 2025-10-11 09:48:58.358300212 +0000 UTC m=+0.042225875 container died daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:48:58 np0005481065 systemd[1]: var-lib-containers-storage-overlay-17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e-merged.mount: Deactivated successfully.
Oct 11 05:48:58 np0005481065 podman[439529]: 2025-10-11 09:48:58.404306123 +0000 UTC m=+0.088231776 container remove daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:48:58 np0005481065 systemd[1]: libpod-conmon-daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907.scope: Deactivated successfully.
Oct 11 05:48:58 np0005481065 nova_compute[260935]: 2025-10-11 09:48:58.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:58 np0005481065 nova_compute[260935]: 2025-10-11 09:48:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:58 np0005481065 nova_compute[260935]: 2025-10-11 09:48:58.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.201 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.202 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.202 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.235983319 +0000 UTC m=+0.072326850 container create 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:48:59 np0005481065 systemd[1]: Started libpod-conmon-14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11.scope.
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.206374239 +0000 UTC m=+0.042717760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:48:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.346368576 +0000 UTC m=+0.182712147 container init 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.353588608 +0000 UTC m=+0.189932129 container start 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.35757077 +0000 UTC m=+0.193914301 container attach 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:48:59 np0005481065 dreamy_noyce[439703]: 167 167
Oct 11 05:48:59 np0005481065 systemd[1]: libpod-14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11.scope: Deactivated successfully.
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.363083664 +0000 UTC m=+0.199427155 container died 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.390 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:48:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-dd827c8ed89b4c273506e054a78c2819a832e41d7419d39c7cb265c0637b3772-merged.mount: Deactivated successfully.
Oct 11 05:48:59 np0005481065 podman[439687]: 2025-10-11 09:48:59.415763292 +0000 UTC m=+0.252106803 container remove 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 05:48:59 np0005481065 systemd[1]: libpod-conmon-14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11.scope: Deactivated successfully.
Oct 11 05:48:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 05:48:59 np0005481065 podman[439726]: 2025-10-11 09:48:59.687980038 +0000 UTC m=+0.076247920 container create d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.700 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.727 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:48:59 np0005481065 nova_compute[260935]: 2025-10-11 09:48:59.728 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:48:59 np0005481065 systemd[1]: Started libpod-conmon-d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4.scope.
Oct 11 05:48:59 np0005481065 podman[439726]: 2025-10-11 09:48:59.662086051 +0000 UTC m=+0.050353923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:48:59 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:48:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:59 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:48:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:48:59 np0005481065 podman[439726]: 2025-10-11 09:48:59.816081071 +0000 UTC m=+0.204349003 container init d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:48:59 np0005481065 podman[439726]: 2025-10-11 09:48:59.82603808 +0000 UTC m=+0.214305962 container start d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:48:59 np0005481065 podman[439726]: 2025-10-11 09:48:59.830066473 +0000 UTC m=+0.218334355 container attach d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:49:00 np0005481065 nova_compute[260935]: 2025-10-11 09:49:00.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]: {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:    "0": [
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:        {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "devices": [
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "/dev/loop3"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            ],
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_name": "ceph_lv0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_size": "21470642176",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "name": "ceph_lv0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "tags": {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cluster_name": "ceph",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.crush_device_class": "",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.encrypted": "0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osd_id": "0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.type": "block",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.vdo": "0"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            },
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "type": "block",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "vg_name": "ceph_vg0"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:        }
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:    ],
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:    "1": [
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:        {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "devices": [
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "/dev/loop4"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            ],
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_name": "ceph_lv1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_size": "21470642176",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "name": "ceph_lv1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "tags": {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cluster_name": "ceph",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.crush_device_class": "",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.encrypted": "0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osd_id": "1",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.type": "block",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.vdo": "0"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            },
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "type": "block",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "vg_name": "ceph_vg1"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:        }
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:    ],
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:    "2": [
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:        {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "devices": [
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "/dev/loop5"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            ],
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_name": "ceph_lv2",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_size": "21470642176",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "name": "ceph_lv2",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "tags": {
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.cluster_name": "ceph",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.crush_device_class": "",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.encrypted": "0",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osd_id": "2",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.type": "block",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:                "ceph.vdo": "0"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            },
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "type": "block",
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:            "vg_name": "ceph_vg2"
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:        }
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]:    ]
Oct 11 05:49:00 np0005481065 exciting_albattani[439742]: }
Oct 11 05:49:00 np0005481065 systemd[1]: libpod-d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4.scope: Deactivated successfully.
Oct 11 05:49:00 np0005481065 podman[439726]: 2025-10-11 09:49:00.731804084 +0000 UTC m=+1.120071976 container died d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:49:00 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081-merged.mount: Deactivated successfully.
Oct 11 05:49:00 np0005481065 podman[439726]: 2025-10-11 09:49:00.821391086 +0000 UTC m=+1.209658948 container remove d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:49:00 np0005481065 systemd[1]: libpod-conmon-d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4.scope: Deactivated successfully.
Oct 11 05:49:00 np0005481065 podman[439765]: 2025-10-11 09:49:00.981654251 +0000 UTC m=+0.103801162 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 11 05:49:01 np0005481065 podman[439805]: 2025-10-11 09:49:01.155514098 +0000 UTC m=+0.162053167 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.550422184 +0000 UTC m=+0.049048637 container create 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:49:01 np0005481065 systemd[1]: Started libpod-conmon-661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042.scope.
Oct 11 05:49:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.530243058 +0000 UTC m=+0.028869601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:49:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.667620011 +0000 UTC m=+0.166246504 container init 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.677473797 +0000 UTC m=+0.176100260 container start 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.682020395 +0000 UTC m=+0.180646918 container attach 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:49:01 np0005481065 reverent_ritchie[439969]: 167 167
Oct 11 05:49:01 np0005481065 systemd[1]: libpod-661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042.scope: Deactivated successfully.
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.689912956 +0000 UTC m=+0.188539449 container died 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 05:49:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-50fb790f2b6eddbb76614e4e1b91f53a6b528e7c7c0f3dc651fdaa1fcbb071e2-merged.mount: Deactivated successfully.
Oct 11 05:49:01 np0005481065 podman[439953]: 2025-10-11 09:49:01.727951493 +0000 UTC m=+0.226577956 container remove 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:49:01 np0005481065 systemd[1]: libpod-conmon-661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042.scope: Deactivated successfully.
Oct 11 05:49:01 np0005481065 podman[439992]: 2025-10-11 09:49:01.950381722 +0000 UTC m=+0.058813131 container create 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:49:02 np0005481065 systemd[1]: Started libpod-conmon-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope.
Oct 11 05:49:02 np0005481065 podman[439992]: 2025-10-11 09:49:01.920338009 +0000 UTC m=+0.028769478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:49:02 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:49:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:49:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:49:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:49:02 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:49:02 np0005481065 podman[439992]: 2025-10-11 09:49:02.068766522 +0000 UTC m=+0.177197931 container init 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:49:02 np0005481065 podman[439992]: 2025-10-11 09:49:02.082737454 +0000 UTC m=+0.191168833 container start 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:49:02 np0005481065 podman[439992]: 2025-10-11 09:49:02.086380306 +0000 UTC m=+0.194811755 container attach 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:49:02 np0005481065 nova_compute[260935]: 2025-10-11 09:49:02.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:49:02 np0005481065 nova_compute[260935]: 2025-10-11 09:49:02.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:49:02 np0005481065 nova_compute[260935]: 2025-10-11 09:49:02.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:49:02 np0005481065 nova_compute[260935]: 2025-10-11 09:49:02.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:49:02 np0005481065 nova_compute[260935]: 2025-10-11 09:49:02.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:49:02 np0005481065 nova_compute[260935]: 2025-10-11 09:49:02.740 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:49:03 np0005481065 epic_meitner[440008]: {
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "osd_id": 2,
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "type": "bluestore"
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:    },
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "osd_id": 0,
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "type": "bluestore"
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:    },
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "osd_id": 1,
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:        "type": "bluestore"
Oct 11 05:49:03 np0005481065 epic_meitner[440008]:    }
Oct 11 05:49:03 np0005481065 epic_meitner[440008]: }
Oct 11 05:49:03 np0005481065 systemd[1]: libpod-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope: Deactivated successfully.
Oct 11 05:49:03 np0005481065 conmon[440008]: conmon 0ba36d1101f17ab2385a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope/container/memory.events
Oct 11 05:49:03 np0005481065 podman[439992]: 2025-10-11 09:49:03.056109265 +0000 UTC m=+1.164540684 container died 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:49:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788-merged.mount: Deactivated successfully.
Oct 11 05:49:03 np0005481065 podman[439992]: 2025-10-11 09:49:03.137289522 +0000 UTC m=+1.245720941 container remove 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:49:03 np0005481065 systemd[1]: libpod-conmon-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope: Deactivated successfully.
Oct 11 05:49:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:49:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:49:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:49:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:49:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e1e7815e-7ad2-4242-99b0-242c0b0228c9 does not exist
Oct 11 05:49:03 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a65e1d47-7648-4214-b9e3-d5d97094ff42 does not exist
Oct 11 05:49:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:49:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/727297108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.282 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.397 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.397 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.405 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.405 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:49:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.675 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.677 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2757MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.678 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.678 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.816 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.817 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.817 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.818 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.818 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:49:03 np0005481065 nova_compute[260935]: 2025-10-11 09:49:03.918 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:49:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:49:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:49:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:49:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738161503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:49:04 np0005481065 nova_compute[260935]: 2025-10-11 09:49:04.395 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:49:04 np0005481065 nova_compute[260935]: 2025-10-11 09:49:04.402 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:49:04 np0005481065 nova_compute[260935]: 2025-10-11 09:49:04.429 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:49:04 np0005481065 nova_compute[260935]: 2025-10-11 09:49:04.432 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:49:04 np0005481065 nova_compute[260935]: 2025-10-11 09:49:04.433 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:49:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:05 np0005481065 nova_compute[260935]: 2025-10-11 09:49:05.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:49:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:49:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:08 np0005481065 nova_compute[260935]: 2025-10-11 09:49:08.435 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:49:08 np0005481065 nova_compute[260935]: 2025-10-11 09:49:08.435 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:49:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:10 np0005481065 nova_compute[260935]: 2025-10-11 09:49:10.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:49:10 np0005481065 nova_compute[260935]: 2025-10-11 09:49:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:49:10 np0005481065 nova_compute[260935]: 2025-10-11 09:49:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:49:10 np0005481065 nova_compute[260935]: 2025-10-11 09:49:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:49:10 np0005481065 nova_compute[260935]: 2025-10-11 09:49:10.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:49:10 np0005481065 nova_compute[260935]: 2025-10-11 09:49:10.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:49:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:49:15.245 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:49:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:49:15.246 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:49:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:49:15.246 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:49:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:15 np0005481065 nova_compute[260935]: 2025-10-11 09:49:15.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:15 np0005481065 nova_compute[260935]: 2025-10-11 09:49:15.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:17 np0005481065 nova_compute[260935]: 2025-10-11 09:49:17.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:49:17 np0005481065 nova_compute[260935]: 2025-10-11 09:49:17.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 05:49:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:20 np0005481065 nova_compute[260935]: 2025-10-11 09:49:20.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:22 np0005481065 podman[440148]: 2025-10-11 09:49:22.830706094 +0000 UTC m=+0.123423383 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:49:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:49:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:49:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:25 np0005481065 nova_compute[260935]: 2025-10-11 09:49:25.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:25 np0005481065 nova_compute[260935]: 2025-10-11 09:49:25.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362886063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:49:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:49:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362886063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:49:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:27 np0005481065 nova_compute[260935]: 2025-10-11 09:49:27.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:49:27 np0005481065 podman[440169]: 2025-10-11 09:49:27.805619307 +0000 UTC m=+0.098344250 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:49:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:30 np0005481065 nova_compute[260935]: 2025-10-11 09:49:30.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:30 np0005481065 nova_compute[260935]: 2025-10-11 09:49:30.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:30 np0005481065 nova_compute[260935]: 2025-10-11 09:49:30.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 05:49:30 np0005481065 nova_compute[260935]: 2025-10-11 09:49:30.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:49:30 np0005481065 nova_compute[260935]: 2025-10-11 09:49:30.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:30 np0005481065 nova_compute[260935]: 2025-10-11 09:49:30.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:49:31 np0005481065 podman[440189]: 2025-10-11 09:49:31.456205777 +0000 UTC m=+0.083154154 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:49:31 np0005481065 podman[440190]: 2025-10-11 09:49:31.483349638 +0000 UTC m=+0.104287886 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:49:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:34 np0005481065 nova_compute[260935]: 2025-10-11 09:49:34.721 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:49:34 np0005481065 nova_compute[260935]: 2025-10-11 09:49:34.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:49:34 np0005481065 nova_compute[260935]: 2025-10-11 09:49:34.722 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 05:49:34 np0005481065 nova_compute[260935]: 2025-10-11 09:49:34.751 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 05:49:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:35 np0005481065 nova_compute[260935]: 2025-10-11 09:49:35.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:35 np0005481065 nova_compute[260935]: 2025-10-11 09:49:35.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.379088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179379153, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1874, "num_deletes": 256, "total_data_size": 3092374, "memory_usage": 3139008, "flush_reason": "Manual Compaction"}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179399957, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3028674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66160, "largest_seqno": 68033, "table_properties": {"data_size": 3019861, "index_size": 5498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17811, "raw_average_key_size": 20, "raw_value_size": 3002412, "raw_average_value_size": 3439, "num_data_blocks": 244, "num_entries": 873, "num_filter_entries": 873, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175980, "oldest_key_time": 1760175980, "file_creation_time": 1760176179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 20953 microseconds, and 12078 cpu microseconds.
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.400036) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3028674 bytes OK
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.400074) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.402184) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.402208) EVENT_LOG_v1 {"time_micros": 1760176179402199, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.402240) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3084376, prev total WAL file size 3084376, number of live WAL files 2.
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.404130) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2957KB)], [158(9624KB)]
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179404185, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12883917, "oldest_snapshot_seqno": -1}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8528 keys, 11146509 bytes, temperature: kUnknown
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179464424, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11146509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11090651, "index_size": 33409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 223567, "raw_average_key_size": 26, "raw_value_size": 10939723, "raw_average_value_size": 1282, "num_data_blocks": 1297, "num_entries": 8528, "num_filter_entries": 8528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.464871) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11146509 bytes
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.466775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.4 rd, 184.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.4 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 9053, records dropped: 525 output_compression: NoCompression
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.466805) EVENT_LOG_v1 {"time_micros": 1760176179466791, "job": 98, "event": "compaction_finished", "compaction_time_micros": 60362, "compaction_time_cpu_micros": 40385, "output_level": 6, "num_output_files": 1, "total_output_size": 11146509, "num_input_records": 9053, "num_output_records": 8528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179467847, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179471214, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.403992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:49:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:40 np0005481065 nova_compute[260935]: 2025-10-11 09:49:40.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:45 np0005481065 nova_compute[260935]: 2025-10-11 09:49:45.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:49:45 np0005481065 nova_compute[260935]: 2025-10-11 09:49:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:45 np0005481065 nova_compute[260935]: 2025-10-11 09:49:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 05:49:45 np0005481065 nova_compute[260935]: 2025-10-11 09:49:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:49:45 np0005481065 nova_compute[260935]: 2025-10-11 09:49:45.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:49:45 np0005481065 nova_compute[260935]: 2025-10-11 09:49:45.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:50 np0005481065 nova_compute[260935]: 2025-10-11 09:49:50.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:49:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:53 np0005481065 podman[440237]: 2025-10-11 09:49:53.809733365 +0000 UTC m=+0.096725944 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 05:49:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:49:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:49:55
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.log', '.mgr']
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:49:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:49:55 np0005481065 nova_compute[260935]: 2025-10-11 09:49:55.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:49:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:58 np0005481065 nova_compute[260935]: 2025-10-11 09:49:58.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:49:58 np0005481065 nova_compute[260935]: 2025-10-11 09:49:58.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:49:58 np0005481065 nova_compute[260935]: 2025-10-11 09:49:58.733 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:49:58 np0005481065 podman[440257]: 2025-10-11 09:49:58.783204918 +0000 UTC m=+0.084560623 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:49:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:49:59 np0005481065 nova_compute[260935]: 2025-10-11 09:49:59.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:49:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:00 np0005481065 nova_compute[260935]: 2025-10-11 09:50:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:00 np0005481065 nova_compute[260935]: 2025-10-11 09:50:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:00 np0005481065 nova_compute[260935]: 2025-10-11 09:50:00.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:50:00 np0005481065 nova_compute[260935]: 2025-10-11 09:50:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:01 np0005481065 podman[440279]: 2025-10-11 09:50:01.796121903 +0000 UTC m=+0.083832992 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 05:50:01 np0005481065 podman[440280]: 2025-10-11 09:50:01.838065519 +0000 UTC m=+0.125901572 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 05:50:01 np0005481065 nova_compute[260935]: 2025-10-11 09:50:01.984 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:50:01 np0005481065 nova_compute[260935]: 2025-10-11 09:50:01.984 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:50:01 np0005481065 nova_compute[260935]: 2025-10-11 09:50:01.985 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:50:02 np0005481065 nova_compute[260935]: 2025-10-11 09:50:02.975 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:50:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.068 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.091 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.091 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:50:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev c0d807ee-375b-4ffc-aad9-0a8ad0116b00 does not exist
Oct 11 05:50:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7b95ad51-aef0-499b-a0fc-9226c87d09b5 does not exist
Oct 11 05:50:04 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cb1e8f90-a617-41ec-842a-2eb2eda4d23b does not exist
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:50:04 np0005481065 nova_compute[260935]: 2025-10-11 09:50:04.727 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:50:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:50:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962598057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.210 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.305 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.306 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.306 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.310 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.311 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.316 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.317 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.323352173 +0000 UTC m=+0.067929377 container create ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:50:05 np0005481065 systemd[1]: Started libpod-conmon-ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb.scope.
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.293502645 +0000 UTC m=+0.038079919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:50:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.436001142 +0000 UTC m=+0.180578356 container init ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.447714721 +0000 UTC m=+0.192291915 container start ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.454198473 +0000 UTC m=+0.198775667 container attach ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:50:05 np0005481065 condescending_beaver[440634]: 167 167
Oct 11 05:50:05 np0005481065 systemd[1]: libpod-ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb.scope: Deactivated successfully.
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.45658528 +0000 UTC m=+0.201162524 container died ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:50:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-8e19b3c04977b61bd9f7ef083a7db9fde3c07fb6bf96dcb18f2036f75cd303e4-merged.mount: Deactivated successfully.
Oct 11 05:50:05 np0005481065 podman[440617]: 2025-10-11 09:50:05.532874169 +0000 UTC m=+0.277451363 container remove ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:50:05 np0005481065 systemd[1]: libpod-conmon-ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb.scope: Deactivated successfully.
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.573 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2798MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.577 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.662 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.663 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.663 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.664 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.664 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.683 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.705 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.705 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.721 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:50:05 np0005481065 podman[440657]: 2025-10-11 09:50:05.733774264 +0000 UTC m=+0.059080468 container create fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.741 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:50:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:50:05 np0005481065 systemd[1]: Started libpod-conmon-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope.
Oct 11 05:50:05 np0005481065 podman[440657]: 2025-10-11 09:50:05.701702934 +0000 UTC m=+0.027009058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:50:05 np0005481065 nova_compute[260935]: 2025-10-11 09:50:05.804 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:50:05 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:50:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:05 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:05 np0005481065 podman[440657]: 2025-10-11 09:50:05.836220067 +0000 UTC m=+0.161526241 container init fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:50:05 np0005481065 podman[440657]: 2025-10-11 09:50:05.85556175 +0000 UTC m=+0.180867834 container start fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 11 05:50:05 np0005481065 podman[440657]: 2025-10-11 09:50:05.859191892 +0000 UTC m=+0.184498066 container attach fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:50:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:50:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231980031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:50:06 np0005481065 nova_compute[260935]: 2025-10-11 09:50:06.289 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:50:06 np0005481065 nova_compute[260935]: 2025-10-11 09:50:06.296 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:50:06 np0005481065 nova_compute[260935]: 2025-10-11 09:50:06.312 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:50:06 np0005481065 nova_compute[260935]: 2025-10-11 09:50:06.314 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:50:06 np0005481065 nova_compute[260935]: 2025-10-11 09:50:06.314 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:50:06 np0005481065 magical_bouman[440674]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:50:06 np0005481065 magical_bouman[440674]: --> relative data size: 1.0
Oct 11 05:50:06 np0005481065 magical_bouman[440674]: --> All data devices are unavailable
Oct 11 05:50:07 np0005481065 systemd[1]: libpod-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope: Deactivated successfully.
Oct 11 05:50:07 np0005481065 podman[440657]: 2025-10-11 09:50:07.010676248 +0000 UTC m=+1.335982352 container died fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:50:07 np0005481065 systemd[1]: libpod-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope: Consumed 1.103s CPU time.
Oct 11 05:50:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a-merged.mount: Deactivated successfully.
Oct 11 05:50:07 np0005481065 podman[440657]: 2025-10-11 09:50:07.081897706 +0000 UTC m=+1.407203780 container remove fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:50:07 np0005481065 systemd[1]: libpod-conmon-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope: Deactivated successfully.
Oct 11 05:50:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:50:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1382 writes, 6264 keys, 1382 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s#012Interval WAL: 1382 writes, 1382 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     74.5      1.12              0.34        49    0.023       0      0       0.0       0.0#012  L6      1/0   10.63 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8    155.0    131.3      3.06              1.60        48    0.064    316K    25K       0.0       0.0#012 Sum      1/0   10.63 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8    113.6    116.1      4.17              1.93        97    0.043    316K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2    151.4    155.0      0.37              0.21        10    0.037     43K   2578       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    155.0    131.3      3.06              1.60        48    0.064    316K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     74.8      1.11              0.34        48    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.081, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.47 GB write, 0.08 MB/s write, 0.46 GB read, 0.08 MB/s read, 4.2 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 54.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000506 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3547,52.30 MB,17.2031%) FilterBlock(98,887.42 KB,0.285073%) IndexBlock(98,1.40 MB,0.459817%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 05:50:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:07 np0005481065 podman[440875]: 2025-10-11 09:50:07.841054518 +0000 UTC m=+0.050139767 container create 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:50:07 np0005481065 systemd[1]: Started libpod-conmon-9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036.scope.
Oct 11 05:50:07 np0005481065 podman[440875]: 2025-10-11 09:50:07.822642432 +0000 UTC m=+0.031727721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:50:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:50:07 np0005481065 podman[440875]: 2025-10-11 09:50:07.952551895 +0000 UTC m=+0.161637234 container init 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:50:07 np0005481065 podman[440875]: 2025-10-11 09:50:07.964887531 +0000 UTC m=+0.173972790 container start 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 05:50:07 np0005481065 podman[440875]: 2025-10-11 09:50:07.968352119 +0000 UTC m=+0.177437458 container attach 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:50:07 np0005481065 silly_merkle[440891]: 167 167
Oct 11 05:50:07 np0005481065 systemd[1]: libpod-9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036.scope: Deactivated successfully.
Oct 11 05:50:07 np0005481065 podman[440875]: 2025-10-11 09:50:07.970003255 +0000 UTC m=+0.179088514 container died 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:50:07 np0005481065 systemd[1]: var-lib-containers-storage-overlay-2ff9747bc6ffbe31b7265b2c5253b742bc222170d7a8209896fbb82abac5700d-merged.mount: Deactivated successfully.
Oct 11 05:50:08 np0005481065 podman[440875]: 2025-10-11 09:50:08.005791999 +0000 UTC m=+0.214877248 container remove 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:50:08 np0005481065 systemd[1]: libpod-conmon-9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036.scope: Deactivated successfully.
Oct 11 05:50:08 np0005481065 podman[440915]: 2025-10-11 09:50:08.22796325 +0000 UTC m=+0.070735225 container create 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:50:08 np0005481065 systemd[1]: Started libpod-conmon-6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e.scope.
Oct 11 05:50:08 np0005481065 podman[440915]: 2025-10-11 09:50:08.200777788 +0000 UTC m=+0.043549803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:50:08 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:50:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:08 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:08 np0005481065 podman[440915]: 2025-10-11 09:50:08.345794715 +0000 UTC m=+0.188566750 container init 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:50:08 np0005481065 podman[440915]: 2025-10-11 09:50:08.361724982 +0000 UTC m=+0.204496977 container start 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:50:08 np0005481065 podman[440915]: 2025-10-11 09:50:08.365523788 +0000 UTC m=+0.208295843 container attach 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]: {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:    "0": [
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:        {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "devices": [
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "/dev/loop3"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            ],
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_name": "ceph_lv0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_size": "21470642176",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "name": "ceph_lv0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "tags": {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cluster_name": "ceph",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.crush_device_class": "",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.encrypted": "0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osd_id": "0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.type": "block",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.vdo": "0"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            },
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "type": "block",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "vg_name": "ceph_vg0"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:        }
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:    ],
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:    "1": [
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:        {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "devices": [
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "/dev/loop4"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            ],
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_name": "ceph_lv1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_size": "21470642176",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "name": "ceph_lv1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "tags": {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cluster_name": "ceph",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.crush_device_class": "",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.encrypted": "0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osd_id": "1",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.type": "block",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.vdo": "0"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            },
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "type": "block",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "vg_name": "ceph_vg1"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:        }
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:    ],
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:    "2": [
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:        {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "devices": [
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "/dev/loop5"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            ],
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_name": "ceph_lv2",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_size": "21470642176",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "name": "ceph_lv2",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "tags": {
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.cluster_name": "ceph",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.crush_device_class": "",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.encrypted": "0",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osd_id": "2",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.type": "block",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:                "ceph.vdo": "0"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            },
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "type": "block",
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:            "vg_name": "ceph_vg2"
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:        }
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]:    ]
Oct 11 05:50:09 np0005481065 distracted_antonelli[440932]: }
Oct 11 05:50:09 np0005481065 systemd[1]: libpod-6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e.scope: Deactivated successfully.
Oct 11 05:50:09 np0005481065 podman[440915]: 2025-10-11 09:50:09.166583505 +0000 UTC m=+1.009355540 container died 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:50:09 np0005481065 systemd[1]: var-lib-containers-storage-overlay-1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec-merged.mount: Deactivated successfully.
Oct 11 05:50:09 np0005481065 podman[440915]: 2025-10-11 09:50:09.241484656 +0000 UTC m=+1.084256631 container remove 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:50:09 np0005481065 systemd[1]: libpod-conmon-6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e.scope: Deactivated successfully.
Oct 11 05:50:09 np0005481065 nova_compute[260935]: 2025-10-11 09:50:09.315 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:09 np0005481065 nova_compute[260935]: 2025-10-11 09:50:09.316 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.169950277 +0000 UTC m=+0.069870900 container create d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:50:10 np0005481065 systemd[1]: Started libpod-conmon-d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b.scope.
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.139079311 +0000 UTC m=+0.039000034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:50:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.280878479 +0000 UTC m=+0.180799192 container init d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.292102974 +0000 UTC m=+0.192023617 container start d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.296535808 +0000 UTC m=+0.196456511 container attach d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:50:10 np0005481065 beautiful_brahmagupta[441114]: 167 167
Oct 11 05:50:10 np0005481065 systemd[1]: libpod-d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b.scope: Deactivated successfully.
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.299998645 +0000 UTC m=+0.199919338 container died d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:50:10 np0005481065 systemd[1]: var-lib-containers-storage-overlay-50fb37e7684ed608799ead6a500d547c67d88b8c7dedb8b5fcd37e5d4a9d38c6-merged.mount: Deactivated successfully.
Oct 11 05:50:10 np0005481065 podman[441097]: 2025-10-11 09:50:10.348059133 +0000 UTC m=+0.247979786 container remove d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:50:10 np0005481065 systemd[1]: libpod-conmon-d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b.scope: Deactivated successfully.
Oct 11 05:50:10 np0005481065 podman[441140]: 2025-10-11 09:50:10.579299519 +0000 UTC m=+0.074371047 container create d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:50:10 np0005481065 systemd[1]: Started libpod-conmon-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope.
Oct 11 05:50:10 np0005481065 podman[441140]: 2025-10-11 09:50:10.550662846 +0000 UTC m=+0.045734454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:50:10 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:50:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:10 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:50:10 np0005481065 podman[441140]: 2025-10-11 09:50:10.685032085 +0000 UTC m=+0.180103653 container init d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:50:10 np0005481065 podman[441140]: 2025-10-11 09:50:10.701430635 +0000 UTC m=+0.196502173 container start d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:50:10 np0005481065 podman[441140]: 2025-10-11 09:50:10.705248252 +0000 UTC m=+0.200319790 container attach d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:50:10 np0005481065 nova_compute[260935]: 2025-10-11 09:50:10.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:50:10 np0005481065 nova_compute[260935]: 2025-10-11 09:50:10.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:50:10 np0005481065 nova_compute[260935]: 2025-10-11 09:50:10.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:50:10 np0005481065 nova_compute[260935]: 2025-10-11 09:50:10.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:50:10 np0005481065 nova_compute[260935]: 2025-10-11 09:50:10.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:10 np0005481065 nova_compute[260935]: 2025-10-11 09:50:10.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:50:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]: {
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "osd_id": 2,
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "type": "bluestore"
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:    },
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "osd_id": 0,
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "type": "bluestore"
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:    },
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "osd_id": 1,
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:        "type": "bluestore"
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]:    }
Oct 11 05:50:11 np0005481065 sleepy_hermann[441156]: }
Oct 11 05:50:11 np0005481065 systemd[1]: libpod-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope: Deactivated successfully.
Oct 11 05:50:11 np0005481065 podman[441140]: 2025-10-11 09:50:11.857408637 +0000 UTC m=+1.352480155 container died d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:50:11 np0005481065 systemd[1]: libpod-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope: Consumed 1.163s CPU time.
Oct 11 05:50:11 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d-merged.mount: Deactivated successfully.
Oct 11 05:50:11 np0005481065 podman[441140]: 2025-10-11 09:50:11.934075448 +0000 UTC m=+1.429146996 container remove d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:50:11 np0005481065 systemd[1]: libpod-conmon-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope: Deactivated successfully.
Oct 11 05:50:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:50:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:50:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:50:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:50:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev abf931b1-e0a8-4202-a71d-a9ca3bf90ed8 does not exist
Oct 11 05:50:12 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 86cd3a40-2c7e-4d3b-b99e-32c28761d510 does not exist
Oct 11 05:50:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:50:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:50:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:50:15.246 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:50:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:50:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:50:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:50:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:50:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:15 np0005481065 nova_compute[260935]: 2025-10-11 09:50:15.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:15 np0005481065 nova_compute[260935]: 2025-10-11 09:50:15.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:50:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:17 np0005481065 nova_compute[260935]: 2025-10-11 09:50:17.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:20 np0005481065 nova_compute[260935]: 2025-10-11 09:50:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:24 np0005481065 podman[441251]: 2025-10-11 09:50:24.795803986 +0000 UTC m=+0.093056441 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 11 05:50:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:50:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:50:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:25 np0005481065 nova_compute[260935]: 2025-10-11 09:50:25.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:25 np0005481065 nova_compute[260935]: 2025-10-11 09:50:25.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:50:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974663407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:50:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:50:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974663407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:50:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:29 np0005481065 podman[441270]: 2025-10-11 09:50:29.765879134 +0000 UTC m=+0.073025310 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:50:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:30 np0005481065 nova_compute[260935]: 2025-10-11 09:50:30.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:50:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:32 np0005481065 podman[441290]: 2025-10-11 09:50:32.805323043 +0000 UTC m=+0.101079196 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:50:32 np0005481065 podman[441291]: 2025-10-11 09:50:32.844922883 +0000 UTC m=+0.139884934 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:50:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:35 np0005481065 nova_compute[260935]: 2025-10-11 09:50:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:35 np0005481065 nova_compute[260935]: 2025-10-11 09:50:35.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 255 B/s wr, 5 op/s
Oct 11 05:50:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 11 05:50:38 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 11 05:50:38 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 11 05:50:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Oct 11 05:50:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 11 05:50:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 11 05:50:40 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 11 05:50:40 np0005481065 nova_compute[260935]: 2025-10-11 09:50:40.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct 11 05:50:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 11 05:50:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 38 op/s
Oct 11 05:50:45 np0005481065 nova_compute[260935]: 2025-10-11 09:50:45.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.6 MiB/s wr, 27 op/s
Oct 11 05:50:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 25 op/s
Oct 11 05:50:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:50 np0005481065 nova_compute[260935]: 2025-10-11 09:50:50.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 MiB/s wr, 22 op/s
Oct 11 05:50:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.1 MiB/s wr, 21 op/s
Oct 11 05:50:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:50:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:50:55
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'backups', 'default.rgw.meta']
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:50:55 np0005481065 nova_compute[260935]: 2025-10-11 09:50:55.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:55 np0005481065 nova_compute[260935]: 2025-10-11 09:50:55.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:50:55 np0005481065 podman[441336]: 2025-10-11 09:50:55.79979438 +0000 UTC m=+0.091526908 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 05:50:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:50:59 np0005481065 nova_compute[260935]: 2025-10-11 09:50:59.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:50:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:00 np0005481065 nova_compute[260935]: 2025-10-11 09:51:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:00 np0005481065 nova_compute[260935]: 2025-10-11 09:51:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:00 np0005481065 nova_compute[260935]: 2025-10-11 09:51:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:00 np0005481065 nova_compute[260935]: 2025-10-11 09:51:00.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:51:00 np0005481065 nova_compute[260935]: 2025-10-11 09:51:00.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:00 np0005481065 podman[441356]: 2025-10-11 09:51:00.843474454 +0000 UTC m=+0.126414207 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:51:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:51:02.206 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:02 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:51:02.208 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.993 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.994 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.994 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:51:02 np0005481065 nova_compute[260935]: 2025-10-11 09:51:02.995 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:51:03 np0005481065 nova_compute[260935]: 2025-10-11 09:51:03.190 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:51:03 np0005481065 nova_compute[260935]: 2025-10-11 09:51:03.594 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:51:03 np0005481065 nova_compute[260935]: 2025-10-11 09:51:03.608 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:51:03 np0005481065 nova_compute[260935]: 2025-10-11 09:51:03.608 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:51:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:03 np0005481065 podman[441377]: 2025-10-11 09:51:03.757247888 +0000 UTC m=+0.061485525 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:51:03 np0005481065 podman[441378]: 2025-10-11 09:51:03.869074365 +0000 UTC m=+0.164505196 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:51:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:51:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.786 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.787 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.788 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.789 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:51:05 np0005481065 nova_compute[260935]: 2025-10-11 09:51:05.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:51:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433148840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:51:06 np0005481065 nova_compute[260935]: 2025-10-11 09:51:06.327 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:51:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.708 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.709 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.709 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.715 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.716 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.722 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.723 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.941 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.942 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2805MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:51:07 np0005481065 nova_compute[260935]: 2025-10-11 09:51:07.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.182 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.183 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.184 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.184 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.185 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.410 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:51:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:51:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495361142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.890 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.899 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.923 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.927 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:51:08 np0005481065 nova_compute[260935]: 2025-10-11 09:51:08.928 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:51:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 11 05:51:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 11 05:51:09 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 11 05:51:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:10 np0005481065 nova_compute[260935]: 2025-10-11 09:51:10.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:51:10 np0005481065 nova_compute[260935]: 2025-10-11 09:51:10.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:51:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:11 np0005481065 nova_compute[260935]: 2025-10-11 09:51:11.931 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:51:11 np0005481065 nova_compute[260935]: 2025-10-11 09:51:11.932 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 05:51:12 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:51:12.211 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 05:51:12 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:51:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7ddfaec9-6f6a-4fd1-b719-56bdeb8606aa does not exist
Oct 11 05:51:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 58bbc594-93ed-4a2a-bef5-145b3ce3bba2 does not exist
Oct 11 05:51:13 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f8e39f37-7eb7-47b5-b4d5-5fb82a7cbf62 does not exist
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:51:13 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:51:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.077213816 +0000 UTC m=+0.079287115 container create 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.041183356 +0000 UTC m=+0.043256745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:51:14 np0005481065 systemd[1]: Started libpod-conmon-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope.
Oct 11 05:51:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.207742157 +0000 UTC m=+0.209815516 container init 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.221090242 +0000 UTC m=+0.223163551 container start 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.226358309 +0000 UTC m=+0.228431658 container attach 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:51:14 np0005481065 systemd[1]: libpod-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope: Deactivated successfully.
Oct 11 05:51:14 np0005481065 gifted_clarke[441760]: 167 167
Oct 11 05:51:14 np0005481065 conmon[441760]: conmon 089a7947feb3c42a2bad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope/container/memory.events
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.233710486 +0000 UTC m=+0.235783795 container died 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 05:51:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-47c5e9d0cd1b7ad68ceeac58c7257492ffabc369e7eceb98cb873064cb9c0194-merged.mount: Deactivated successfully.
Oct 11 05:51:14 np0005481065 podman[441744]: 2025-10-11 09:51:14.300382936 +0000 UTC m=+0.302456235 container remove 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:51:14 np0005481065 systemd[1]: libpod-conmon-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope: Deactivated successfully.
Oct 11 05:51:14 np0005481065 podman[441784]: 2025-10-11 09:51:14.586980634 +0000 UTC m=+0.075711375 container create 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:51:14 np0005481065 podman[441784]: 2025-10-11 09:51:14.555144811 +0000 UTC m=+0.043875602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:51:14 np0005481065 systemd[1]: Started libpod-conmon-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope.
Oct 11 05:51:14 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:51:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:14 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:14 np0005481065 podman[441784]: 2025-10-11 09:51:14.720567811 +0000 UTC m=+0.209298602 container init 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:51:14 np0005481065 podman[441784]: 2025-10-11 09:51:14.736897829 +0000 UTC m=+0.225628560 container start 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:51:14 np0005481065 podman[441784]: 2025-10-11 09:51:14.741781956 +0000 UTC m=+0.230512757 container attach 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:51:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 11 05:51:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 11 05:51:14 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 11 05:51:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:51:15.247 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:51:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:51:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:51:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:51:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:51:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 05:51:15 np0005481065 nova_compute[260935]: 2025-10-11 09:51:15.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:51:15 np0005481065 nova_compute[260935]: 2025-10-11 09:51:15.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:51:15 np0005481065 pedantic_roentgen[441800]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:51:15 np0005481065 pedantic_roentgen[441800]: --> relative data size: 1.0
Oct 11 05:51:15 np0005481065 pedantic_roentgen[441800]: --> All data devices are unavailable
Oct 11 05:51:16 np0005481065 systemd[1]: libpod-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope: Deactivated successfully.
Oct 11 05:51:16 np0005481065 systemd[1]: libpod-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope: Consumed 1.238s CPU time.
Oct 11 05:51:16 np0005481065 podman[441784]: 2025-10-11 09:51:16.036987013 +0000 UTC m=+1.525717734 container died 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 05:51:16 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9-merged.mount: Deactivated successfully.
Oct 11 05:51:16 np0005481065 podman[441784]: 2025-10-11 09:51:16.112591924 +0000 UTC m=+1.601322635 container remove 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 05:51:16 np0005481065 systemd[1]: libpod-conmon-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope: Deactivated successfully.
Oct 11 05:51:16 np0005481065 podman[441981]: 2025-10-11 09:51:16.956349968 +0000 UTC m=+0.049017116 container create 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:51:16 np0005481065 systemd[1]: Started libpod-conmon-4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e.scope.
Oct 11 05:51:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:51:17 np0005481065 podman[441981]: 2025-10-11 09:51:16.938966341 +0000 UTC m=+0.031633489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:51:17 np0005481065 podman[441981]: 2025-10-11 09:51:17.04946767 +0000 UTC m=+0.142134848 container init 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:51:17 np0005481065 podman[441981]: 2025-10-11 09:51:17.058665408 +0000 UTC m=+0.151332566 container start 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:51:17 np0005481065 podman[441981]: 2025-10-11 09:51:17.06267275 +0000 UTC m=+0.155339978 container attach 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:51:17 np0005481065 clever_burnell[441997]: 167 167
Oct 11 05:51:17 np0005481065 systemd[1]: libpod-4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e.scope: Deactivated successfully.
Oct 11 05:51:17 np0005481065 podman[441981]: 2025-10-11 09:51:17.06550292 +0000 UTC m=+0.158170078 container died 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:51:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-da54ef9b4989cfd379d393d7410ead6418b7747a9bea79422cc6e41a2dbb44c1-merged.mount: Deactivated successfully.
Oct 11 05:51:17 np0005481065 podman[441981]: 2025-10-11 09:51:17.117354254 +0000 UTC m=+0.210021412 container remove 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:51:17 np0005481065 systemd[1]: libpod-conmon-4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e.scope: Deactivated successfully.
Oct 11 05:51:17 np0005481065 podman[442021]: 2025-10-11 09:51:17.350125943 +0000 UTC m=+0.086429415 container create 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:51:17 np0005481065 systemd[1]: Started libpod-conmon-76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56.scope.
Oct 11 05:51:17 np0005481065 podman[442021]: 2025-10-11 09:51:17.320937884 +0000 UTC m=+0.057241436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:51:17 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:51:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:17 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:17 np0005481065 podman[442021]: 2025-10-11 09:51:17.466576579 +0000 UTC m=+0.202880071 container init 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 05:51:17 np0005481065 podman[442021]: 2025-10-11 09:51:17.479119941 +0000 UTC m=+0.215423433 container start 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 05:51:17 np0005481065 podman[442021]: 2025-10-11 09:51:17.484869362 +0000 UTC m=+0.221172864 container attach 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 05:51:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]: {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:    "0": [
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:        {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "devices": [
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "/dev/loop3"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            ],
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_name": "ceph_lv0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_size": "21470642176",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "name": "ceph_lv0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "tags": {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cluster_name": "ceph",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.crush_device_class": "",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.encrypted": "0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osd_id": "0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.type": "block",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.vdo": "0"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            },
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "type": "block",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "vg_name": "ceph_vg0"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:        }
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:    ],
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:    "1": [
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:        {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "devices": [
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "/dev/loop4"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            ],
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_name": "ceph_lv1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_size": "21470642176",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "name": "ceph_lv1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "tags": {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cluster_name": "ceph",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.crush_device_class": "",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.encrypted": "0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osd_id": "1",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.type": "block",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.vdo": "0"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            },
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "type": "block",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "vg_name": "ceph_vg1"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:        }
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:    ],
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:    "2": [
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:        {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "devices": [
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "/dev/loop5"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            ],
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_name": "ceph_lv2",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_size": "21470642176",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "name": "ceph_lv2",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "tags": {
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.cluster_name": "ceph",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.crush_device_class": "",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.encrypted": "0",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osd_id": "2",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.type": "block",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:                "ceph.vdo": "0"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            },
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "type": "block",
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:            "vg_name": "ceph_vg2"
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:        }
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]:    ]
Oct 11 05:51:18 np0005481065 vigorous_lewin[442037]: }
Oct 11 05:51:18 np0005481065 systemd[1]: libpod-76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56.scope: Deactivated successfully.
Oct 11 05:51:18 np0005481065 podman[442021]: 2025-10-11 09:51:18.263705797 +0000 UTC m=+1.000009289 container died 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:51:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595-merged.mount: Deactivated successfully.
Oct 11 05:51:18 np0005481065 podman[442021]: 2025-10-11 09:51:18.345377737 +0000 UTC m=+1.081681209 container remove 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:51:18 np0005481065 systemd[1]: libpod-conmon-76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56.scope: Deactivated successfully.
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.284866428 +0000 UTC m=+0.072740401 container create 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:51:19 np0005481065 systemd[1]: Started libpod-conmon-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope.
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.254567858 +0000 UTC m=+0.042441881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:51:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.387567218 +0000 UTC m=+0.175441231 container init 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.398415223 +0000 UTC m=+0.186289166 container start 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.402200579 +0000 UTC m=+0.190074612 container attach 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:51:19 np0005481065 elegant_elgamal[442214]: 167 167
Oct 11 05:51:19 np0005481065 systemd[1]: libpod-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope: Deactivated successfully.
Oct 11 05:51:19 np0005481065 conmon[442214]: conmon 6775b865b91b457560d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope/container/memory.events
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.409439372 +0000 UTC m=+0.197313335 container died 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:51:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-f801ad58c07835a47a79cbdef0410f597165a08d450765ef8d96dd62d7f741d6-merged.mount: Deactivated successfully.
Oct 11 05:51:19 np0005481065 podman[442198]: 2025-10-11 09:51:19.461620735 +0000 UTC m=+0.249494708 container remove 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:51:19 np0005481065 systemd[1]: libpod-conmon-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope: Deactivated successfully.
Oct 11 05:51:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 05:51:19 np0005481065 podman[442238]: 2025-10-11 09:51:19.710549977 +0000 UTC m=+0.076900158 container create bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:51:19 np0005481065 systemd[1]: Started libpod-conmon-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope.
Oct 11 05:51:19 np0005481065 podman[442238]: 2025-10-11 09:51:19.682082349 +0000 UTC m=+0.048432580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:51:19 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:51:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:19 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:51:19 np0005481065 podman[442238]: 2025-10-11 09:51:19.842610332 +0000 UTC m=+0.208960553 container init bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:51:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:19 np0005481065 podman[442238]: 2025-10-11 09:51:19.862930961 +0000 UTC m=+0.229281142 container start bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:51:19 np0005481065 podman[442238]: 2025-10-11 09:51:19.866985185 +0000 UTC m=+0.233335426 container attach bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:51:20 np0005481065 nova_compute[260935]: 2025-10-11 09:51:20.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:20 np0005481065 nova_compute[260935]: 2025-10-11 09:51:20.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]: {
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "osd_id": 2,
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "type": "bluestore"
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:    },
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "osd_id": 0,
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "type": "bluestore"
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:    },
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "osd_id": 1,
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:        "type": "bluestore"
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]:    }
Oct 11 05:51:20 np0005481065 friendly_boyd[442255]: }
Oct 11 05:51:20 np0005481065 systemd[1]: libpod-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope: Deactivated successfully.
Oct 11 05:51:20 np0005481065 podman[442238]: 2025-10-11 09:51:20.929887146 +0000 UTC m=+1.296237337 container died bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:51:20 np0005481065 systemd[1]: libpod-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope: Consumed 1.075s CPU time.
Oct 11 05:51:20 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92-merged.mount: Deactivated successfully.
Oct 11 05:51:20 np0005481065 podman[442238]: 2025-10-11 09:51:20.993900881 +0000 UTC m=+1.360251032 container remove bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:51:21 np0005481065 systemd[1]: libpod-conmon-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope: Deactivated successfully.
Oct 11 05:51:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:51:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:51:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:51:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:51:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 6bc55b3f-a1e0-4f26-833a-8d832366089e does not exist
Oct 11 05:51:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8002064a-8e50-4344-86be-de662b5b0769 does not exist
Oct 11 05:51:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 05:51:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:51:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:51:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:51:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:51:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:25 np0005481065 nova_compute[260935]: 2025-10-11 09:51:25.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:25 np0005481065 nova_compute[260935]: 2025-10-11 09:51:25.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:51:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2914793518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:51:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:51:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2914793518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:51:26 np0005481065 podman[442351]: 2025-10-11 09:51:26.810189875 +0000 UTC m=+0.101117007 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:51:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:30 np0005481065 nova_compute[260935]: 2025-10-11 09:51:30.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:30 np0005481065 nova_compute[260935]: 2025-10-11 09:51:30.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:31 np0005481065 podman[442372]: 2025-10-11 09:51:31.796655984 +0000 UTC m=+0.086883248 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:51:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:34 np0005481065 podman[442393]: 2025-10-11 09:51:34.808945992 +0000 UTC m=+0.102456935 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible)
Oct 11 05:51:34 np0005481065 podman[442394]: 2025-10-11 09:51:34.841351201 +0000 UTC m=+0.137635062 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.862247) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294862290, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 251, "total_data_size": 1809770, "memory_usage": 1835480, "flush_reason": "Manual Compaction"}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294874785, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1098064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68034, "largest_seqno": 69235, "table_properties": {"data_size": 1093478, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11886, "raw_average_key_size": 20, "raw_value_size": 1083559, "raw_average_value_size": 1907, "num_data_blocks": 92, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176180, "oldest_key_time": 1760176180, "file_creation_time": 1760176294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 12606 microseconds, and 5120 cpu microseconds.
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.874852) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1098064 bytes OK
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.874872) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.876242) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.876256) EVENT_LOG_v1 {"time_micros": 1760176294876251, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.876274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1804273, prev total WAL file size 1804273, number of live WAL files 2.
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.877158) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373534' seq:72057594037927935, type:22 .. '6D6772737461740033303035' seq:0, type:0; will stop at (end)
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1072KB)], [161(10MB)]
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294877240, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12244573, "oldest_snapshot_seqno": -1}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8630 keys, 9627335 bytes, temperature: kUnknown
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294958604, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9627335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9573871, "index_size": 30759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225898, "raw_average_key_size": 26, "raw_value_size": 9423984, "raw_average_value_size": 1092, "num_data_blocks": 1190, "num_entries": 8630, "num_filter_entries": 8630, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.959001) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9627335 bytes
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.960513) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.3 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(19.9) write-amplify(8.8) OK, records in: 9096, records dropped: 466 output_compression: NoCompression
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.960541) EVENT_LOG_v1 {"time_micros": 1760176294960529, "job": 100, "event": "compaction_finished", "compaction_time_micros": 81451, "compaction_time_cpu_micros": 47469, "output_level": 6, "num_output_files": 1, "total_output_size": 9627335, "num_input_records": 9096, "num_output_records": 8630, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294961093, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294964763, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.877018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:51:34 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:51:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:35 np0005481065 nova_compute[260935]: 2025-10-11 09:51:35.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:35 np0005481065 nova_compute[260935]: 2025-10-11 09:51:35.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:37 np0005481065 nova_compute[260935]: 2025-10-11 09:51:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:40 np0005481065 nova_compute[260935]: 2025-10-11 09:51:40.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:40 np0005481065 nova_compute[260935]: 2025-10-11 09:51:40.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:45 np0005481065 nova_compute[260935]: 2025-10-11 09:51:45.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:45 np0005481065 nova_compute[260935]: 2025-10-11 09:51:45.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:50 np0005481065 nova_compute[260935]: 2025-10-11 09:51:50.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:50 np0005481065 nova_compute[260935]: 2025-10-11 09:51:50.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:51:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:51:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:51:55 np0005481065 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1586 writes, 4965 keys, 1586 commit groups, 1.0 writes per commit group, ingest: 3.76 MB, 0.01 MB/s#012Interval WAL: 1586 writes, 675 syncs, 2.35 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:51:55
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'images']
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:51:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:55 np0005481065 nova_compute[260935]: 2025-10-11 09:51:55.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:55 np0005481065 nova_compute[260935]: 2025-10-11 09:51:55.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:51:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:57 np0005481065 podman[442439]: 2025-10-11 09:51:57.795659078 +0000 UTC m=+0.093484693 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 05:51:59 np0005481065 nova_compute[260935]: 2025-10-11 09:51:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:51:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:51:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:52:00 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 47K writes, 180K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.70 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1324 writes, 4495 keys, 1324 commit groups, 1.0 writes per commit group, ingest: 2.98 MB, 0.00 MB/s#012Interval WAL: 1324 writes, 573 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct 11 05:52:00 np0005481065 nova_compute[260935]: 2025-10-11 09:52:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:00 np0005481065 nova_compute[260935]: 2025-10-11 09:52:00.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:00 np0005481065 nova_compute[260935]: 2025-10-11 09:52:00.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:02 np0005481065 nova_compute[260935]: 2025-10-11 09:52:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:02 np0005481065 nova_compute[260935]: 2025-10-11 09:52:02.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:02 np0005481065 nova_compute[260935]: 2025-10-11 09:52:02.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:52:02 np0005481065 podman[442458]: 2025-10-11 09:52:02.782417339 +0000 UTC m=+0.075399195 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 05:52:03 np0005481065 nova_compute[260935]: 2025-10-11 09:52:03.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:03 np0005481065 nova_compute[260935]: 2025-10-11 09:52:03.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:52:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:04 np0005481065 nova_compute[260935]: 2025-10-11 09:52:04.504 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:52:04 np0005481065 nova_compute[260935]: 2025-10-11 09:52:04.505 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:52:04 np0005481065 nova_compute[260935]: 2025-10-11 09:52:04.505 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:52:04 np0005481065 nova_compute[260935]: 2025-10-11 09:52:04.719 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:52:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:05 np0005481065 nova_compute[260935]: 2025-10-11 09:52:05.072 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:52:05 np0005481065 nova_compute[260935]: 2025-10-11 09:52:05.097 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:52:05 np0005481065 nova_compute[260935]: 2025-10-11 09:52:05.097 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:52:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:52:05 np0005481065 nova_compute[260935]: 2025-10-11 09:52:05.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:05 np0005481065 podman[442480]: 2025-10-11 09:52:05.815556242 +0000 UTC m=+0.105438258 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:52:05 np0005481065 podman[442481]: 2025-10-11 09:52:05.840699257 +0000 UTC m=+0.129057910 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:52:05 np0005481065 nova_compute[260935]: 2025-10-11 09:52:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 05:52:06 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.3 total, 600.0 interval#012Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.73 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1359 writes, 4745 keys, 1359 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s#012Interval WAL: 1359 writes, 574 syncs, 2.37 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct 11 05:52:06 np0005481065 nova_compute[260935]: 2025-10-11 09:52:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:06 np0005481065 nova_compute[260935]: 2025-10-11 09:52:06.767 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:52:06 np0005481065 nova_compute[260935]: 2025-10-11 09:52:06.767 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:52:06 np0005481065 nova_compute[260935]: 2025-10-11 09:52:06.768 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:52:06 np0005481065 nova_compute[260935]: 2025-10-11 09:52:06.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:52:06 np0005481065 nova_compute[260935]: 2025-10-11 09:52:06.769 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:52:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:52:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493803474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.275 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:52:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.931 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.932 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.932 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.937 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.938 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.943 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:07 np0005481065 nova_compute[260935]: 2025-10-11 09:52:07.944 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:52:08 np0005481065 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.150 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.151 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2796MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.151 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.151 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.252 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.253 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.253 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.253 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.254 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.326 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:52:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:52:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449278572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.824 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.832 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.866 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.869 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:52:08 np0005481065 nova_compute[260935]: 2025-10-11 09:52:08.869 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:52:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:10 np0005481065 nova_compute[260935]: 2025-10-11 09:52:10.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:10 np0005481065 nova_compute[260935]: 2025-10-11 09:52:10.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:11 np0005481065 nova_compute[260935]: 2025-10-11 09:52:11.870 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:11 np0005481065 nova_compute[260935]: 2025-10-11 09:52:11.871 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:52:15.248 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:52:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:52:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:52:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:52:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:52:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:15 np0005481065 nova_compute[260935]: 2025-10-11 09:52:15.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:15 np0005481065 nova_compute[260935]: 2025-10-11 09:52:15.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:20 np0005481065 nova_compute[260935]: 2025-10-11 09:52:20.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:20 np0005481065 nova_compute[260935]: 2025-10-11 09:52:20.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:20 np0005481065 nova_compute[260935]: 2025-10-11 09:52:20.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:52:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1d2c3ceb-f25e-4cb1-8dcc-ea2a9cdabf73 does not exist
Oct 11 05:52:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 036b607d-623d-4273-a2bf-3df42f46c21e does not exist
Oct 11 05:52:22 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 778ebda6-096c-4bde-af35-60e6264389bf does not exist
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:52:22 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.15610402 +0000 UTC m=+0.073319177 container create ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 11 05:52:23 np0005481065 systemd[1]: Started libpod-conmon-ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8.scope.
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.123417153 +0000 UTC m=+0.040632350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:52:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.269787149 +0000 UTC m=+0.187002356 container init ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.281726084 +0000 UTC m=+0.198941241 container start ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.287764243 +0000 UTC m=+0.204979400 container attach ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:52:23 np0005481065 exciting_payne[442860]: 167 167
Oct 11 05:52:23 np0005481065 systemd[1]: libpod-ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8.scope: Deactivated successfully.
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.292515466 +0000 UTC m=+0.209730623 container died ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:52:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-74588cbe3ce958cec3621ce2299e05471194417a59ad23d4cdf9229ab3e9f2e7-merged.mount: Deactivated successfully.
Oct 11 05:52:23 np0005481065 podman[442843]: 2025-10-11 09:52:23.347348444 +0000 UTC m=+0.264563601 container remove ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:52:23 np0005481065 systemd[1]: libpod-conmon-ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8.scope: Deactivated successfully.
Oct 11 05:52:23 np0005481065 podman[442884]: 2025-10-11 09:52:23.623359956 +0000 UTC m=+0.081030854 container create 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 05:52:23 np0005481065 podman[442884]: 2025-10-11 09:52:23.593007694 +0000 UTC m=+0.050678652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:52:23 np0005481065 systemd[1]: Started libpod-conmon-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope.
Oct 11 05:52:23 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:52:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:23 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:23 np0005481065 podman[442884]: 2025-10-11 09:52:23.759390911 +0000 UTC m=+0.217061849 container init 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:52:23 np0005481065 podman[442884]: 2025-10-11 09:52:23.773067495 +0000 UTC m=+0.230738363 container start 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:52:23 np0005481065 podman[442884]: 2025-10-11 09:52:23.777977372 +0000 UTC m=+0.235648330 container attach 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:52:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:52:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:52:24 np0005481065 xenodochial_nightingale[442900]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:52:24 np0005481065 xenodochial_nightingale[442900]: --> relative data size: 1.0
Oct 11 05:52:24 np0005481065 xenodochial_nightingale[442900]: --> All data devices are unavailable
Oct 11 05:52:24 np0005481065 systemd[1]: libpod-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope: Deactivated successfully.
Oct 11 05:52:24 np0005481065 systemd[1]: libpod-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope: Consumed 1.133s CPU time.
Oct 11 05:52:24 np0005481065 podman[442884]: 2025-10-11 09:52:24.983811112 +0000 UTC m=+1.441482020 container died 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 05:52:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3-merged.mount: Deactivated successfully.
Oct 11 05:52:25 np0005481065 podman[442884]: 2025-10-11 09:52:25.051566823 +0000 UTC m=+1.509237681 container remove 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 05:52:25 np0005481065 systemd[1]: libpod-conmon-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope: Deactivated successfully.
Oct 11 05:52:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:25 np0005481065 nova_compute[260935]: 2025-10-11 09:52:25.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.821634921 +0000 UTC m=+0.062416031 container create 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 05:52:25 np0005481065 systemd[1]: Started libpod-conmon-4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974.scope.
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.791411923 +0000 UTC m=+0.032193073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:52:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:52:25 np0005481065 nova_compute[260935]: 2025-10-11 09:52:25.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.917954563 +0000 UTC m=+0.158735723 container init 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.930411592 +0000 UTC m=+0.171192702 container start 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.935027492 +0000 UTC m=+0.175808652 container attach 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:52:25 np0005481065 hopeful_wright[443100]: 167 167
Oct 11 05:52:25 np0005481065 systemd[1]: libpod-4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974.scope: Deactivated successfully.
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.939711663 +0000 UTC m=+0.180492823 container died 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:52:25 np0005481065 systemd[1]: var-lib-containers-storage-overlay-6437c1743ae006fb478b640167a9cd92f9d63443d18dc9965d6a243df5ef3815-merged.mount: Deactivated successfully.
Oct 11 05:52:25 np0005481065 podman[443084]: 2025-10-11 09:52:25.992519504 +0000 UTC m=+0.233300604 container remove 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:52:26 np0005481065 systemd[1]: libpod-conmon-4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974.scope: Deactivated successfully.
Oct 11 05:52:26 np0005481065 podman[443124]: 2025-10-11 09:52:26.231782345 +0000 UTC m=+0.057156184 container create 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 05:52:26 np0005481065 systemd[1]: Started libpod-conmon-5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a.scope.
Oct 11 05:52:26 np0005481065 podman[443124]: 2025-10-11 09:52:26.208295066 +0000 UTC m=+0.033668935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:52:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:52:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:26 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:26 np0005481065 podman[443124]: 2025-10-11 09:52:26.34106332 +0000 UTC m=+0.166437199 container init 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 05:52:26 np0005481065 podman[443124]: 2025-10-11 09:52:26.356185164 +0000 UTC m=+0.181559023 container start 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:52:26 np0005481065 podman[443124]: 2025-10-11 09:52:26.360559857 +0000 UTC m=+0.185933726 container attach 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:52:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:52:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888739876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:52:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:52:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888739876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:52:27 np0005481065 boring_benz[443142]: {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:    "0": [
Oct 11 05:52:27 np0005481065 boring_benz[443142]:        {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "devices": [
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "/dev/loop3"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            ],
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_name": "ceph_lv0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_size": "21470642176",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "name": "ceph_lv0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "tags": {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cluster_name": "ceph",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.crush_device_class": "",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.encrypted": "0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osd_id": "0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.type": "block",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.vdo": "0"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            },
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "type": "block",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "vg_name": "ceph_vg0"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:        }
Oct 11 05:52:27 np0005481065 boring_benz[443142]:    ],
Oct 11 05:52:27 np0005481065 boring_benz[443142]:    "1": [
Oct 11 05:52:27 np0005481065 boring_benz[443142]:        {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "devices": [
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "/dev/loop4"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            ],
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_name": "ceph_lv1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_size": "21470642176",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "name": "ceph_lv1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "tags": {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cluster_name": "ceph",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.crush_device_class": "",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.encrypted": "0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osd_id": "1",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.type": "block",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.vdo": "0"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            },
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "type": "block",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "vg_name": "ceph_vg1"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:        }
Oct 11 05:52:27 np0005481065 boring_benz[443142]:    ],
Oct 11 05:52:27 np0005481065 boring_benz[443142]:    "2": [
Oct 11 05:52:27 np0005481065 boring_benz[443142]:        {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "devices": [
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "/dev/loop5"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            ],
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_name": "ceph_lv2",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_size": "21470642176",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "name": "ceph_lv2",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "tags": {
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.cluster_name": "ceph",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.crush_device_class": "",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.encrypted": "0",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osd_id": "2",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.type": "block",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:                "ceph.vdo": "0"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            },
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "type": "block",
Oct 11 05:52:27 np0005481065 boring_benz[443142]:            "vg_name": "ceph_vg2"
Oct 11 05:52:27 np0005481065 boring_benz[443142]:        }
Oct 11 05:52:27 np0005481065 boring_benz[443142]:    ]
Oct 11 05:52:27 np0005481065 boring_benz[443142]: }
Oct 11 05:52:27 np0005481065 systemd[1]: libpod-5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a.scope: Deactivated successfully.
Oct 11 05:52:27 np0005481065 podman[443152]: 2025-10-11 09:52:27.252491763 +0000 UTC m=+0.047780151 container died 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:52:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3-merged.mount: Deactivated successfully.
Oct 11 05:52:27 np0005481065 podman[443152]: 2025-10-11 09:52:27.334620007 +0000 UTC m=+0.129908345 container remove 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:52:27 np0005481065 systemd[1]: libpod-conmon-5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a.scope: Deactivated successfully.
Oct 11 05:52:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.211529652 +0000 UTC m=+0.061033793 container create 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:52:28 np0005481065 systemd[1]: Started libpod-conmon-4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac.scope.
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.180898603 +0000 UTC m=+0.030402804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:52:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.314075747 +0000 UTC m=+0.163579858 container init 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.322690539 +0000 UTC m=+0.172194680 container start 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.326560358 +0000 UTC m=+0.176064499 container attach 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:52:28 np0005481065 strange_banzai[443327]: 167 167
Oct 11 05:52:28 np0005481065 systemd[1]: libpod-4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac.scope: Deactivated successfully.
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.32843083 +0000 UTC m=+0.177934961 container died 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:52:28 np0005481065 podman[443324]: 2025-10-11 09:52:28.352999449 +0000 UTC m=+0.091485376 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:52:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d63e15f5405a0e9164ba08e01eaab037e9845803369b2687b1be9462edfe13c0-merged.mount: Deactivated successfully.
Oct 11 05:52:28 np0005481065 podman[443309]: 2025-10-11 09:52:28.375249333 +0000 UTC m=+0.224753444 container remove 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 05:52:28 np0005481065 systemd[1]: libpod-conmon-4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac.scope: Deactivated successfully.
Oct 11 05:52:28 np0005481065 podman[443368]: 2025-10-11 09:52:28.618410143 +0000 UTC m=+0.073367898 container create ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:52:28 np0005481065 systemd[1]: Started libpod-conmon-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope.
Oct 11 05:52:28 np0005481065 podman[443368]: 2025-10-11 09:52:28.58868106 +0000 UTC m=+0.043638905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:52:28 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:52:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:28 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:52:28 np0005481065 podman[443368]: 2025-10-11 09:52:28.732194285 +0000 UTC m=+0.187152130 container init ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:52:28 np0005481065 podman[443368]: 2025-10-11 09:52:28.749336076 +0000 UTC m=+0.204293871 container start ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:52:28 np0005481065 podman[443368]: 2025-10-11 09:52:28.754609673 +0000 UTC m=+0.209567538 container attach ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:52:29 np0005481065 friendly_pike[443384]: {
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "osd_id": 2,
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "type": "bluestore"
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:    },
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "osd_id": 0,
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "type": "bluestore"
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:    },
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "osd_id": 1,
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:        "type": "bluestore"
Oct 11 05:52:29 np0005481065 friendly_pike[443384]:    }
Oct 11 05:52:29 np0005481065 friendly_pike[443384]: }
Oct 11 05:52:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:29 np0005481065 systemd[1]: libpod-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope: Deactivated successfully.
Oct 11 05:52:29 np0005481065 podman[443368]: 2025-10-11 09:52:29.762182004 +0000 UTC m=+1.217139789 container died ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:52:29 np0005481065 systemd[1]: libpod-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope: Consumed 1.021s CPU time.
Oct 11 05:52:29 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142-merged.mount: Deactivated successfully.
Oct 11 05:52:29 np0005481065 podman[443368]: 2025-10-11 09:52:29.842331252 +0000 UTC m=+1.297289047 container remove ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:52:29 np0005481065 systemd[1]: libpod-conmon-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope: Deactivated successfully.
Oct 11 05:52:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:52:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:52:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:52:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:52:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d0f18d7f-e7ef-46bc-afb0-ca36b4e450e6 does not exist
Oct 11 05:52:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9cf034ad-69fd-4b13-9dfa-ee7fb61a1209 does not exist
Oct 11 05:52:30 np0005481065 nova_compute[260935]: 2025-10-11 09:52:30.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:52:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:52:30 np0005481065 nova_compute[260935]: 2025-10-11 09:52:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:33 np0005481065 podman[443481]: 2025-10-11 09:52:33.805123878 +0000 UTC m=+0.091068745 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:52:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:35 np0005481065 nova_compute[260935]: 2025-10-11 09:52:35.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:35 np0005481065 nova_compute[260935]: 2025-10-11 09:52:35.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:36 np0005481065 podman[443505]: 2025-10-11 09:52:36.847008598 +0000 UTC m=+0.133488705 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:52:36 np0005481065 podman[443506]: 2025-10-11 09:52:36.860534987 +0000 UTC m=+0.140314516 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:52:37 np0005481065 nova_compute[260935]: 2025-10-11 09:52:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:52:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:40 np0005481065 nova_compute[260935]: 2025-10-11 09:52:40.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:40 np0005481065 nova_compute[260935]: 2025-10-11 09:52:40.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:45 np0005481065 nova_compute[260935]: 2025-10-11 09:52:45.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:45 np0005481065 nova_compute[260935]: 2025-10-11 09:52:45.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:50 np0005481065 nova_compute[260935]: 2025-10-11 09:52:50.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:50 np0005481065 nova_compute[260935]: 2025-10-11 09:52:50.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:52:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:52:55
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root']
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:52:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:55 np0005481065 nova_compute[260935]: 2025-10-11 09:52:55.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:55 np0005481065 nova_compute[260935]: 2025-10-11 09:52:55.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:52:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:58 np0005481065 podman[443553]: 2025-10-11 09:52:58.80391887 +0000 UTC m=+0.095154080 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:52:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:52:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:00 np0005481065 nova_compute[260935]: 2025-10-11 09:53:00.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:00 np0005481065 nova_compute[260935]: 2025-10-11 09:53:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:00 np0005481065 nova_compute[260935]: 2025-10-11 09:53:00.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:00 np0005481065 nova_compute[260935]: 2025-10-11 09:53:00.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:04 np0005481065 nova_compute[260935]: 2025-10-11 09:53:04.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:04 np0005481065 nova_compute[260935]: 2025-10-11 09:53:04.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:53:04 np0005481065 podman[443573]: 2025-10-11 09:53:04.803388866 +0000 UTC m=+0.095226629 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 11 05:53:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.030 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.031 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.031 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.272 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.543 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.567 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.568 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.569 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.569 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.569 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:53:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:05 np0005481065 nova_compute[260935]: 2025-10-11 09:53:05.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:06 np0005481065 nova_compute[260935]: 2025-10-11 09:53:06.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:06 np0005481065 nova_compute[260935]: 2025-10-11 09:53:06.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:53:06 np0005481065 nova_compute[260935]: 2025-10-11 09:53:06.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:53:06 np0005481065 nova_compute[260935]: 2025-10-11 09:53:06.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:53:06 np0005481065 nova_compute[260935]: 2025-10-11 09:53:06.737 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:53:06 np0005481065 nova_compute[260935]: 2025-10-11 09:53:06.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:53:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:53:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146095201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.191 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.274 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.274 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.275 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.280 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.281 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.285 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.286 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.472 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.473 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2808MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.474 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.474 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.580 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.580 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:53:07 np0005481065 nova_compute[260935]: 2025-10-11 09:53:07.693 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:53:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:07 np0005481065 podman[443617]: 2025-10-11 09:53:07.791867773 +0000 UTC m=+0.085772481 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 11 05:53:07 np0005481065 podman[443618]: 2025-10-11 09:53:07.825254369 +0000 UTC m=+0.120124004 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 05:53:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:53:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438806809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:53:08 np0005481065 nova_compute[260935]: 2025-10-11 09:53:08.206 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:53:08 np0005481065 nova_compute[260935]: 2025-10-11 09:53:08.215 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:53:08 np0005481065 nova_compute[260935]: 2025-10-11 09:53:08.241 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:53:08 np0005481065 nova_compute[260935]: 2025-10-11 09:53:08.245 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:53:08 np0005481065 nova_compute[260935]: 2025-10-11 09:53:08.246 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:53:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:10 np0005481065 nova_compute[260935]: 2025-10-11 09:53:10.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:10 np0005481065 nova_compute[260935]: 2025-10-11 09:53:10.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:11 np0005481065 nova_compute[260935]: 2025-10-11 09:53:11.246 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:12 np0005481065 nova_compute[260935]: 2025-10-11 09:53:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:53:15.250 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:53:15.251 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:53:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:53:15.251 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:53:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:15 np0005481065 nova_compute[260935]: 2025-10-11 09:53:15.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:15 np0005481065 nova_compute[260935]: 2025-10-11 09:53:15.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:20 np0005481065 nova_compute[260935]: 2025-10-11 09:53:20.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:20 np0005481065 nova_compute[260935]: 2025-10-11 09:53:20.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:53:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:53:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:25 np0005481065 nova_compute[260935]: 2025-10-11 09:53:25.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:25 np0005481065 nova_compute[260935]: 2025-10-11 09:53:25.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:53:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420280469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:53:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:53:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420280469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:53:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.189074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409189142, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1112, "num_deletes": 251, "total_data_size": 1660603, "memory_usage": 1690832, "flush_reason": "Manual Compaction"}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409203190, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1644955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69236, "largest_seqno": 70347, "table_properties": {"data_size": 1639527, "index_size": 2886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11425, "raw_average_key_size": 19, "raw_value_size": 1628705, "raw_average_value_size": 2812, "num_data_blocks": 130, "num_entries": 579, "num_filter_entries": 579, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176295, "oldest_key_time": 1760176295, "file_creation_time": 1760176409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 14184 microseconds, and 8778 cpu microseconds.
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.203253) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1644955 bytes OK
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.203292) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.204748) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.204774) EVENT_LOG_v1 {"time_micros": 1760176409204765, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.204800) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1655472, prev total WAL file size 1655472, number of live WAL files 2.
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.205952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1606KB)], [164(9401KB)]
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409206012, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11272290, "oldest_snapshot_seqno": -1}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8695 keys, 9527171 bytes, temperature: kUnknown
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409252388, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9527171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9473441, "index_size": 30897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227896, "raw_average_key_size": 26, "raw_value_size": 9322517, "raw_average_value_size": 1072, "num_data_blocks": 1191, "num_entries": 8695, "num_filter_entries": 8695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.252709) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9527171 bytes
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.254361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.6 rd, 205.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.6) write-amplify(5.8) OK, records in: 9209, records dropped: 514 output_compression: NoCompression
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.254390) EVENT_LOG_v1 {"time_micros": 1760176409254376, "job": 102, "event": "compaction_finished", "compaction_time_micros": 46471, "compaction_time_cpu_micros": 28440, "output_level": 6, "num_output_files": 1, "total_output_size": 9527171, "num_input_records": 9209, "num_output_records": 8695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409255059, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409258351, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.205803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:53:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:29 np0005481065 podman[443686]: 2025-10-11 09:53:29.800412387 +0000 UTC m=+0.082403866 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:53:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:30 np0005481065 nova_compute[260935]: 2025-10-11 09:53:30.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:30 np0005481065 nova_compute[260935]: 2025-10-11 09:53:30.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:53:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 8436e64b-8bba-4f6f-b6de-cb996a041f87 does not exist
Oct 11 05:53:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ad507034-ffdb-4e60-b112-14a727af4b12 does not exist
Oct 11 05:53:31 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 236745c2-3e63-4b6c-adb4-ce1cae1186d9 does not exist
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:53:31 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:53:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:53:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:53:32 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.256047008 +0000 UTC m=+0.104829832 container create c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.202412828 +0000 UTC m=+0.051195722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:53:32 np0005481065 systemd[1]: Started libpod-conmon-c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0.scope.
Oct 11 05:53:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.385264149 +0000 UTC m=+0.234047033 container init c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.404340089 +0000 UTC m=+0.253122923 container start c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:53:32 np0005481065 vigorous_boyd[443994]: 167 167
Oct 11 05:53:32 np0005481065 systemd[1]: libpod-c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0.scope: Deactivated successfully.
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.41637378 +0000 UTC m=+0.265156694 container attach c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.417156793 +0000 UTC m=+0.265939627 container died c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 05:53:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-a787b26cb92ca8ef8e91cba474b1e2170bcc35526cf0162f3dcf3c4d6aeb0bda-merged.mount: Deactivated successfully.
Oct 11 05:53:32 np0005481065 podman[443978]: 2025-10-11 09:53:32.491384376 +0000 UTC m=+0.340167180 container remove c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:53:32 np0005481065 systemd[1]: libpod-conmon-c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0.scope: Deactivated successfully.
Oct 11 05:53:32 np0005481065 systemd-logind[819]: New session 54 of user zuul.
Oct 11 05:53:32 np0005481065 systemd[1]: Started Session 54 of User zuul.
Oct 11 05:53:32 np0005481065 podman[444045]: 2025-10-11 09:53:32.753956045 +0000 UTC m=+0.063765207 container create a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:53:32 np0005481065 systemd[1]: Started libpod-conmon-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope.
Oct 11 05:53:32 np0005481065 podman[444045]: 2025-10-11 09:53:32.732402804 +0000 UTC m=+0.042211956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:53:32 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:53:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:32 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:32 np0005481065 podman[444045]: 2025-10-11 09:53:32.8737926 +0000 UTC m=+0.183601742 container init a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 05:53:32 np0005481065 podman[444045]: 2025-10-11 09:53:32.887162309 +0000 UTC m=+0.196971451 container start a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:53:32 np0005481065 podman[444045]: 2025-10-11 09:53:32.892202842 +0000 UTC m=+0.202011984 container attach a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:53:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 11 05:53:33 np0005481065 agitated_bhaskara[444085]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:53:33 np0005481065 agitated_bhaskara[444085]: --> relative data size: 1.0
Oct 11 05:53:33 np0005481065 agitated_bhaskara[444085]: --> All data devices are unavailable
Oct 11 05:53:34 np0005481065 systemd[1]: libpod-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope: Deactivated successfully.
Oct 11 05:53:34 np0005481065 podman[444045]: 2025-10-11 09:53:34.001330269 +0000 UTC m=+1.311139431 container died a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 05:53:34 np0005481065 systemd[1]: libpod-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope: Consumed 1.054s CPU time.
Oct 11 05:53:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206-merged.mount: Deactivated successfully.
Oct 11 05:53:34 np0005481065 podman[444045]: 2025-10-11 09:53:34.092752729 +0000 UTC m=+1.402561881 container remove a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:53:34 np0005481065 systemd[1]: libpod-conmon-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope: Deactivated successfully.
Oct 11 05:53:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:53:34.947 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:53:34 np0005481065 nova_compute[260935]: 2025-10-11 09:53:34.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:34 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:53:34.949 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:53:34 np0005481065 podman[444268]: 2025-10-11 09:53:34.962387371 +0000 UTC m=+0.060884677 container create b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 05:53:35 np0005481065 systemd[1]: Started libpod-conmon-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope.
Oct 11 05:53:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:53:35 np0005481065 podman[444268]: 2025-10-11 09:53:34.936455156 +0000 UTC m=+0.034952492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:53:35 np0005481065 podman[444268]: 2025-10-11 09:53:35.05413548 +0000 UTC m=+0.152632806 container init b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:53:35 np0005481065 podman[444268]: 2025-10-11 09:53:35.064296348 +0000 UTC m=+0.162793654 container start b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:53:35 np0005481065 podman[444268]: 2025-10-11 09:53:35.067615502 +0000 UTC m=+0.166112798 container attach b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 11 05:53:35 np0005481065 stupefied_keldysh[444285]: 167 167
Oct 11 05:53:35 np0005481065 systemd[1]: libpod-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope: Deactivated successfully.
Oct 11 05:53:35 np0005481065 conmon[444285]: conmon b50ffd1ae993c0581264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope/container/memory.events
Oct 11 05:53:35 np0005481065 podman[444268]: 2025-10-11 09:53:35.07458505 +0000 UTC m=+0.173082356 container died b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:53:35 np0005481065 podman[444282]: 2025-10-11 09:53:35.075509546 +0000 UTC m=+0.070507269 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:53:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-99224cadd617baa3331a404824366928daac2a44d4398c8963bdc2fc70f569d8-merged.mount: Deactivated successfully.
Oct 11 05:53:35 np0005481065 podman[444268]: 2025-10-11 09:53:35.126236663 +0000 UTC m=+0.224733999 container remove b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:53:35 np0005481065 systemd[1]: libpod-conmon-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope: Deactivated successfully.
Oct 11 05:53:35 np0005481065 podman[444329]: 2025-10-11 09:53:35.377726649 +0000 UTC m=+0.069263663 container create b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:53:35 np0005481065 systemd[1]: Started libpod-conmon-b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054.scope.
Oct 11 05:53:35 np0005481065 podman[444329]: 2025-10-11 09:53:35.357141696 +0000 UTC m=+0.048678710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:53:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:53:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:35 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:35 np0005481065 podman[444329]: 2025-10-11 09:53:35.509601106 +0000 UTC m=+0.201138130 container init b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:53:35 np0005481065 podman[444329]: 2025-10-11 09:53:35.521990407 +0000 UTC m=+0.213527411 container start b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:53:35 np0005481065 podman[444329]: 2025-10-11 09:53:35.525896198 +0000 UTC m=+0.217433232 container attach b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 05:53:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 11 05:53:35 np0005481065 nova_compute[260935]: 2025-10-11 09:53:35.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]: {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:    "0": [
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:        {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "devices": [
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "/dev/loop3"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            ],
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_name": "ceph_lv0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_size": "21470642176",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "name": "ceph_lv0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "tags": {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cluster_name": "ceph",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.crush_device_class": "",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.encrypted": "0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osd_id": "0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.type": "block",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.vdo": "0"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            },
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "type": "block",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "vg_name": "ceph_vg0"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:        }
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:    ],
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:    "1": [
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:        {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "devices": [
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "/dev/loop4"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            ],
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_name": "ceph_lv1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_size": "21470642176",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "name": "ceph_lv1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "tags": {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cluster_name": "ceph",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.crush_device_class": "",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.encrypted": "0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osd_id": "1",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.type": "block",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.vdo": "0"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            },
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "type": "block",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "vg_name": "ceph_vg1"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:        }
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:    ],
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:    "2": [
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:        {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "devices": [
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "/dev/loop5"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            ],
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_name": "ceph_lv2",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_size": "21470642176",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "name": "ceph_lv2",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "tags": {
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.cluster_name": "ceph",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.crush_device_class": "",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.encrypted": "0",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osd_id": "2",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.type": "block",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:                "ceph.vdo": "0"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            },
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "type": "block",
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:            "vg_name": "ceph_vg2"
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:        }
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]:    ]
Oct 11 05:53:36 np0005481065 adoring_jepsen[444368]: }
Oct 11 05:53:36 np0005481065 systemd[1]: libpod-b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054.scope: Deactivated successfully.
Oct 11 05:53:36 np0005481065 podman[444538]: 2025-10-11 09:53:36.337550885 +0000 UTC m=+0.033701506 container died b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:53:36 np0005481065 systemd[1]: var-lib-containers-storage-overlay-650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81-merged.mount: Deactivated successfully.
Oct 11 05:53:36 np0005481065 podman[444538]: 2025-10-11 09:53:36.41289518 +0000 UTC m=+0.109045751 container remove b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:53:36 np0005481065 systemd[1]: libpod-conmon-b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054.scope: Deactivated successfully.
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.356021643 +0000 UTC m=+0.078839185 container create d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:53:37 np0005481065 systemd[1]: Started libpod-conmon-d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96.scope.
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.323922903 +0000 UTC m=+0.046740485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:53:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.459573037 +0000 UTC m=+0.182390629 container init d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.471508405 +0000 UTC m=+0.194325957 container start d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.477136235 +0000 UTC m=+0.199953787 container attach d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:53:37 np0005481065 gracious_wright[444710]: 167 167
Oct 11 05:53:37 np0005481065 systemd[1]: libpod-d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96.scope: Deactivated successfully.
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.480212302 +0000 UTC m=+0.203029874 container died d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 05:53:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-eedddab0199d0ff8c1e8e2ad36f3253ee39358167f89e927f51d6bd0a3fd9e12-merged.mount: Deactivated successfully.
Oct 11 05:53:37 np0005481065 podman[444694]: 2025-10-11 09:53:37.523832488 +0000 UTC m=+0.246650000 container remove d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:53:37 np0005481065 systemd[1]: libpod-conmon-d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96.scope: Deactivated successfully.
Oct 11 05:53:37 np0005481065 nova_compute[260935]: 2025-10-11 09:53:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:37 np0005481065 podman[444733]: 2025-10-11 09:53:37.771757083 +0000 UTC m=+0.061476923 container create 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:53:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Oct 11 05:53:37 np0005481065 systemd[1]: Started libpod-conmon-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope.
Oct 11 05:53:37 np0005481065 podman[444733]: 2025-10-11 09:53:37.744082639 +0000 UTC m=+0.033802579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:53:37 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:53:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:37 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:53:37 np0005481065 podman[444733]: 2025-10-11 09:53:37.908310322 +0000 UTC m=+0.198030262 container init 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:53:37 np0005481065 podman[444733]: 2025-10-11 09:53:37.921128975 +0000 UTC m=+0.210848815 container start 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:53:37 np0005481065 podman[444733]: 2025-10-11 09:53:37.924663356 +0000 UTC m=+0.214383286 container attach 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:53:37 np0005481065 podman[444750]: 2025-10-11 09:53:37.975947219 +0000 UTC m=+0.110928465 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:53:38 np0005481065 podman[444752]: 2025-10-11 09:53:38.012453873 +0000 UTC m=+0.143096586 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 05:53:38 np0005481065 systemd-logind[819]: Session 54 logged out. Waiting for processes to exit.
Oct 11 05:53:38 np0005481065 systemd[1]: session-54.scope: Deactivated successfully.
Oct 11 05:53:38 np0005481065 systemd-logind[819]: Removed session 54.
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]: {
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "osd_id": 2,
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "type": "bluestore"
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:    },
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "osd_id": 0,
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "type": "bluestore"
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:    },
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "osd_id": 1,
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:        "type": "bluestore"
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]:    }
Oct 11 05:53:39 np0005481065 distracted_goldberg[444749]: }
Oct 11 05:53:39 np0005481065 systemd[1]: libpod-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope: Deactivated successfully.
Oct 11 05:53:39 np0005481065 systemd[1]: libpod-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope: Consumed 1.130s CPU time.
Oct 11 05:53:39 np0005481065 podman[444733]: 2025-10-11 09:53:39.051984608 +0000 UTC m=+1.341704468 container died 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:53:39 np0005481065 systemd[1]: var-lib-containers-storage-overlay-70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a-merged.mount: Deactivated successfully.
Oct 11 05:53:39 np0005481065 podman[444733]: 2025-10-11 09:53:39.140925068 +0000 UTC m=+1.430644908 container remove 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:53:39 np0005481065 systemd[1]: libpod-conmon-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope: Deactivated successfully.
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:53:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ec3620a2-a5c5-4098-be03-5fdd88240048 does not exist
Oct 11 05:53:39 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev e838f8f3-9993-428c-a794-9286809ea82b does not exist
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:53:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 05:53:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:40 np0005481065 nova_compute[260935]: 2025-10-11 09:53:40.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:53:40 np0005481065 nova_compute[260935]: 2025-10-11 09:53:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:53:40 np0005481065 nova_compute[260935]: 2025-10-11 09:53:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:53:40 np0005481065 nova_compute[260935]: 2025-10-11 09:53:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:53:41 np0005481065 nova_compute[260935]: 2025-10-11 09:53:40.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:41 np0005481065 nova_compute[260935]: 2025-10-11 09:53:40.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:53:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 05:53:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 05:53:43 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:53:43.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.705 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.706 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.707 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.708 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.708 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:53:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.911 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.929 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.930 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id 03f2fef0-11c0-48e1-b3a0-3e02d898739e yields fingerprint 9811042c7d73cc51997f7c966840f4b7728169a1 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.931 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): checking#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.931 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.935 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.935 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] c176845c-89c0-4038-ba22-4ee79bd3ebfe is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.936 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] b75d8ded-515b-48ff-a6b6-28df88878996 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.936 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] 52be16b4-343a-4fd4-9041-39069a1fde2a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.937 2 WARNING nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.937 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Active base files: /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.938 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Removable base files: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.939 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.939 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.940 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.940 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct 11 05:53:44 np0005481065 nova_compute[260935]: 2025-10-11 09:53:44.941 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct 11 05:53:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 11 05:53:46 np0005481065 nova_compute[260935]: 2025-10-11 09:53:46.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:53:46 np0005481065 nova_compute[260935]: 2025-10-11 09:53:46.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 11 05:53:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 11 05:53:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:51 np0005481065 nova_compute[260935]: 2025-10-11 09:53:51.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:53:51 np0005481065 nova_compute[260935]: 2025-10-11 09:53:51.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:53:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:53:55
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log']
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:53:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:56 np0005481065 nova_compute[260935]: 2025-10-11 09:53:56.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:53:56 np0005481065 nova_compute[260935]: 2025-10-11 09:53:56.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:56 np0005481065 nova_compute[260935]: 2025-10-11 09:53:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:53:56 np0005481065 nova_compute[260935]: 2025-10-11 09:53:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:53:56 np0005481065 nova_compute[260935]: 2025-10-11 09:53:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:53:56 np0005481065 nova_compute[260935]: 2025-10-11 09:53:56.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:53:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:53:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:00 np0005481065 podman[444916]: 2025-10-11 09:54:00.824144687 +0000 UTC m=+0.106639005 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:54:00 np0005481065 nova_compute[260935]: 2025-10-11 09:54:00.937 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:00 np0005481065 nova_compute[260935]: 2025-10-11 09:54:00.937 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:01 np0005481065 nova_compute[260935]: 2025-10-11 09:54:01.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:01 np0005481065 nova_compute[260935]: 2025-10-11 09:54:01.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:01 np0005481065 nova_compute[260935]: 2025-10-11 09:54:01.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5030 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:54:01 np0005481065 nova_compute[260935]: 2025-10-11 09:54:01.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:54:01 np0005481065 nova_compute[260935]: 2025-10-11 09:54:01.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:54:01 np0005481065 nova_compute[260935]: 2025-10-11 09:54:01.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:05 np0005481065 nova_compute[260935]: 2025-10-11 09:54:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:05 np0005481065 nova_compute[260935]: 2025-10-11 09:54:05.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:05 np0005481065 podman[444937]: 2025-10-11 09:54:05.801944592 +0000 UTC m=+0.095321465 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:54:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:54:06 np0005481065 nova_compute[260935]: 2025-10-11 09:54:06.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.048 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.049 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.049 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.050 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.263 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.594 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.617 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.618 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.618 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:54:07 np0005481065 nova_compute[260935]: 2025-10-11 09:54:07.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:54:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:54:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744241938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.233 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.313 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.313 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.315 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.320 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.320 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.324 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.324 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.577 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2769MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.579 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.580 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.678 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.680 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:54:08 np0005481065 nova_compute[260935]: 2025-10-11 09:54:08.790 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:54:08 np0005481065 podman[444980]: 2025-10-11 09:54:08.801262205 +0000 UTC m=+0.099343749 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:54:08 np0005481065 podman[444981]: 2025-10-11 09:54:08.854056697 +0000 UTC m=+0.144001281 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:54:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:54:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517822258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:54:09 np0005481065 nova_compute[260935]: 2025-10-11 09:54:09.283 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:54:09 np0005481065 nova_compute[260935]: 2025-10-11 09:54:09.292 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:54:09 np0005481065 nova_compute[260935]: 2025-10-11 09:54:09.314 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:54:09 np0005481065 nova_compute[260935]: 2025-10-11 09:54:09.317 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:54:09 np0005481065 nova_compute[260935]: 2025-10-11 09:54:09.318 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:54:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:11 np0005481065 nova_compute[260935]: 2025-10-11 09:54:11.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:12 np0005481065 nova_compute[260935]: 2025-10-11 09:54:12.320 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:14 np0005481065 nova_compute[260935]: 2025-10-11 09:54:14.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.897124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454897190, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 624, "num_deletes": 254, "total_data_size": 708776, "memory_usage": 721360, "flush_reason": "Manual Compaction"}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454904785, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 702447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70348, "largest_seqno": 70971, "table_properties": {"data_size": 699091, "index_size": 1263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7504, "raw_average_key_size": 18, "raw_value_size": 692391, "raw_average_value_size": 1726, "num_data_blocks": 56, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176410, "oldest_key_time": 1760176410, "file_creation_time": 1760176454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 7722 microseconds, and 3634 cpu microseconds.
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.904852) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 702447 bytes OK
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.904873) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.906589) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.906603) EVENT_LOG_v1 {"time_micros": 1760176454906599, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.906623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 705402, prev total WAL file size 705402, number of live WAL files 2.
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.907188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303133' seq:72057594037927935, type:22 .. '6C6F676D0033323634' seq:0, type:0; will stop at (end)
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(685KB)], [167(9303KB)]
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454907213, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10229618, "oldest_snapshot_seqno": -1}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8577 keys, 10122223 bytes, temperature: kUnknown
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454961053, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10122223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10067947, "index_size": 31699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 226358, "raw_average_key_size": 26, "raw_value_size": 9917842, "raw_average_value_size": 1156, "num_data_blocks": 1223, "num_entries": 8577, "num_filter_entries": 8577, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.961379) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10122223 bytes
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.962761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.7 rd, 187.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(29.0) write-amplify(14.4) OK, records in: 9096, records dropped: 519 output_compression: NoCompression
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.962793) EVENT_LOG_v1 {"time_micros": 1760176454962780, "job": 104, "event": "compaction_finished", "compaction_time_micros": 53939, "compaction_time_cpu_micros": 33907, "output_level": 6, "num_output_files": 1, "total_output_size": 10122223, "num_input_records": 9096, "num_output_records": 8577, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454963162, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454966469, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.907101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:54:14 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:54:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:54:15.251 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:54:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:54:15.252 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:54:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:54:15.252 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:54:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:16 np0005481065 nova_compute[260935]: 2025-10-11 09:54:16.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:16 np0005481065 nova_compute[260935]: 2025-10-11 09:54:16.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:20 np0005481065 nova_compute[260935]: 2025-10-11 09:54:20.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:21 np0005481065 nova_compute[260935]: 2025-10-11 09:54:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:21 np0005481065 nova_compute[260935]: 2025-10-11 09:54:21.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:22 np0005481065 nova_compute[260935]: 2025-10-11 09:54:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:22 np0005481065 nova_compute[260935]: 2025-10-11 09:54:22.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:54:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:54:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:54:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:26 np0005481065 nova_compute[260935]: 2025-10-11 09:54:26.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:54:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195713945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:54:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:54:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195713945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:54:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:31 np0005481065 nova_compute[260935]: 2025-10-11 09:54:31.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:31 np0005481065 nova_compute[260935]: 2025-10-11 09:54:31.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:31 np0005481065 nova_compute[260935]: 2025-10-11 09:54:31.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:54:31 np0005481065 nova_compute[260935]: 2025-10-11 09:54:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:54:31 np0005481065 nova_compute[260935]: 2025-10-11 09:54:31.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:31 np0005481065 nova_compute[260935]: 2025-10-11 09:54:31.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:54:31 np0005481065 podman[445047]: 2025-10-11 09:54:31.785751852 +0000 UTC m=+0.077877992 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 11 05:54:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:32 np0005481065 nova_compute[260935]: 2025-10-11 09:54:32.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:34 np0005481065 nova_compute[260935]: 2025-10-11 09:54:34.797 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:34 np0005481065 nova_compute[260935]: 2025-10-11 09:54:34.798 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:54:34 np0005481065 nova_compute[260935]: 2025-10-11 09:54:34.819 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:54:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:36 np0005481065 nova_compute[260935]: 2025-10-11 09:54:36.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:36 np0005481065 nova_compute[260935]: 2025-10-11 09:54:36.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:36 np0005481065 podman[445069]: 2025-10-11 09:54:36.809349253 +0000 UTC m=+0.098703650 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:54:37 np0005481065 nova_compute[260935]: 2025-10-11 09:54:37.724 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:54:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:39 np0005481065 podman[445113]: 2025-10-11 09:54:39.698157733 +0000 UTC m=+0.107209341 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:54:39 np0005481065 podman[445114]: 2025-10-11 09:54:39.765104865 +0000 UTC m=+0.166966500 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:54:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:54:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 058ee411-74a3-4dcc-b11e-2cb9c01c2dd6 does not exist
Oct 11 05:54:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 7d173903-09f0-4ce4-a6f9-1ad9753ead58 does not exist
Oct 11 05:54:40 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 06a9b743-4e5f-4a2b-af11-98c9e9cb248f does not exist
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:54:40 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:54:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:54:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:54:41 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:54:41 np0005481065 nova_compute[260935]: 2025-10-11 09:54:41.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.183873779 +0000 UTC m=+0.067954611 container create 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:54:41 np0005481065 systemd[1]: Started libpod-conmon-76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405.scope.
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.153378508 +0000 UTC m=+0.037459390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:54:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.289840244 +0000 UTC m=+0.173921096 container init 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.301630507 +0000 UTC m=+0.185711359 container start 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.305574749 +0000 UTC m=+0.189655591 container attach 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:54:41 np0005481065 stoic_colden[445423]: 167 167
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.312033371 +0000 UTC m=+0.196114233 container died 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 05:54:41 np0005481065 systemd[1]: libpod-76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405.scope: Deactivated successfully.
Oct 11 05:54:41 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4c000f282592af049031c01930cc337027efd62208cc9c5d6c7e41f4ed9d63b6-merged.mount: Deactivated successfully.
Oct 11 05:54:41 np0005481065 podman[445406]: 2025-10-11 09:54:41.366947753 +0000 UTC m=+0.251028605 container remove 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:54:41 np0005481065 systemd[1]: libpod-conmon-76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405.scope: Deactivated successfully.
Oct 11 05:54:41 np0005481065 podman[445447]: 2025-10-11 09:54:41.597730625 +0000 UTC m=+0.067359904 container create 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:54:41 np0005481065 systemd[1]: Started libpod-conmon-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope.
Oct 11 05:54:41 np0005481065 podman[445447]: 2025-10-11 09:54:41.569355694 +0000 UTC m=+0.038985033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:54:41 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:54:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:41 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:41 np0005481065 podman[445447]: 2025-10-11 09:54:41.703292359 +0000 UTC m=+0.172921698 container init 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:54:41 np0005481065 podman[445447]: 2025-10-11 09:54:41.713080995 +0000 UTC m=+0.182710274 container start 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:54:41 np0005481065 podman[445447]: 2025-10-11 09:54:41.717748357 +0000 UTC m=+0.187377636 container attach 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:54:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:42 np0005481065 hopeful_hopper[445464]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:54:42 np0005481065 hopeful_hopper[445464]: --> relative data size: 1.0
Oct 11 05:54:42 np0005481065 hopeful_hopper[445464]: --> All data devices are unavailable
Oct 11 05:54:42 np0005481065 systemd[1]: libpod-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope: Deactivated successfully.
Oct 11 05:54:42 np0005481065 systemd[1]: libpod-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope: Consumed 1.207s CPU time.
Oct 11 05:54:42 np0005481065 conmon[445464]: conmon 972f4a9cd140589d3038 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope/container/memory.events
Oct 11 05:54:42 np0005481065 podman[445447]: 2025-10-11 09:54:42.991186216 +0000 UTC m=+1.460815525 container died 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 05:54:43 np0005481065 systemd[1]: var-lib-containers-storage-overlay-456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675-merged.mount: Deactivated successfully.
Oct 11 05:54:43 np0005481065 podman[445447]: 2025-10-11 09:54:43.08194032 +0000 UTC m=+1.551569589 container remove 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:54:43 np0005481065 systemd[1]: libpod-conmon-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope: Deactivated successfully.
Oct 11 05:54:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:43 np0005481065 podman[445649]: 2025-10-11 09:54:43.884419999 +0000 UTC m=+0.076991137 container create cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:54:43 np0005481065 podman[445649]: 2025-10-11 09:54:43.842641908 +0000 UTC m=+0.035213116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:54:43 np0005481065 systemd[1]: Started libpod-conmon-cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78.scope.
Oct 11 05:54:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:54:44 np0005481065 podman[445649]: 2025-10-11 09:54:44.084260077 +0000 UTC m=+0.276831285 container init cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 05:54:44 np0005481065 podman[445649]: 2025-10-11 09:54:44.096026249 +0000 UTC m=+0.288597417 container start cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:54:44 np0005481065 busy_herschel[445666]: 167 167
Oct 11 05:54:44 np0005481065 systemd[1]: libpod-cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78.scope: Deactivated successfully.
Oct 11 05:54:44 np0005481065 podman[445649]: 2025-10-11 09:54:44.17532434 +0000 UTC m=+0.367895488 container attach cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:54:44 np0005481065 podman[445649]: 2025-10-11 09:54:44.175854865 +0000 UTC m=+0.368426033 container died cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 05:54:44 np0005481065 systemd[1]: var-lib-containers-storage-overlay-81cf336ae8756f514782a61cc356a8295e24fdc2d2045673e7cbed21e196ecd5-merged.mount: Deactivated successfully.
Oct 11 05:54:44 np0005481065 podman[445649]: 2025-10-11 09:54:44.535034015 +0000 UTC m=+0.727605143 container remove cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 05:54:44 np0005481065 systemd[1]: libpod-conmon-cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78.scope: Deactivated successfully.
Oct 11 05:54:44 np0005481065 podman[445689]: 2025-10-11 09:54:44.80979595 +0000 UTC m=+0.088135182 container create b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:54:44 np0005481065 podman[445689]: 2025-10-11 09:54:44.75461352 +0000 UTC m=+0.032952802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:54:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:44 np0005481065 systemd[1]: Started libpod-conmon-b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1.scope.
Oct 11 05:54:44 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:54:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:44 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:45 np0005481065 podman[445689]: 2025-10-11 09:54:45.060709451 +0000 UTC m=+0.339048703 container init b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:54:45 np0005481065 podman[445689]: 2025-10-11 09:54:45.073290216 +0000 UTC m=+0.351629438 container start b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:54:45 np0005481065 podman[445689]: 2025-10-11 09:54:45.148034739 +0000 UTC m=+0.426374031 container attach b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 05:54:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]: {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:    "0": [
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:        {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "devices": [
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "/dev/loop3"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            ],
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_name": "ceph_lv0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_size": "21470642176",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "name": "ceph_lv0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "tags": {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cluster_name": "ceph",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.crush_device_class": "",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.encrypted": "0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osd_id": "0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.type": "block",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.vdo": "0"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            },
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "type": "block",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "vg_name": "ceph_vg0"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:        }
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:    ],
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:    "1": [
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:        {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "devices": [
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "/dev/loop4"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            ],
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_name": "ceph_lv1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_size": "21470642176",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "name": "ceph_lv1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "tags": {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cluster_name": "ceph",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.crush_device_class": "",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.encrypted": "0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osd_id": "1",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.type": "block",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.vdo": "0"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            },
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "type": "block",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "vg_name": "ceph_vg1"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:        }
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:    ],
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:    "2": [
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:        {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "devices": [
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "/dev/loop5"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            ],
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_name": "ceph_lv2",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_size": "21470642176",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "name": "ceph_lv2",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "tags": {
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.cluster_name": "ceph",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.crush_device_class": "",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.encrypted": "0",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osd_id": "2",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.type": "block",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:                "ceph.vdo": "0"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            },
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "type": "block",
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:            "vg_name": "ceph_vg2"
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:        }
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]:    ]
Oct 11 05:54:45 np0005481065 happy_visvesvaraya[445705]: }
Oct 11 05:54:45 np0005481065 systemd[1]: libpod-b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1.scope: Deactivated successfully.
Oct 11 05:54:45 np0005481065 podman[445714]: 2025-10-11 09:54:45.949000515 +0000 UTC m=+0.029821314 container died b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:54:46 np0005481065 systemd[1]: var-lib-containers-storage-overlay-ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486-merged.mount: Deactivated successfully.
Oct 11 05:54:46 np0005481065 nova_compute[260935]: 2025-10-11 09:54:46.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:54:46 np0005481065 nova_compute[260935]: 2025-10-11 09:54:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:54:46 np0005481065 nova_compute[260935]: 2025-10-11 09:54:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5006 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 05:54:46 np0005481065 nova_compute[260935]: 2025-10-11 09:54:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:54:46 np0005481065 podman[445714]: 2025-10-11 09:54:46.163727833 +0000 UTC m=+0.244548622 container remove b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:54:46 np0005481065 nova_compute[260935]: 2025-10-11 09:54:46.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:54:46 np0005481065 nova_compute[260935]: 2025-10-11 09:54:46.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 05:54:46 np0005481065 systemd[1]: libpod-conmon-b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1.scope: Deactivated successfully.
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.005860722 +0000 UTC m=+0.038351705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.223510623 +0000 UTC m=+0.256001576 container create 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:54:47 np0005481065 systemd[1]: Started libpod-conmon-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope.
Oct 11 05:54:47 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.464499924 +0000 UTC m=+0.496990867 container init 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.475625788 +0000 UTC m=+0.508116721 container start 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:54:47 np0005481065 upbeat_kapitsa[445885]: 167 167
Oct 11 05:54:47 np0005481065 systemd[1]: libpod-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope: Deactivated successfully.
Oct 11 05:54:47 np0005481065 conmon[445885]: conmon 4198cef30b96600c7f7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope/container/memory.events
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.616376056 +0000 UTC m=+0.648866979 container attach 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.618026673 +0000 UTC m=+0.650517596 container died 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 05:54:47 np0005481065 systemd[1]: var-lib-containers-storage-overlay-866efabd6da637e34bf1b02a691972aabb90deb1332d378372c7fac9a20c3fb0-merged.mount: Deactivated successfully.
Oct 11 05:54:47 np0005481065 podman[445869]: 2025-10-11 09:54:47.791220087 +0000 UTC m=+0.823711010 container remove 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:54:47 np0005481065 systemd[1]: libpod-conmon-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope: Deactivated successfully.
Oct 11 05:54:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:47 np0005481065 podman[445911]: 2025-10-11 09:54:47.958742001 +0000 UTC m=+0.044615322 container create e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 05:54:48 np0005481065 systemd[1]: Started libpod-conmon-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope.
Oct 11 05:54:48 np0005481065 podman[445911]: 2025-10-11 09:54:47.937357036 +0000 UTC m=+0.023230357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:54:48 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:54:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:48 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:54:48 np0005481065 podman[445911]: 2025-10-11 09:54:48.10555297 +0000 UTC m=+0.191426311 container init e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 05:54:48 np0005481065 podman[445911]: 2025-10-11 09:54:48.117871868 +0000 UTC m=+0.203745199 container start e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:54:48 np0005481065 podman[445911]: 2025-10-11 09:54:48.135162306 +0000 UTC m=+0.221035657 container attach e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]: {
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "osd_id": 2,
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "type": "bluestore"
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:    },
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "osd_id": 0,
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "type": "bluestore"
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:    },
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "osd_id": 1,
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:        "type": "bluestore"
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]:    }
Oct 11 05:54:49 np0005481065 quizzical_jennings[445928]: }
Oct 11 05:54:49 np0005481065 systemd[1]: libpod-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope: Deactivated successfully.
Oct 11 05:54:49 np0005481065 podman[445911]: 2025-10-11 09:54:49.18938818 +0000 UTC m=+1.275261521 container died e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 05:54:49 np0005481065 systemd[1]: libpod-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope: Consumed 1.070s CPU time.
Oct 11 05:54:49 np0005481065 systemd[1]: var-lib-containers-storage-overlay-4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15-merged.mount: Deactivated successfully.
Oct 11 05:54:49 np0005481065 podman[445911]: 2025-10-11 09:54:49.569289366 +0000 UTC m=+1.655162707 container remove e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:54:49 np0005481065 systemd[1]: libpod-conmon-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope: Deactivated successfully.
Oct 11 05:54:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:54:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:54:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:54:49 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:54:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5e316f52-1a7a-4a23-868d-fb462409b3a2 does not exist
Oct 11 05:54:49 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 62ab72a1-854a-47ce-9e8b-bd124d113089 does not exist
Oct 11 05:54:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:54:50 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:54:51 np0005481065 nova_compute[260935]: 2025-10-11 09:54:51.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:51 np0005481065 nova_compute[260935]: 2025-10-11 09:54:51.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:54:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:52 np0005481065 systemd-logind[819]: New session 55 of user zuul.
Oct 11 05:54:52 np0005481065 systemd[1]: Started Session 55 of User zuul.
Oct 11 05:54:52 np0005481065 nova_compute[260935]: 2025-10-11 09:54:52.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:54:52.160 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 05:54:52 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:54:52.163 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 05:54:53 np0005481065 systemd[1]: Reloading.
Oct 11 05:54:53 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 05:54:53 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 05:54:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:53 np0005481065 systemd[1]: Reloading.
Oct 11 05:54:54 np0005481065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 05:54:54 np0005481065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 05:54:54 np0005481065 systemd[1]: Starting Podman API Socket...
Oct 11 05:54:54 np0005481065 systemd[1]: Listening on Podman API Socket.
Oct 11 05:54:54 np0005481065 dbus-broker-launch[809]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 11 05:54:54 np0005481065 systemd[1]: podman.socket: Deactivated successfully.
Oct 11 05:54:54 np0005481065 systemd[1]: Closed Podman API Socket.
Oct 11 05:54:54 np0005481065 systemd[1]: Stopping Podman API Socket...
Oct 11 05:54:54 np0005481065 systemd[1]: Starting Podman API Socket...
Oct 11 05:54:54 np0005481065 systemd[1]: Listening on Podman API Socket.
Oct 11 05:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:54:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:54:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:54:55 np0005481065 systemd-logind[819]: New session 56 of user zuul.
Oct 11 05:54:55 np0005481065 systemd[1]: Started Session 56 of User zuul.
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:54:55
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms', 'backups']
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:54:55 np0005481065 systemd[1]: Starting Podman API Service...
Oct 11 05:54:55 np0005481065 systemd[1]: Started Podman API Service.
Oct 11 05:54:55 np0005481065 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 11 05:54:55 np0005481065 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Setting parallel job count to 25"
Oct 11 05:54:55 np0005481065 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Using sqlite as database backend"
Oct 11 05:54:55 np0005481065 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 11 05:54:55 np0005481065 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 11 05:54:55 np0005481065 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 11 05:54:55 np0005481065 podman[446257]: @ - - [11/Oct/2025:09:54:55 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 11 05:54:55 np0005481065 podman[446257]: @ - - [11/Oct/2025:09:54:55 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 27463 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:54:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:56 np0005481065 nova_compute[260935]: 2025-10-11 09:54:56.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:54:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:58 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:54:58.165 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 05:54:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:54:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:01 np0005481065 nova_compute[260935]: 2025-10-11 09:55:01.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:01 np0005481065 nova_compute[260935]: 2025-10-11 09:55:01.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:01 np0005481065 nova_compute[260935]: 2025-10-11 09:55:01.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:01 np0005481065 nova_compute[260935]: 2025-10-11 09:55:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:02 np0005481065 podman[446294]: 2025-10-11 09:55:02.763214643 +0000 UTC m=+0.073242921 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 05:55:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:55:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5046 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:06 np0005481065 nova_compute[260935]: 2025-10-11 09:55:06.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.107 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.107 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.108 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.305 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:55:07 np0005481065 podman[446315]: 2025-10-11 09:55:07.797232328 +0000 UTC m=+0.095284814 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 05:55:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.972 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.993 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:55:07 np0005481065 nova_compute[260935]: 2025-10-11 09:55:07.995 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.020 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:55:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:55:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643149614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.482 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.589 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.589 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.590 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.596 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.596 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.602 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.602 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.861 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.863 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2782MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.865 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.993 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.994 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.994 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.995 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:55:08 np0005481065 nova_compute[260935]: 2025-10-11 09:55:08.995 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.017 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.036 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.036 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.061 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.091 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.175 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:55:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:55:09 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618290930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.659 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.668 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.703 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.706 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:55:09 np0005481065 nova_compute[260935]: 2025-10-11 09:55:09.706 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:55:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:10 np0005481065 podman[446257]: time="2025-10-11T09:55:10Z" level=info msg="Received shutdown.Stop(), terminating!" PID=446257
Oct 11 05:55:10 np0005481065 systemd[1]: podman.service: Deactivated successfully.
Oct 11 05:55:10 np0005481065 podman[446384]: 2025-10-11 09:55:10.358000606 +0000 UTC m=+0.117204113 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 05:55:10 np0005481065 podman[446385]: 2025-10-11 09:55:10.393544101 +0000 UTC m=+0.155915548 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 11 05:55:11 np0005481065 nova_compute[260935]: 2025-10-11 09:55:11.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:13 np0005481065 nova_compute[260935]: 2025-10-11 09:55:13.416 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:55:15.252 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:55:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:55:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:55:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:55:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:55:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:16 np0005481065 nova_compute[260935]: 2025-10-11 09:55:16.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:21 np0005481065 nova_compute[260935]: 2025-10-11 09:55:21.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:21 np0005481065 nova_compute[260935]: 2025-10-11 09:55:21.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:21 np0005481065 nova_compute[260935]: 2025-10-11 09:55:21.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:55:21 np0005481065 nova_compute[260935]: 2025-10-11 09:55:21.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:21 np0005481065 nova_compute[260935]: 2025-10-11 09:55:21.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:21 np0005481065 nova_compute[260935]: 2025-10-11 09:55:21.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:22 np0005481065 ovn_controller[152945]: 2025-10-11T09:55:22Z|01737|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 11 05:55:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:24 np0005481065 systemd[1]: session-55.scope: Deactivated successfully.
Oct 11 05:55:24 np0005481065 systemd[1]: session-55.scope: Consumed 1.604s CPU time.
Oct 11 05:55:24 np0005481065 systemd-logind[819]: Session 55 logged out. Waiting for processes to exit.
Oct 11 05:55:24 np0005481065 systemd-logind[819]: Removed session 55.
Oct 11 05:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:55:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:55:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:26 np0005481065 systemd-logind[819]: Session 56 logged out. Waiting for processes to exit.
Oct 11 05:55:26 np0005481065 systemd[1]: session-56.scope: Deactivated successfully.
Oct 11 05:55:26 np0005481065 systemd-logind[819]: Removed session 56.
Oct 11 05:55:26 np0005481065 nova_compute[260935]: 2025-10-11 09:55:26.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:26 np0005481065 nova_compute[260935]: 2025-10-11 09:55:26.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:26 np0005481065 nova_compute[260935]: 2025-10-11 09:55:26.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:55:26 np0005481065 nova_compute[260935]: 2025-10-11 09:55:26.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:26 np0005481065 nova_compute[260935]: 2025-10-11 09:55:26.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:26 np0005481065 nova_compute[260935]: 2025-10-11 09:55:26.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:55:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2675494968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:55:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:55:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2675494968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.720 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.720 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.721 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.721 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.723 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.759 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.760 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:55:27 np0005481065 nova_compute[260935]: 2025-10-11 09:55:27.763 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:55:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:31 np0005481065 nova_compute[260935]: 2025-10-11 09:55:31.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:31 np0005481065 nova_compute[260935]: 2025-10-11 09:55:31.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:33 np0005481065 podman[446480]: 2025-10-11 09:55:33.762194995 +0000 UTC m=+0.068253910 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 11 05:55:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:36 np0005481065 nova_compute[260935]: 2025-10-11 09:55:36.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:38 np0005481065 nova_compute[260935]: 2025-10-11 09:55:38.764 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:55:38 np0005481065 podman[446499]: 2025-10-11 09:55:38.782351017 +0000 UTC m=+0.079484728 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 05:55:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:40 np0005481065 podman[446520]: 2025-10-11 09:55:40.813741365 +0000 UTC m=+0.099033529 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:55:40 np0005481065 podman[446521]: 2025-10-11 09:55:40.892641905 +0000 UTC m=+0.173600397 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:55:41 np0005481065 nova_compute[260935]: 2025-10-11 09:55:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:41 np0005481065 nova_compute[260935]: 2025-10-11 09:55:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:41 np0005481065 nova_compute[260935]: 2025-10-11 09:55:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:55:41 np0005481065 nova_compute[260935]: 2025-10-11 09:55:41.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:41 np0005481065 nova_compute[260935]: 2025-10-11 09:55:41.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:41 np0005481065 nova_compute[260935]: 2025-10-11 09:55:41.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:46 np0005481065 nova_compute[260935]: 2025-10-11 09:55:46.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:55:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:55:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:51 np0005481065 nova_compute[260935]: 2025-10-11 09:55:51.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:51 np0005481065 nova_compute[260935]: 2025-10-11 09:55:51.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:55:51 np0005481065 nova_compute[260935]: 2025-10-11 09:55:51.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:55:51 np0005481065 nova_compute[260935]: 2025-10-11 09:55:51.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:51 np0005481065 nova_compute[260935]: 2025-10-11 09:55:51.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:55:51 np0005481065 nova_compute[260935]: 2025-10-11 09:55:51.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 1842cf83-cee6-4524-a080-bd56bc1e7981 does not exist
Oct 11 05:55:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 963d5731-8610-4049-972c-31a2151881a8 does not exist
Oct 11 05:55:51 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 9a42180e-6ee5-4f70-9311-4f4bc411279b does not exist
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:55:51 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:55:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:55:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:52 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.62508064 +0000 UTC m=+0.072012736 container create 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:55:52 np0005481065 systemd[1]: Started libpod-conmon-9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f.scope.
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.592790998 +0000 UTC m=+0.039723164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:55:52 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.777328503 +0000 UTC m=+0.224260649 container init 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.786571544 +0000 UTC m=+0.233503610 container start 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.790388372 +0000 UTC m=+0.237320538 container attach 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:55:52 np0005481065 exciting_borg[446972]: 167 167
Oct 11 05:55:52 np0005481065 systemd[1]: libpod-9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f.scope: Deactivated successfully.
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.798963454 +0000 UTC m=+0.245895560 container died 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 05:55:52 np0005481065 systemd[1]: var-lib-containers-storage-overlay-db4c22380c2c1cad4f7fc5564b33c7bcb35ba9b219dcd6f449f08af01e5ee8da-merged.mount: Deactivated successfully.
Oct 11 05:55:52 np0005481065 podman[446955]: 2025-10-11 09:55:52.852002073 +0000 UTC m=+0.298934139 container remove 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:52 np0005481065 systemd[1]: libpod-conmon-9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f.scope: Deactivated successfully.
Oct 11 05:55:53 np0005481065 podman[446996]: 2025-10-11 09:55:53.02775846 +0000 UTC m=+0.045019803 container create a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:55:53 np0005481065 systemd[1]: Started libpod-conmon-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope.
Oct 11 05:55:53 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:55:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:53 np0005481065 podman[446996]: 2025-10-11 09:55:53.006137929 +0000 UTC m=+0.023399362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:55:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:53 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:53 np0005481065 podman[446996]: 2025-10-11 09:55:53.124536925 +0000 UTC m=+0.141798268 container init a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:55:53 np0005481065 podman[446996]: 2025-10-11 09:55:53.133721145 +0000 UTC m=+0.150982488 container start a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:53 np0005481065 podman[446996]: 2025-10-11 09:55:53.137244214 +0000 UTC m=+0.154505647 container attach a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 05:55:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:54 np0005481065 hungry_chebyshev[447013]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:55:54 np0005481065 hungry_chebyshev[447013]: --> relative data size: 1.0
Oct 11 05:55:54 np0005481065 hungry_chebyshev[447013]: --> All data devices are unavailable
Oct 11 05:55:54 np0005481065 systemd[1]: libpod-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope: Deactivated successfully.
Oct 11 05:55:54 np0005481065 podman[446996]: 2025-10-11 09:55:54.253039998 +0000 UTC m=+1.270301361 container died a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 05:55:54 np0005481065 systemd[1]: libpod-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope: Consumed 1.062s CPU time.
Oct 11 05:55:54 np0005481065 systemd[1]: var-lib-containers-storage-overlay-97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e-merged.mount: Deactivated successfully.
Oct 11 05:55:54 np0005481065 podman[446996]: 2025-10-11 09:55:54.329523389 +0000 UTC m=+1.346784742 container remove a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:54 np0005481065 systemd[1]: libpod-conmon-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope: Deactivated successfully.
Oct 11 05:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:55:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:55:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:55:55
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes']
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.201958335 +0000 UTC m=+0.075219167 container create ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:55 np0005481065 systemd[1]: Started libpod-conmon-ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65.scope.
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.173549202 +0000 UTC m=+0.046810064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:55:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.303071922 +0000 UTC m=+0.176332734 container init ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.315581586 +0000 UTC m=+0.188842388 container start ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.319225769 +0000 UTC m=+0.192486591 container attach ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:55:55 np0005481065 practical_merkle[447214]: 167 167
Oct 11 05:55:55 np0005481065 systemd[1]: libpod-ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65.scope: Deactivated successfully.
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.325533177 +0000 UTC m=+0.198794009 container died ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:55 np0005481065 systemd[1]: var-lib-containers-storage-overlay-efbee594591c9c92e902307ae96032c3089a43d748ac403a1ee524a02c49d955-merged.mount: Deactivated successfully.
Oct 11 05:55:55 np0005481065 podman[447197]: 2025-10-11 09:55:55.372781982 +0000 UTC m=+0.246042794 container remove ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 05:55:55 np0005481065 systemd[1]: libpod-conmon-ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65.scope: Deactivated successfully.
Oct 11 05:55:55 np0005481065 podman[447237]: 2025-10-11 09:55:55.614585826 +0000 UTC m=+0.056676363 container create 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 05:55:55 np0005481065 systemd[1]: Started libpod-conmon-1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047.scope.
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 05:55:55 np0005481065 podman[447237]: 2025-10-11 09:55:55.59491303 +0000 UTC m=+0.037003607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:55:55 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:55:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:55 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:55 np0005481065 podman[447237]: 2025-10-11 09:55:55.726988143 +0000 UTC m=+0.169078730 container init 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:55:55 np0005481065 podman[447237]: 2025-10-11 09:55:55.74425056 +0000 UTC m=+0.186341137 container start 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:55:55 np0005481065 podman[447237]: 2025-10-11 09:55:55.748718107 +0000 UTC m=+0.190808744 container attach 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 05:55:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:56 np0005481065 nova_compute[260935]: 2025-10-11 09:55:56.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:55:56 np0005481065 nova_compute[260935]: 2025-10-11 09:55:56.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 05:55:56 np0005481065 gifted_moore[447254]: {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:    "0": [
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:        {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "devices": [
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "/dev/loop3"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            ],
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_name": "ceph_lv0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_size": "21470642176",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "name": "ceph_lv0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "tags": {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cluster_name": "ceph",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.crush_device_class": "",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.encrypted": "0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osd_id": "0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.type": "block",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.vdo": "0"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            },
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "type": "block",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "vg_name": "ceph_vg0"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:        }
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:    ],
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:    "1": [
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:        {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "devices": [
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "/dev/loop4"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            ],
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_name": "ceph_lv1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_size": "21470642176",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "name": "ceph_lv1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "tags": {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cluster_name": "ceph",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.crush_device_class": "",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.encrypted": "0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osd_id": "1",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.type": "block",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.vdo": "0"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            },
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "type": "block",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "vg_name": "ceph_vg1"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:        }
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:    ],
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:    "2": [
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:        {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "devices": [
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "/dev/loop5"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            ],
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_name": "ceph_lv2",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_size": "21470642176",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "name": "ceph_lv2",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "tags": {
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.cluster_name": "ceph",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.crush_device_class": "",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.encrypted": "0",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osd_id": "2",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.type": "block",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:                "ceph.vdo": "0"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            },
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "type": "block",
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:            "vg_name": "ceph_vg2"
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:        }
Oct 11 05:55:56 np0005481065 gifted_moore[447254]:    ]
Oct 11 05:55:56 np0005481065 gifted_moore[447254]: }
Oct 11 05:55:56 np0005481065 systemd[1]: libpod-1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047.scope: Deactivated successfully.
Oct 11 05:55:56 np0005481065 podman[447263]: 2025-10-11 09:55:56.633452699 +0000 UTC m=+0.034199927 container died 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:55:56 np0005481065 systemd[1]: var-lib-containers-storage-overlay-652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c-merged.mount: Deactivated successfully.
Oct 11 05:55:56 np0005481065 podman[447263]: 2025-10-11 09:55:56.716429194 +0000 UTC m=+0.117176442 container remove 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:55:56 np0005481065 systemd[1]: libpod-conmon-1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047.scope: Deactivated successfully.
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.60092002 +0000 UTC m=+0.051294090 container create 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 05:55:57 np0005481065 systemd[1]: Started libpod-conmon-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope.
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.581933314 +0000 UTC m=+0.032307344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:55:57 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.712046851 +0000 UTC m=+0.162420931 container init 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.72086928 +0000 UTC m=+0.171243350 container start 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.724918285 +0000 UTC m=+0.175292415 container attach 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:55:57 np0005481065 gifted_sutherland[447433]: 167 167
Oct 11 05:55:57 np0005481065 systemd[1]: libpod-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope: Deactivated successfully.
Oct 11 05:55:57 np0005481065 conmon[447433]: conmon 4e6264ec3cd5e2ba8821 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope/container/memory.events
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.730468442 +0000 UTC m=+0.180842542 container died 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 05:55:57 np0005481065 systemd[1]: var-lib-containers-storage-overlay-86d05ae28cef3c1bef652d6fbc61f65dcf3db892e4bee7aeefc59346da1aaf4c-merged.mount: Deactivated successfully.
Oct 11 05:55:57 np0005481065 podman[447417]: 2025-10-11 09:55:57.781885485 +0000 UTC m=+0.232259525 container remove 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 05:55:57 np0005481065 systemd[1]: libpod-conmon-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope: Deactivated successfully.
Oct 11 05:55:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:57 np0005481065 podman[447457]: 2025-10-11 09:55:57.983618226 +0000 UTC m=+0.045908509 container create bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 11 05:55:58 np0005481065 systemd[1]: Started libpod-conmon-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope.
Oct 11 05:55:58 np0005481065 podman[447457]: 2025-10-11 09:55:57.963193159 +0000 UTC m=+0.025483522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:55:58 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:55:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:58 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:55:58 np0005481065 podman[447457]: 2025-10-11 09:55:58.082410428 +0000 UTC m=+0.144700791 container init bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:55:58 np0005481065 podman[447457]: 2025-10-11 09:55:58.094004035 +0000 UTC m=+0.156294308 container start bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 05:55:58 np0005481065 podman[447457]: 2025-10-11 09:55:58.098623956 +0000 UTC m=+0.160914239 container attach bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]: {
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "osd_id": 2,
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "type": "bluestore"
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:    },
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "osd_id": 0,
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "type": "bluestore"
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:    },
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "osd_id": 1,
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:        "type": "bluestore"
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]:    }
Oct 11 05:55:59 np0005481065 pensive_swanson[447473]: }
Oct 11 05:55:59 np0005481065 systemd[1]: libpod-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope: Deactivated successfully.
Oct 11 05:55:59 np0005481065 systemd[1]: libpod-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope: Consumed 1.097s CPU time.
Oct 11 05:55:59 np0005481065 podman[447457]: 2025-10-11 09:55:59.190884344 +0000 UTC m=+1.253174667 container died bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 05:55:59 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98-merged.mount: Deactivated successfully.
Oct 11 05:55:59 np0005481065 podman[447457]: 2025-10-11 09:55:59.271663987 +0000 UTC m=+1.333954300 container remove bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 05:55:59 np0005481065 systemd[1]: libpod-conmon-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope: Deactivated successfully.
Oct 11 05:55:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:55:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:55:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:55:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b491fff0-8335-4ab7-8b5b-be3c4b6bd17c does not exist
Oct 11 05:55:59 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 00bf86e8-61a9-4b5a-bfd1-34afbae5d73d does not exist
Oct 11 05:55:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:55:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:56:00 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:56:01 np0005481065 nova_compute[260935]: 2025-10-11 09:56:01.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:02 np0005481065 nova_compute[260935]: 2025-10-11 09:56:02.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:03 np0005481065 nova_compute[260935]: 2025-10-11 09:56:03.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:04 np0005481065 podman[447572]: 2025-10-11 09:56:04.822303521 +0000 UTC m=+0.102613541 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 05:56:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:56:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:56:06 np0005481065 nova_compute[260935]: 2025-10-11 09:56:06.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:06 np0005481065 nova_compute[260935]: 2025-10-11 09:56:06.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:07 np0005481065 nova_compute[260935]: 2025-10-11 09:56:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:08 np0005481065 nova_compute[260935]: 2025-10-11 09:56:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:08 np0005481065 nova_compute[260935]: 2025-10-11 09:56:08.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:56:09 np0005481065 nova_compute[260935]: 2025-10-11 09:56:09.500 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:56:09 np0005481065 nova_compute[260935]: 2025-10-11 09:56:09.501 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:56:09 np0005481065 nova_compute[260935]: 2025-10-11 09:56:09.501 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:56:09 np0005481065 nova_compute[260935]: 2025-10-11 09:56:09.770 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:56:09 np0005481065 podman[447592]: 2025-10-11 09:56:09.799840219 +0000 UTC m=+0.092742882 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 05:56:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.362 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.386 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.386 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.386 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.387 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.387 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.425 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.426 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.427 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.427 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.428 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:56:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:56:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809046280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:56:10 np0005481065 nova_compute[260935]: 2025-10-11 09:56:10.920 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.106 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.111 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.111 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.115 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.115 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.307 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.309 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2773MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.309 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.309 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.520 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.520 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.521 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.521 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.522 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:56:11 np0005481065 nova_compute[260935]: 2025-10-11 09:56:11.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:56:11 np0005481065 podman[447635]: 2025-10-11 09:56:11.809103182 +0000 UTC m=+0.100705447 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 05:56:11 np0005481065 podman[447634]: 2025-10-11 09:56:11.822771059 +0000 UTC m=+0.110272468 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:56:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:56:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596980447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:56:12 np0005481065 nova_compute[260935]: 2025-10-11 09:56:12.238 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:56:12 np0005481065 nova_compute[260935]: 2025-10-11 09:56:12.248 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:56:12 np0005481065 nova_compute[260935]: 2025-10-11 09:56:12.353 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:56:12 np0005481065 nova_compute[260935]: 2025-10-11 09:56:12.357 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:56:12 np0005481065 nova_compute[260935]: 2025-10-11 09:56:12.357 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:56:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:14 np0005481065 nova_compute[260935]: 2025-10-11 09:56:14.674 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:56:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:56:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:56:15.254 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:56:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:56:15.254 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:56:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:16 np0005481065 nova_compute[260935]: 2025-10-11 09:56:16.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:18 np0005481065 nova_compute[260935]: 2025-10-11 09:56:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:21 np0005481065 nova_compute[260935]: 2025-10-11 09:56:21.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:24 np0005481065 nova_compute[260935]: 2025-10-11 09:56:24.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:56:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:56:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:56:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:56:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:56:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:56:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:26 np0005481065 nova_compute[260935]: 2025-10-11 09:56:26.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:56:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/495599010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:56:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:56:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/495599010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:56:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:31 np0005481065 nova_compute[260935]: 2025-10-11 09:56:31.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:31 np0005481065 nova_compute[260935]: 2025-10-11 09:56:31.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:31 np0005481065 nova_compute[260935]: 2025-10-11 09:56:31.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:56:31 np0005481065 nova_compute[260935]: 2025-10-11 09:56:31.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:31 np0005481065 nova_compute[260935]: 2025-10-11 09:56:31.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:31 np0005481065 nova_compute[260935]: 2025-10-11 09:56:31.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:35 np0005481065 podman[447702]: 2025-10-11 09:56:35.799098383 +0000 UTC m=+0.085182478 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 05:56:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:36 np0005481065 nova_compute[260935]: 2025-10-11 09:56:36.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:36 np0005481065 nova_compute[260935]: 2025-10-11 09:56:36.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:36 np0005481065 nova_compute[260935]: 2025-10-11 09:56:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5036 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:56:36 np0005481065 nova_compute[260935]: 2025-10-11 09:56:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:36 np0005481065 nova_compute[260935]: 2025-10-11 09:56:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:36 np0005481065 nova_compute[260935]: 2025-10-11 09:56:36.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:38 np0005481065 nova_compute[260935]: 2025-10-11 09:56:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:56:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:40 np0005481065 podman[447724]: 2025-10-11 09:56:40.477656632 +0000 UTC m=+0.076391359 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 05:56:41 np0005481065 nova_compute[260935]: 2025-10-11 09:56:41.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:41 np0005481065 nova_compute[260935]: 2025-10-11 09:56:41.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:42 np0005481065 podman[447745]: 2025-10-11 09:56:42.795353901 +0000 UTC m=+0.090894500 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 05:56:42 np0005481065 podman[447746]: 2025-10-11 09:56:42.832033667 +0000 UTC m=+0.123471130 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 05:56:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:46 np0005481065 nova_compute[260935]: 2025-10-11 09:56:46.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:51 np0005481065 nova_compute[260935]: 2025-10-11 09:56:51.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:51 np0005481065 nova_compute[260935]: 2025-10-11 09:56:51.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:51 np0005481065 nova_compute[260935]: 2025-10-11 09:56:51.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:56:51 np0005481065 nova_compute[260935]: 2025-10-11 09:56:51.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:51 np0005481065 nova_compute[260935]: 2025-10-11 09:56:51.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:51 np0005481065 nova_compute[260935]: 2025-10-11 09:56:51.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:56:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:56:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:56:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:56:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:56:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:56:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:56:55
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'vms']
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:56:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:56 np0005481065 nova_compute[260935]: 2025-10-11 09:56:56.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:56 np0005481065 nova_compute[260935]: 2025-10-11 09:56:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:56:56 np0005481065 nova_compute[260935]: 2025-10-11 09:56:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:56:56 np0005481065 nova_compute[260935]: 2025-10-11 09:56:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:56 np0005481065 nova_compute[260935]: 2025-10-11 09:56:56.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:56:56 np0005481065 nova_compute[260935]: 2025-10-11 09:56:56.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:56:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:56:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:57:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 52a27e58-ec03-4fe2-ba7a-6e8a6a627987 does not exist
Oct 11 05:57:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 098387c3-8f4a-4ed3-8224-092a198fcd35 does not exist
Oct 11 05:57:00 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 390df583-e6af-4048-8a55-63168625a4d9 does not exist
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:57:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:57:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 05:57:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:57:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:57:01 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.381153408 +0000 UTC m=+0.077435019 container create aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 05:57:01 np0005481065 systemd[1]: Started libpod-conmon-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope.
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.3486864 +0000 UTC m=+0.044968061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:57:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.501325334 +0000 UTC m=+0.197606975 container init aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.515280499 +0000 UTC m=+0.211562110 container start aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.520944899 +0000 UTC m=+0.217226510 container attach aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:57:01 np0005481065 vigilant_shockley[448083]: 167 167
Oct 11 05:57:01 np0005481065 systemd[1]: libpod-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope: Deactivated successfully.
Oct 11 05:57:01 np0005481065 conmon[448083]: conmon aa4861981e8112b1cc41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope/container/memory.events
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.526341131 +0000 UTC m=+0.222622742 container died aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:57:01 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fd91e17c17cada7245855b2243a1fc12620ebaf5f5296a3d46bb739383c467f1-merged.mount: Deactivated successfully.
Oct 11 05:57:01 np0005481065 podman[448067]: 2025-10-11 09:57:01.584064792 +0000 UTC m=+0.280346403 container remove aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 05:57:01 np0005481065 systemd[1]: libpod-conmon-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope: Deactivated successfully.
Oct 11 05:57:01 np0005481065 nova_compute[260935]: 2025-10-11 09:57:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:01 np0005481065 nova_compute[260935]: 2025-10-11 09:57:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:01 np0005481065 nova_compute[260935]: 2025-10-11 09:57:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:57:01 np0005481065 nova_compute[260935]: 2025-10-11 09:57:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:01 np0005481065 nova_compute[260935]: 2025-10-11 09:57:01.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:01 np0005481065 nova_compute[260935]: 2025-10-11 09:57:01.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:01 np0005481065 podman[448107]: 2025-10-11 09:57:01.868847671 +0000 UTC m=+0.074684302 container create 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:57:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:01 np0005481065 systemd[1]: Started libpod-conmon-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope.
Oct 11 05:57:01 np0005481065 podman[448107]: 2025-10-11 09:57:01.840653884 +0000 UTC m=+0.046490565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:57:01 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:57:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:01 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:02 np0005481065 podman[448107]: 2025-10-11 09:57:02.009295 +0000 UTC m=+0.215131661 container init 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:57:02 np0005481065 podman[448107]: 2025-10-11 09:57:02.019546009 +0000 UTC m=+0.225382640 container start 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 05:57:02 np0005481065 podman[448107]: 2025-10-11 09:57:02.023942284 +0000 UTC m=+0.229778925 container attach 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 05:57:03 np0005481065 cranky_hellman[448123]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:57:03 np0005481065 cranky_hellman[448123]: --> relative data size: 1.0
Oct 11 05:57:03 np0005481065 cranky_hellman[448123]: --> All data devices are unavailable
Oct 11 05:57:03 np0005481065 systemd[1]: libpod-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope: Deactivated successfully.
Oct 11 05:57:03 np0005481065 systemd[1]: libpod-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope: Consumed 1.115s CPU time.
Oct 11 05:57:03 np0005481065 podman[448107]: 2025-10-11 09:57:03.224212464 +0000 UTC m=+1.430049145 container died 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:57:03 np0005481065 systemd[1]: var-lib-containers-storage-overlay-72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f-merged.mount: Deactivated successfully.
Oct 11 05:57:03 np0005481065 podman[448107]: 2025-10-11 09:57:03.286066022 +0000 UTC m=+1.491902653 container remove 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:57:03 np0005481065 systemd[1]: libpod-conmon-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope: Deactivated successfully.
Oct 11 05:57:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.142221997 +0000 UTC m=+0.058861204 container create 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:57:04 np0005481065 systemd[1]: Started libpod-conmon-7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6.scope.
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.117713484 +0000 UTC m=+0.034352691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:57:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.25769376 +0000 UTC m=+0.174333017 container init 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.270246105 +0000 UTC m=+0.186885272 container start 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.273681882 +0000 UTC m=+0.190321139 container attach 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:57:04 np0005481065 xenodochial_mclaren[448323]: 167 167
Oct 11 05:57:04 np0005481065 systemd[1]: libpod-7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6.scope: Deactivated successfully.
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.28210501 +0000 UTC m=+0.198744217 container died 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 11 05:57:04 np0005481065 systemd[1]: var-lib-containers-storage-overlay-fc48afd4ed0b7fc998dc1b3ef864b297d93b3d660e29042ec440f47a5cf10259-merged.mount: Deactivated successfully.
Oct 11 05:57:04 np0005481065 podman[448306]: 2025-10-11 09:57:04.33516015 +0000 UTC m=+0.251799327 container remove 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 05:57:04 np0005481065 systemd[1]: libpod-conmon-7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6.scope: Deactivated successfully.
Oct 11 05:57:04 np0005481065 podman[448347]: 2025-10-11 09:57:04.573593548 +0000 UTC m=+0.064870324 container create 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 05:57:04 np0005481065 podman[448347]: 2025-10-11 09:57:04.545055721 +0000 UTC m=+0.036332567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:57:04 np0005481065 systemd[1]: Started libpod-conmon-76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4.scope.
Oct 11 05:57:04 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:57:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:04 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:04 np0005481065 podman[448347]: 2025-10-11 09:57:04.703903791 +0000 UTC m=+0.195180597 container init 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 05:57:04 np0005481065 nova_compute[260935]: 2025-10-11 09:57:04.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:04 np0005481065 podman[448347]: 2025-10-11 09:57:04.720842229 +0000 UTC m=+0.212119025 container start 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:57:04 np0005481065 podman[448347]: 2025-10-11 09:57:04.724353968 +0000 UTC m=+0.215630815 container attach 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 05:57:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]: {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:    "0": [
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:        {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "devices": [
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "/dev/loop3"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            ],
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_name": "ceph_lv0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_size": "21470642176",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "name": "ceph_lv0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "tags": {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cluster_name": "ceph",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.crush_device_class": "",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.encrypted": "0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osd_id": "0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.type": "block",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.vdo": "0"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            },
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "type": "block",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "vg_name": "ceph_vg0"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:        }
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:    ],
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:    "1": [
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:        {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "devices": [
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "/dev/loop4"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            ],
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_name": "ceph_lv1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_size": "21470642176",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "name": "ceph_lv1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "tags": {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cluster_name": "ceph",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.crush_device_class": "",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.encrypted": "0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osd_id": "1",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.type": "block",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.vdo": "0"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            },
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "type": "block",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "vg_name": "ceph_vg1"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:        }
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:    ],
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:    "2": [
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:        {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "devices": [
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "/dev/loop5"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            ],
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_name": "ceph_lv2",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_size": "21470642176",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "name": "ceph_lv2",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "tags": {
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.cluster_name": "ceph",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.crush_device_class": "",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.encrypted": "0",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osd_id": "2",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.type": "block",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:                "ceph.vdo": "0"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            },
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "type": "block",
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:            "vg_name": "ceph_vg2"
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:        }
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]:    ]
Oct 11 05:57:05 np0005481065 nostalgic_pascal[448364]: }
Oct 11 05:57:05 np0005481065 systemd[1]: libpod-76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4.scope: Deactivated successfully.
Oct 11 05:57:05 np0005481065 podman[448347]: 2025-10-11 09:57:05.4802181 +0000 UTC m=+0.971494886 container died 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:57:05 np0005481065 systemd[1]: var-lib-containers-storage-overlay-49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9-merged.mount: Deactivated successfully.
Oct 11 05:57:05 np0005481065 podman[448347]: 2025-10-11 09:57:05.571737886 +0000 UTC m=+1.063014652 container remove 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 05:57:05 np0005481065 systemd[1]: libpod-conmon-76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4.scope: Deactivated successfully.
Oct 11 05:57:05 np0005481065 nova_compute[260935]: 2025-10-11 09:57:05.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:57:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:05 np0005481065 podman[448460]: 2025-10-11 09:57:05.994773282 +0000 UTC m=+0.082354299 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.469001174 +0000 UTC m=+0.070364870 container create be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 05:57:06 np0005481065 systemd[1]: Started libpod-conmon-be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723.scope.
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.439086318 +0000 UTC m=+0.040450084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:57:06 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.578599481 +0000 UTC m=+0.179963237 container init be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.591681031 +0000 UTC m=+0.193044707 container start be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.595973492 +0000 UTC m=+0.197337248 container attach be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:57:06 np0005481065 laughing_kowalevski[448564]: 167 167
Oct 11 05:57:06 np0005481065 systemd[1]: libpod-be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723.scope: Deactivated successfully.
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.602583639 +0000 UTC m=+0.203947335 container died be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:57:06 np0005481065 systemd[1]: var-lib-containers-storage-overlay-07e2fbfcfff693d1f1bc62d58298a2a78b6d5a5aaeda7127d07727154e9ac21e-merged.mount: Deactivated successfully.
Oct 11 05:57:06 np0005481065 podman[448547]: 2025-10-11 09:57:06.660845125 +0000 UTC m=+0.262208831 container remove be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:57:06 np0005481065 systemd[1]: libpod-conmon-be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723.scope: Deactivated successfully.
Oct 11 05:57:06 np0005481065 nova_compute[260935]: 2025-10-11 09:57:06.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:06 np0005481065 nova_compute[260935]: 2025-10-11 09:57:06.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:06 np0005481065 podman[448587]: 2025-10-11 09:57:06.926946745 +0000 UTC m=+0.074880477 container create 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 05:57:06 np0005481065 podman[448587]: 2025-10-11 09:57:06.897428811 +0000 UTC m=+0.045362593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:57:06 np0005481065 systemd[1]: Started libpod-conmon-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope.
Oct 11 05:57:07 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:57:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:07 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:57:07 np0005481065 podman[448587]: 2025-10-11 09:57:07.071454309 +0000 UTC m=+0.219388101 container init 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 05:57:07 np0005481065 podman[448587]: 2025-10-11 09:57:07.082466001 +0000 UTC m=+0.230399733 container start 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 05:57:07 np0005481065 podman[448587]: 2025-10-11 09:57:07.086559406 +0000 UTC m=+0.234493198 container attach 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 11 05:57:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]: {
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "osd_id": 2,
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "type": "bluestore"
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:    },
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "osd_id": 0,
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "type": "bluestore"
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:    },
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "osd_id": 1,
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:        "type": "bluestore"
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]:    }
Oct 11 05:57:08 np0005481065 blissful_wozniak[448604]: }
Oct 11 05:57:08 np0005481065 systemd[1]: libpod-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope: Deactivated successfully.
Oct 11 05:57:08 np0005481065 podman[448587]: 2025-10-11 09:57:08.218518865 +0000 UTC m=+1.366452597 container died 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:57:08 np0005481065 systemd[1]: libpod-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope: Consumed 1.141s CPU time.
Oct 11 05:57:08 np0005481065 systemd[1]: var-lib-containers-storage-overlay-afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45-merged.mount: Deactivated successfully.
Oct 11 05:57:08 np0005481065 podman[448587]: 2025-10-11 09:57:08.298490556 +0000 UTC m=+1.446424268 container remove 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:57:08 np0005481065 systemd[1]: libpod-conmon-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope: Deactivated successfully.
Oct 11 05:57:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:57:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:57:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:57:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:57:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev d3960a72-9b0f-4e7b-bb06-3e6036b536f1 does not exist
Oct 11 05:57:08 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 852a18a7-f8d8-489a-9ee1-b67d45c0f4ad does not exist
Oct 11 05:57:08 np0005481065 nova_compute[260935]: 2025-10-11 09:57:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:08 np0005481065 nova_compute[260935]: 2025-10-11 09:57:08.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:57:08 np0005481065 nova_compute[260935]: 2025-10-11 09:57:08.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 05:57:09 np0005481065 nova_compute[260935]: 2025-10-11 09:57:09.239 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:57:09 np0005481065 nova_compute[260935]: 2025-10-11 09:57:09.239 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:57:09 np0005481065 nova_compute[260935]: 2025-10-11 09:57:09.239 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:57:09 np0005481065 nova_compute[260935]: 2025-10-11 09:57:09.240 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 05:57:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:57:09 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:57:09 np0005481065 nova_compute[260935]: 2025-10-11 09:57:09.413 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:57:09 np0005481065 nova_compute[260935]: 2025-10-11 09:57:09.638 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:57:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.181 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.181 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.182 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.183 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.183 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.737 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:57:10 np0005481065 nova_compute[260935]: 2025-10-11 09:57:10.738 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:57:10 np0005481065 podman[448700]: 2025-10-11 09:57:10.817376501 +0000 UTC m=+0.103152996 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 11 05:57:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:57:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521329931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.283 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.376 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.382 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.382 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.388 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.388 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.592 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.593 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2753MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.594 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.594 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.677 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.678 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:11 np0005481065 nova_compute[260935]: 2025-10-11 09:57:11.807 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:57:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:57:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046483490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:57:12 np0005481065 nova_compute[260935]: 2025-10-11 09:57:12.293 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:57:12 np0005481065 nova_compute[260935]: 2025-10-11 09:57:12.303 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:57:12 np0005481065 nova_compute[260935]: 2025-10-11 09:57:12.328 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:57:12 np0005481065 nova_compute[260935]: 2025-10-11 09:57:12.331 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:57:12 np0005481065 nova_compute[260935]: 2025-10-11 09:57:12.332 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:57:13 np0005481065 podman[448765]: 2025-10-11 09:57:13.798511649 +0000 UTC m=+0.097088305 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 11 05:57:13 np0005481065 podman[448766]: 2025-10-11 09:57:13.857568598 +0000 UTC m=+0.147657434 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:57:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:57:15.255 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:57:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:57:15.255 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:57:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:57:15.256 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:57:15 np0005481065 nova_compute[260935]: 2025-10-11 09:57:15.333 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:16 np0005481065 nova_compute[260935]: 2025-10-11 09:57:16.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:16 np0005481065 nova_compute[260935]: 2025-10-11 09:57:16.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.405885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637405951, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 251, "total_data_size": 2767455, "memory_usage": 2819056, "flush_reason": "Manual Compaction"}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637429518, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2707750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70972, "largest_seqno": 72651, "table_properties": {"data_size": 2699934, "index_size": 4758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15833, "raw_average_key_size": 19, "raw_value_size": 2684369, "raw_average_value_size": 3389, "num_data_blocks": 213, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176455, "oldest_key_time": 1760176455, "file_creation_time": 1760176637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 23702 microseconds, and 14226 cpu microseconds.
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.429586) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2707750 bytes OK
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.429615) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.431304) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.431332) EVENT_LOG_v1 {"time_micros": 1760176637431322, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.431360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2760220, prev total WAL file size 2760220, number of live WAL files 2.
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.432741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2644KB)], [170(9884KB)]
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637432803, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12829973, "oldest_snapshot_seqno": -1}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8855 keys, 11052848 bytes, temperature: kUnknown
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637504530, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11052848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10995923, "index_size": 33708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 232764, "raw_average_key_size": 26, "raw_value_size": 10840056, "raw_average_value_size": 1224, "num_data_blocks": 1303, "num_entries": 8855, "num_filter_entries": 8855, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.505019) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11052848 bytes
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.506583) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.5 rd, 153.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(8.8) write-amplify(4.1) OK, records in: 9369, records dropped: 514 output_compression: NoCompression
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.506613) EVENT_LOG_v1 {"time_micros": 1760176637506600, "job": 106, "event": "compaction_finished", "compaction_time_micros": 71877, "compaction_time_cpu_micros": 50474, "output_level": 6, "num_output_files": 1, "total_output_size": 11052848, "num_input_records": 9369, "num_output_records": 8855, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637508110, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637512207, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.432640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:57:17 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:57:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:18 np0005481065 nova_compute[260935]: 2025-10-11 09:57:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:21 np0005481065 nova_compute[260935]: 2025-10-11 09:57:21.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:21 np0005481065 nova_compute[260935]: 2025-10-11 09:57:21.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:21 np0005481065 nova_compute[260935]: 2025-10-11 09:57:21.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:57:21 np0005481065 nova_compute[260935]: 2025-10-11 09:57:21.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:21 np0005481065 nova_compute[260935]: 2025-10-11 09:57:21.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:21 np0005481065 nova_compute[260935]: 2025-10-11 09:57:21.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:57:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:57:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:57:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:57:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:57:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:57:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:57:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3280523202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:57:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:57:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3280523202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:57:26 np0005481065 nova_compute[260935]: 2025-10-11 09:57:26.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:31 np0005481065 nova_compute[260935]: 2025-10-11 09:57:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:36 np0005481065 podman[448814]: 2025-10-11 09:57:36.822322747 +0000 UTC m=+0.103901988 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 05:57:36 np0005481065 nova_compute[260935]: 2025-10-11 09:57:36.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:36 np0005481065 nova_compute[260935]: 2025-10-11 09:57:36.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:40 np0005481065 nova_compute[260935]: 2025-10-11 09:57:40.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:57:41 np0005481065 podman[448833]: 2025-10-11 09:57:41.795490181 +0000 UTC m=+0.094417369 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:57:41 np0005481065 nova_compute[260935]: 2025-10-11 09:57:41.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:41 np0005481065 nova_compute[260935]: 2025-10-11 09:57:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:41 np0005481065 nova_compute[260935]: 2025-10-11 09:57:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:57:41 np0005481065 nova_compute[260935]: 2025-10-11 09:57:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:41 np0005481065 nova_compute[260935]: 2025-10-11 09:57:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:41 np0005481065 nova_compute[260935]: 2025-10-11 09:57:41.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:44 np0005481065 podman[448851]: 2025-10-11 09:57:44.803082407 +0000 UTC m=+0.099832472 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 11 05:57:44 np0005481065 podman[448852]: 2025-10-11 09:57:44.844060455 +0000 UTC m=+0.134831522 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 05:57:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:46 np0005481065 nova_compute[260935]: 2025-10-11 09:57:46.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:51 np0005481065 nova_compute[260935]: 2025-10-11 09:57:51.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:51 np0005481065 nova_compute[260935]: 2025-10-11 09:57:51.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:51 np0005481065 nova_compute[260935]: 2025-10-11 09:57:51.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:57:51 np0005481065 nova_compute[260935]: 2025-10-11 09:57:51.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:51 np0005481065 nova_compute[260935]: 2025-10-11 09:57:51.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:51 np0005481065 nova_compute[260935]: 2025-10-11 09:57:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:57:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:57:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:57:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:57:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:57:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:57:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:57:55
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:57:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:56 np0005481065 nova_compute[260935]: 2025-10-11 09:57:56.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:56 np0005481065 nova_compute[260935]: 2025-10-11 09:57:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:57:56 np0005481065 nova_compute[260935]: 2025-10-11 09:57:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:57:56 np0005481065 nova_compute[260935]: 2025-10-11 09:57:56.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:56 np0005481065 nova_compute[260935]: 2025-10-11 09:57:56.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:57:56 np0005481065 nova_compute[260935]: 2025-10-11 09:57:56.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:57:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:57:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:01 np0005481065 nova_compute[260935]: 2025-10-11 09:58:01.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:58:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:06 np0005481065 nova_compute[260935]: 2025-10-11 09:58:06.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:06 np0005481065 nova_compute[260935]: 2025-10-11 09:58:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:06 np0005481065 nova_compute[260935]: 2025-10-11 09:58:06.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:07 np0005481065 podman[448897]: 2025-10-11 09:58:07.788935782 +0000 UTC m=+0.083910572 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 05:58:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:08 np0005481065 nova_compute[260935]: 2025-10-11 09:58:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:08 np0005481065 nova_compute[260935]: 2025-10-11 09:58:08.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:58:09 np0005481065 nova_compute[260935]: 2025-10-11 09:58:09.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:09 np0005481065 nova_compute[260935]: 2025-10-11 09:58:09.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:58:09 np0005481065 podman[449091]: 2025-10-11 09:58:09.736376927 +0000 UTC m=+0.095611013 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:58:09 np0005481065 podman[449091]: 2025-10-11 09:58:09.863238932 +0000 UTC m=+0.222473018 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:58:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:10 np0005481065 nova_compute[260935]: 2025-10-11 09:58:10.262 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:58:10 np0005481065 nova_compute[260935]: 2025-10-11 09:58:10.262 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:58:10 np0005481065 nova_compute[260935]: 2025-10-11 09:58:10.263 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:58:10 np0005481065 nova_compute[260935]: 2025-10-11 09:58:10.494 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:58:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:58:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:10 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:58:10 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.252 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.351 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.352 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.353 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:11 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev f27b023a-b34f-412d-b7d9-4cdbf9496c0e does not exist
Oct 11 05:58:11 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5cb2eb0b-42bc-45e3-9b27-a1ed5cd0f2cd does not exist
Oct 11 05:58:11 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev db082280-b58a-4c2c-9600-8994fb5b7418 does not exist
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:58:11 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:11 np0005481065 nova_compute[260935]: 2025-10-11 09:58:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:12 np0005481065 podman[449430]: 2025-10-11 09:58:12.140696554 +0000 UTC m=+0.058458053 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 11 05:58:12 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:58:12 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794627343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.258 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.359 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.369 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.370 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.574 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2789MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.701 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.703 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.744485117 +0000 UTC m=+0.049442798 container create 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 05:58:12 np0005481065 systemd[1]: Started libpod-conmon-0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0.scope.
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.718667768 +0000 UTC m=+0.023625499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:58:12 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.854090695 +0000 UTC m=+0.159048386 container init 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.867644898 +0000 UTC m=+0.172602579 container start 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 05:58:12 np0005481065 nova_compute[260935]: 2025-10-11 09:58:12.867 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.871857517 +0000 UTC m=+0.176815208 container attach 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:58:12 np0005481065 vigorous_leakey[449585]: 167 167
Oct 11 05:58:12 np0005481065 systemd[1]: libpod-0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0.scope: Deactivated successfully.
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.877525257 +0000 UTC m=+0.182482938 container died 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 05:58:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:58:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:12 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:58:12 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b944457e05b903761e14f4e278578ffcae66232272c726b614a78bd4ead40b30-merged.mount: Deactivated successfully.
Oct 11 05:58:12 np0005481065 podman[449568]: 2025-10-11 09:58:12.943343487 +0000 UTC m=+0.248301168 container remove 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:58:12 np0005481065 systemd[1]: libpod-conmon-0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0.scope: Deactivated successfully.
Oct 11 05:58:13 np0005481065 podman[449630]: 2025-10-11 09:58:13.214071538 +0000 UTC m=+0.073981902 container create d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:58:13 np0005481065 systemd[1]: Started libpod-conmon-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope.
Oct 11 05:58:13 np0005481065 podman[449630]: 2025-10-11 09:58:13.187894068 +0000 UTC m=+0.047804542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:58:13 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:58:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:13 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:13 np0005481065 podman[449630]: 2025-10-11 09:58:13.334392448 +0000 UTC m=+0.194302872 container init d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 05:58:13 np0005481065 podman[449630]: 2025-10-11 09:58:13.349422653 +0000 UTC m=+0.209333057 container start d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 05:58:13 np0005481065 podman[449630]: 2025-10-11 09:58:13.354684482 +0000 UTC m=+0.214594886 container attach d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:58:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:58:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416026574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:58:13 np0005481065 nova_compute[260935]: 2025-10-11 09:58:13.382 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 05:58:13 np0005481065 nova_compute[260935]: 2025-10-11 09:58:13.391 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 05:58:13 np0005481065 nova_compute[260935]: 2025-10-11 09:58:13.415 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 05:58:13 np0005481065 nova_compute[260935]: 2025-10-11 09:58:13.417 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 05:58:13 np0005481065 nova_compute[260935]: 2025-10-11 09:58:13.417 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:58:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:14 np0005481065 lucid_jones[449647]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:58:14 np0005481065 lucid_jones[449647]: --> relative data size: 1.0
Oct 11 05:58:14 np0005481065 lucid_jones[449647]: --> All data devices are unavailable
Oct 11 05:58:14 np0005481065 systemd[1]: libpod-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope: Deactivated successfully.
Oct 11 05:58:14 np0005481065 podman[449630]: 2025-10-11 09:58:14.642345312 +0000 UTC m=+1.502255706 container died d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:58:14 np0005481065 systemd[1]: libpod-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope: Consumed 1.222s CPU time.
Oct 11 05:58:14 np0005481065 systemd[1]: var-lib-containers-storage-overlay-d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f-merged.mount: Deactivated successfully.
Oct 11 05:58:14 np0005481065 podman[449630]: 2025-10-11 09:58:14.730880224 +0000 UTC m=+1.590790628 container remove d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:58:14 np0005481065 systemd[1]: libpod-conmon-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope: Deactivated successfully.
Oct 11 05:58:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:15 np0005481065 podman[449715]: 2025-10-11 09:58:15.05306554 +0000 UTC m=+0.105675848 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:58:15 np0005481065 podman[449716]: 2025-10-11 09:58:15.100637584 +0000 UTC m=+0.154493287 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 05:58:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:58:15.256 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 05:58:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:58:15.257 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 05:58:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:58:15.257 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.692266883 +0000 UTC m=+0.072000646 container create 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.664341264 +0000 UTC m=+0.044075067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:58:15 np0005481065 systemd[1]: Started libpod-conmon-81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8.scope.
Oct 11 05:58:15 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.866757634 +0000 UTC m=+0.246491447 container init 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.8807554 +0000 UTC m=+0.260489163 container start 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.884853596 +0000 UTC m=+0.264587369 container attach 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:58:15 np0005481065 amazing_mestorf[449893]: 167 167
Oct 11 05:58:15 np0005481065 systemd[1]: libpod-81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8.scope: Deactivated successfully.
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.889985551 +0000 UTC m=+0.269719324 container died 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:58:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 11 05:58:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 11 05:58:15 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 11 05:58:15 np0005481065 systemd[1]: var-lib-containers-storage-overlay-60f33e46cb80cd284e8563fd6931ed5f2efefa213fe63d8000afeb679abd4519-merged.mount: Deactivated successfully.
Oct 11 05:58:15 np0005481065 podman[449876]: 2025-10-11 09:58:15.96001944 +0000 UTC m=+0.339753213 container remove 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 05:58:15 np0005481065 systemd[1]: libpod-conmon-81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8.scope: Deactivated successfully.
Oct 11 05:58:16 np0005481065 podman[449916]: 2025-10-11 09:58:16.23602711 +0000 UTC m=+0.083297615 container create 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 05:58:16 np0005481065 podman[449916]: 2025-10-11 09:58:16.203927663 +0000 UTC m=+0.051198178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:58:16 np0005481065 systemd[1]: Started libpod-conmon-7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2.scope.
Oct 11 05:58:16 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:58:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:16 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:16 np0005481065 podman[449916]: 2025-10-11 09:58:16.376037827 +0000 UTC m=+0.223308382 container init 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:58:16 np0005481065 podman[449916]: 2025-10-11 09:58:16.392009618 +0000 UTC m=+0.239280143 container start 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:58:16 np0005481065 podman[449916]: 2025-10-11 09:58:16.397277317 +0000 UTC m=+0.244547872 container attach 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 05:58:16 np0005481065 nova_compute[260935]: 2025-10-11 09:58:16.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]: {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:    "0": [
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:        {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "devices": [
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "/dev/loop3"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            ],
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_name": "ceph_lv0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_size": "21470642176",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "name": "ceph_lv0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "tags": {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cluster_name": "ceph",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.crush_device_class": "",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.encrypted": "0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osd_id": "0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.type": "block",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.vdo": "0"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            },
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "type": "block",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "vg_name": "ceph_vg0"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:        }
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:    ],
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:    "1": [
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:        {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "devices": [
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "/dev/loop4"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            ],
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_name": "ceph_lv1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_size": "21470642176",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "name": "ceph_lv1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "tags": {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cluster_name": "ceph",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.crush_device_class": "",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.encrypted": "0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osd_id": "1",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.type": "block",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.vdo": "0"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            },
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "type": "block",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "vg_name": "ceph_vg1"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:        }
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:    ],
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:    "2": [
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:        {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "devices": [
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "/dev/loop5"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            ],
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_name": "ceph_lv2",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_size": "21470642176",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "name": "ceph_lv2",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "tags": {
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.cluster_name": "ceph",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.crush_device_class": "",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.encrypted": "0",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osd_id": "2",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.type": "block",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:                "ceph.vdo": "0"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            },
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "type": "block",
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:            "vg_name": "ceph_vg2"
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:        }
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]:    ]
Oct 11 05:58:17 np0005481065 gallant_heyrovsky[449933]: }
Oct 11 05:58:17 np0005481065 systemd[1]: libpod-7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2.scope: Deactivated successfully.
Oct 11 05:58:17 np0005481065 podman[449942]: 2025-10-11 09:58:17.244964203 +0000 UTC m=+0.027759355 container died 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:58:17 np0005481065 systemd[1]: var-lib-containers-storage-overlay-aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb-merged.mount: Deactivated successfully.
Oct 11 05:58:17 np0005481065 podman[449942]: 2025-10-11 09:58:17.302455368 +0000 UTC m=+0.085250490 container remove 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:58:17 np0005481065 systemd[1]: libpod-conmon-7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2.scope: Deactivated successfully.
Oct 11 05:58:17 np0005481065 nova_compute[260935]: 2025-10-11 09:58:17.417 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.191082612 +0000 UTC m=+0.073542520 container create 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:58:18 np0005481065 systemd[1]: Started libpod-conmon-176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a.scope.
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.161548657 +0000 UTC m=+0.044008655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:58:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.341476792 +0000 UTC m=+0.223936730 container init 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.354290084 +0000 UTC m=+0.236750002 container start 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.357122434 +0000 UTC m=+0.239582342 container attach 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:58:18 np0005481065 interesting_edison[450114]: 167 167
Oct 11 05:58:18 np0005481065 systemd[1]: libpod-176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a.scope: Deactivated successfully.
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.363779452 +0000 UTC m=+0.246239420 container died 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:58:18 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b9f615b1db19b770ea28e079725710d4aa7b52ee76f317208f6d9129d401ee43-merged.mount: Deactivated successfully.
Oct 11 05:58:18 np0005481065 podman[450098]: 2025-10-11 09:58:18.419225769 +0000 UTC m=+0.301685687 container remove 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:58:18 np0005481065 systemd[1]: libpod-conmon-176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a.scope: Deactivated successfully.
Oct 11 05:58:18 np0005481065 podman[450137]: 2025-10-11 09:58:18.605638597 +0000 UTC m=+0.057092664 container create 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 05:58:18 np0005481065 systemd[1]: Started libpod-conmon-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope.
Oct 11 05:58:18 np0005481065 podman[450137]: 2025-10-11 09:58:18.582711519 +0000 UTC m=+0.034165596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:58:18 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:58:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:18 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:58:18 np0005481065 podman[450137]: 2025-10-11 09:58:18.71507043 +0000 UTC m=+0.166524497 container init 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 05:58:18 np0005481065 podman[450137]: 2025-10-11 09:58:18.733058728 +0000 UTC m=+0.184512785 container start 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 05:58:18 np0005481065 podman[450137]: 2025-10-11 09:58:18.736421303 +0000 UTC m=+0.187875440 container attach 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:58:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 11 05:58:18 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 11 05:58:18 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 11 05:58:19 np0005481065 nova_compute[260935]: 2025-10-11 09:58:19.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]: {
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "osd_id": 2,
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "type": "bluestore"
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:    },
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "osd_id": 0,
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "type": "bluestore"
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:    },
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "osd_id": 1,
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:        "type": "bluestore"
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]:    }
Oct 11 05:58:19 np0005481065 thirsty_grothendieck[450154]: }
Oct 11 05:58:19 np0005481065 systemd[1]: libpod-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope: Deactivated successfully.
Oct 11 05:58:19 np0005481065 podman[450137]: 2025-10-11 09:58:19.777924466 +0000 UTC m=+1.229378523 container died 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 05:58:19 np0005481065 systemd[1]: libpod-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope: Consumed 1.038s CPU time.
Oct 11 05:58:19 np0005481065 systemd[1]: var-lib-containers-storage-overlay-0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55-merged.mount: Deactivated successfully.
Oct 11 05:58:19 np0005481065 podman[450137]: 2025-10-11 09:58:19.855592251 +0000 UTC m=+1.307046278 container remove 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:58:19 np0005481065 systemd[1]: libpod-conmon-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope: Deactivated successfully.
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:58:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 11 05:58:19 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4f70e8eb-2026-4b36-8718-87aa66cea957 does not exist
Oct 11 05:58:19 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ae41963f-5489-46a5-9a9b-b2077457664f does not exist
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:19 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:58:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 11 05:58:21 np0005481065 nova_compute[260935]: 2025-10-11 09:58:21.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:21 np0005481065 nova_compute[260935]: 2025-10-11 09:58:21.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct 11 05:58:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct 11 05:58:23 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct 11 05:58:23 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct 11 05:58:24 np0005481065 nova_compute[260935]: 2025-10-11 09:58:24.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:58:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:58:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:58:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:58:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:58:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:58:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct 11 05:58:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct 11 05:58:24 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct 11 05:58:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 11 05:58:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:58:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671261316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:58:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:58:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671261316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:58:27 np0005481065 nova_compute[260935]: 2025-10-11 09:58:26.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:27 np0005481065 nova_compute[260935]: 2025-10-11 09:58:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:27 np0005481065 nova_compute[260935]: 2025-10-11 09:58:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:58:27 np0005481065 nova_compute[260935]: 2025-10-11 09:58:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:27 np0005481065 nova_compute[260935]: 2025-10-11 09:58:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:27 np0005481065 nova_compute[260935]: 2025-10-11 09:58:27.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Oct 11 05:58:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Oct 11 05:58:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 11 05:58:32 np0005481065 nova_compute[260935]: 2025-10-11 09:58:32.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 11 05:58:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.9 MiB/s wr, 10 op/s
Oct 11 05:58:37 np0005481065 nova_compute[260935]: 2025-10-11 09:58:37.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:37 np0005481065 nova_compute[260935]: 2025-10-11 09:58:37.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:37 np0005481065 nova_compute[260935]: 2025-10-11 09:58:37.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:58:37 np0005481065 nova_compute[260935]: 2025-10-11 09:58:37.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:37 np0005481065 nova_compute[260935]: 2025-10-11 09:58:37.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:37 np0005481065 nova_compute[260935]: 2025-10-11 09:58:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Oct 11 05:58:38 np0005481065 podman[450253]: 2025-10-11 09:58:38.815387147 +0000 UTC m=+0.103032482 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 05:58:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:41 np0005481065 nova_compute[260935]: 2025-10-11 09:58:41.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:58:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:42 np0005481065 nova_compute[260935]: 2025-10-11 09:58:42.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:42 np0005481065 podman[450273]: 2025-10-11 09:58:42.811652134 +0000 UTC m=+0.110774011 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true)
Oct 11 05:58:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:45 np0005481065 podman[450293]: 2025-10-11 09:58:45.788083529 +0000 UTC m=+0.082100461 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 05:58:45 np0005481065 podman[450294]: 2025-10-11 09:58:45.819560429 +0000 UTC m=+0.110485064 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 05:58:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:47 np0005481065 nova_compute[260935]: 2025-10-11 09:58:47.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:52 np0005481065 nova_compute[260935]: 2025-10-11 09:58:52.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:52 np0005481065 nova_compute[260935]: 2025-10-11 09:58:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:52 np0005481065 nova_compute[260935]: 2025-10-11 09:58:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:58:52 np0005481065 nova_compute[260935]: 2025-10-11 09:58:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:52 np0005481065 nova_compute[260935]: 2025-10-11 09:58:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:58:52 np0005481065 nova_compute[260935]: 2025-10-11 09:58:52.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:58:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:58:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:58:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:58:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:58:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:58:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:58:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:58:55
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:58:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:57 np0005481065 nova_compute[260935]: 2025-10-11 09:58:57.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:57 np0005481065 nova_compute[260935]: 2025-10-11 09:58:57.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:58:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:58:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:02 np0005481065 nova_compute[260935]: 2025-10-11 09:59:02.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:02 np0005481065 nova_compute[260935]: 2025-10-11 09:59:02.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:02 np0005481065 nova_compute[260935]: 2025-10-11 09:59:02.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:02 np0005481065 nova_compute[260935]: 2025-10-11 09:59:02.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:02 np0005481065 nova_compute[260935]: 2025-10-11 09:59:02.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:02 np0005481065 nova_compute[260935]: 2025-10-11 09:59:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 05:59:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:07 np0005481065 nova_compute[260935]: 2025-10-11 09:59:07.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:07 np0005481065 nova_compute[260935]: 2025-10-11 09:59:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:08 np0005481065 nova_compute[260935]: 2025-10-11 09:59:08.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:08 np0005481065 nova_compute[260935]: 2025-10-11 09:59:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:09 np0005481065 nova_compute[260935]: 2025-10-11 09:59:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:09 np0005481065 nova_compute[260935]: 2025-10-11 09:59:09.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 05:59:09 np0005481065 podman[450338]: 2025-10-11 09:59:09.789246944 +0000 UTC m=+0.076223246 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:59:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:10 np0005481065 nova_compute[260935]: 2025-10-11 09:59:10.283 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 05:59:10 np0005481065 nova_compute[260935]: 2025-10-11 09:59:10.284 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 05:59:10 np0005481065 nova_compute[260935]: 2025-10-11 09:59:10.284 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 05:59:10 np0005481065 nova_compute[260935]: 2025-10-11 09:59:10.671 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 05:59:11 np0005481065 nova_compute[260935]: 2025-10-11 09:59:11.269 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 05:59:11 np0005481065 nova_compute[260935]: 2025-10-11 09:59:11.294 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 05:59:11 np0005481065 nova_compute[260935]: 2025-10-11 09:59:11.294 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 05:59:11 np0005481065 nova_compute[260935]: 2025-10-11 09:59:11.295 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:11 np0005481065 nova_compute[260935]: 2025-10-11 09:59:11.295 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:11 np0005481065 nova_compute[260935]: 2025-10-11 09:59:11.296 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 05:59:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.731 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 05:59:12 np0005481065 nova_compute[260935]: 2025-10-11 09:59:12.731 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:59:13 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:59:13 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494995148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.165 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.271 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.272 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.273 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.278 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.278 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.284 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.284 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.482 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.484 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2798MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.484 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.484 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.622 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 05:59:13 np0005481065 nova_compute[260935]: 2025-10-11 09:59:13.696 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 05:59:13 np0005481065 podman[450380]: 2025-10-11 09:59:13.794056192 +0000 UTC m=+0.095424068 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 05:59:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 05:59:14 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112852372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 05:59:14 np0005481065 nova_compute[260935]: 2025-10-11 09:59:14.227 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 05:59:14 np0005481065 nova_compute[260935]: 2025-10-11 09:59:14.235 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 05:59:14 np0005481065 nova_compute[260935]: 2025-10-11 09:59:14.257 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 05:59:14 np0005481065 nova_compute[260935]: 2025-10-11 09:59:14.260 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 05:59:14 np0005481065 nova_compute[260935]: 2025-10-11 09:59:14.260 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:59:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:59:15.258 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 05:59:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:59:15.258 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 05:59:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 09:59:15.259 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 05:59:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:16 np0005481065 podman[450423]: 2025-10-11 09:59:16.773779831 +0000 UTC m=+0.068308412 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 05:59:16 np0005481065 podman[450424]: 2025-10-11 09:59:16.828050074 +0000 UTC m=+0.107382895 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:17 np0005481065 nova_compute[260935]: 2025-10-11 09:59:17.263 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:59:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 10d52c31-aa51-4ef7-863c-25122cb66f49 does not exist
Oct 11 05:59:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a4acc30b-e38f-435c-aa84-681d7808e14a does not exist
Oct 11 05:59:21 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev ddc13413-9c33-45ef-90a4-5628a069fa3f does not exist
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:59:21 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 05:59:21 np0005481065 nova_compute[260935]: 2025-10-11 09:59:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.060364943 +0000 UTC m=+0.064206485 container create 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:59:22 np0005481065 systemd[1]: Started libpod-conmon-0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d.scope.
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.034307637 +0000 UTC m=+0.038149249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:59:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.181238449 +0000 UTC m=+0.185080071 container init 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.194181525 +0000 UTC m=+0.198023087 container start 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.198917119 +0000 UTC m=+0.202758721 container attach 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 05:59:22 np0005481065 boring_liskov[450759]: 167 167
Oct 11 05:59:22 np0005481065 systemd[1]: libpod-0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d.scope: Deactivated successfully.
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.2021267 +0000 UTC m=+0.205968222 container died 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 05:59:22 np0005481065 systemd[1]: var-lib-containers-storage-overlay-afaecf35206d19c044ae2cbffc9537dd40dc8d1944fcb9a42324d2fbb687f3f9-merged.mount: Deactivated successfully.
Oct 11 05:59:22 np0005481065 nova_compute[260935]: 2025-10-11 09:59:22.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:22 np0005481065 nova_compute[260935]: 2025-10-11 09:59:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:22 np0005481065 nova_compute[260935]: 2025-10-11 09:59:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:22 np0005481065 nova_compute[260935]: 2025-10-11 09:59:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:22 np0005481065 podman[450742]: 2025-10-11 09:59:22.247844792 +0000 UTC m=+0.251686314 container remove 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 05:59:22 np0005481065 nova_compute[260935]: 2025-10-11 09:59:22.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:22 np0005481065 nova_compute[260935]: 2025-10-11 09:59:22.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:22 np0005481065 systemd[1]: libpod-conmon-0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d.scope: Deactivated successfully.
Oct 11 05:59:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct 11 05:59:22 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct 11 05:59:22 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct 11 05:59:22 np0005481065 podman[450781]: 2025-10-11 09:59:22.513665644 +0000 UTC m=+0.074241109 container create c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 05:59:22 np0005481065 systemd[1]: Started libpod-conmon-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope.
Oct 11 05:59:22 np0005481065 podman[450781]: 2025-10-11 09:59:22.483072449 +0000 UTC m=+0.043648004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:59:22 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:59:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:22 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:22 np0005481065 podman[450781]: 2025-10-11 09:59:22.63665044 +0000 UTC m=+0.197225975 container init c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:59:22 np0005481065 podman[450781]: 2025-10-11 09:59:22.64834856 +0000 UTC m=+0.208924035 container start c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 05:59:22 np0005481065 podman[450781]: 2025-10-11 09:59:22.653736083 +0000 UTC m=+0.214311558 container attach c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 05:59:23 np0005481065 modest_keldysh[450797]: --> passed data devices: 0 physical, 3 LVM
Oct 11 05:59:23 np0005481065 modest_keldysh[450797]: --> relative data size: 1.0
Oct 11 05:59:23 np0005481065 modest_keldysh[450797]: --> All data devices are unavailable
Oct 11 05:59:23 np0005481065 systemd[1]: libpod-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope: Deactivated successfully.
Oct 11 05:59:23 np0005481065 systemd[1]: libpod-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope: Consumed 1.055s CPU time.
Oct 11 05:59:23 np0005481065 podman[450827]: 2025-10-11 09:59:23.846343266 +0000 UTC m=+0.037933913 container died c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 05:59:23 np0005481065 systemd[1]: var-lib-containers-storage-overlay-3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94-merged.mount: Deactivated successfully.
Oct 11 05:59:23 np0005481065 podman[450827]: 2025-10-11 09:59:23.917687802 +0000 UTC m=+0.109278439 container remove c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:59:23 np0005481065 systemd[1]: libpod-conmon-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope: Deactivated successfully.
Oct 11 05:59:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 3 op/s
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.738456857 +0000 UTC m=+0.053008719 container create abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:59:24 np0005481065 systemd[1]: Started libpod-conmon-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope.
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.71058358 +0000 UTC m=+0.025135452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:59:24 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.857257655 +0000 UTC m=+0.171809537 container init abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.864695175 +0000 UTC m=+0.179247017 container start abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.868238395 +0000 UTC m=+0.182790267 container attach abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 05:59:24 np0005481065 strange_allen[450999]: 167 167
Oct 11 05:59:24 np0005481065 systemd[1]: libpod-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope: Deactivated successfully.
Oct 11 05:59:24 np0005481065 conmon[450999]: conmon abb29db944f37377aaeb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope/container/memory.events
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.873580956 +0000 UTC m=+0.188132788 container died abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 05:59:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:59:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:59:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:59:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:59:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:59:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:59:24 np0005481065 systemd[1]: var-lib-containers-storage-overlay-58fa6a392f9c5c8153333b31cb8a2122579d85edf674470fbc4b5d41df8f2bcc-merged.mount: Deactivated successfully.
Oct 11 05:59:24 np0005481065 podman[450983]: 2025-10-11 09:59:24.921152081 +0000 UTC m=+0.235703943 container remove abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 05:59:24 np0005481065 systemd[1]: libpod-conmon-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope: Deactivated successfully.
Oct 11 05:59:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:25 np0005481065 podman[451024]: 2025-10-11 09:59:25.154429633 +0000 UTC m=+0.049489340 container create 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 05:59:25 np0005481065 systemd[1]: Started libpod-conmon-1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0.scope.
Oct 11 05:59:25 np0005481065 podman[451024]: 2025-10-11 09:59:25.13343884 +0000 UTC m=+0.028498537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:59:25 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:59:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:25 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:25 np0005481065 podman[451024]: 2025-10-11 09:59:25.258274498 +0000 UTC m=+0.153334195 container init 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 05:59:25 np0005481065 podman[451024]: 2025-10-11 09:59:25.273755375 +0000 UTC m=+0.168815062 container start 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 05:59:25 np0005481065 podman[451024]: 2025-10-11 09:59:25.277265334 +0000 UTC m=+0.172325021 container attach 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 05:59:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 3 op/s
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]: {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:    "0": [
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:        {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "devices": [
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "/dev/loop3"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            ],
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_name": "ceph_lv0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_size": "21470642176",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "name": "ceph_lv0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "tags": {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cluster_name": "ceph",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.crush_device_class": "",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.encrypted": "0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osd_id": "0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.type": "block",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.vdo": "0"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            },
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "type": "block",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "vg_name": "ceph_vg0"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:        }
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:    ],
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:    "1": [
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:        {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "devices": [
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "/dev/loop4"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            ],
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_name": "ceph_lv1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_size": "21470642176",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "name": "ceph_lv1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "tags": {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cluster_name": "ceph",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.crush_device_class": "",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.encrypted": "0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osd_id": "1",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.type": "block",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.vdo": "0"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            },
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "type": "block",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "vg_name": "ceph_vg1"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:        }
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:    ],
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:    "2": [
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:        {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "devices": [
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "/dev/loop5"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            ],
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_name": "ceph_lv2",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_size": "21470642176",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "name": "ceph_lv2",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "tags": {
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cephx_lockbox_secret": "",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.cluster_name": "ceph",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.crush_device_class": "",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.encrypted": "0",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osd_id": "2",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.type": "block",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:                "ceph.vdo": "0"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            },
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "type": "block",
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:            "vg_name": "ceph_vg2"
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:        }
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]:    ]
Oct 11 05:59:26 np0005481065 ecstatic_rhodes[451041]: }
Oct 11 05:59:26 np0005481065 systemd[1]: libpod-1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0.scope: Deactivated successfully.
Oct 11 05:59:26 np0005481065 podman[451024]: 2025-10-11 09:59:26.050475266 +0000 UTC m=+0.945534983 container died 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:59:26 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697-merged.mount: Deactivated successfully.
Oct 11 05:59:26 np0005481065 podman[451024]: 2025-10-11 09:59:26.121970317 +0000 UTC m=+1.017029994 container remove 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 05:59:26 np0005481065 systemd[1]: libpod-conmon-1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0.scope: Deactivated successfully.
Oct 11 05:59:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 05:59:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581551438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 05:59:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 05:59:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581551438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 05:59:26 np0005481065 podman[451205]: 2025-10-11 09:59:26.867683741 +0000 UTC m=+0.046620959 container create 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:59:26 np0005481065 systemd[1]: Started libpod-conmon-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope.
Oct 11 05:59:26 np0005481065 podman[451205]: 2025-10-11 09:59:26.847880591 +0000 UTC m=+0.026817839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:59:26 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:59:26 np0005481065 podman[451205]: 2025-10-11 09:59:26.968114869 +0000 UTC m=+0.147052097 container init 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 05:59:26 np0005481065 podman[451205]: 2025-10-11 09:59:26.975942051 +0000 UTC m=+0.154879289 container start 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 05:59:26 np0005481065 podman[451205]: 2025-10-11 09:59:26.980095458 +0000 UTC m=+0.159032686 container attach 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:59:26 np0005481065 suspicious_dhawan[451222]: 167 167
Oct 11 05:59:26 np0005481065 systemd[1]: libpod-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope: Deactivated successfully.
Oct 11 05:59:26 np0005481065 conmon[451222]: conmon 499da56bf67a5650a3bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope/container/memory.events
Oct 11 05:59:26 np0005481065 podman[451205]: 2025-10-11 09:59:26.986296173 +0000 UTC m=+0.165233401 container died 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 05:59:27 np0005481065 systemd[1]: var-lib-containers-storage-overlay-18d2c06357c007da8c6dc4793ea4e36b88fea493aab99ab5baba56b8ab98baf2-merged.mount: Deactivated successfully.
Oct 11 05:59:27 np0005481065 podman[451205]: 2025-10-11 09:59:27.024196264 +0000 UTC m=+0.203133512 container remove 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:59:27 np0005481065 systemd[1]: libpod-conmon-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope: Deactivated successfully.
Oct 11 05:59:27 np0005481065 podman[451245]: 2025-10-11 09:59:27.254464011 +0000 UTC m=+0.065877613 container create 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 05:59:27 np0005481065 nova_compute[260935]: 2025-10-11 09:59:27.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:27 np0005481065 nova_compute[260935]: 2025-10-11 09:59:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:27 np0005481065 systemd[1]: Started libpod-conmon-3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362.scope.
Oct 11 05:59:27 np0005481065 podman[451245]: 2025-10-11 09:59:27.226962164 +0000 UTC m=+0.038375816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 05:59:27 np0005481065 systemd[1]: Started libcrun container.
Oct 11 05:59:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:27 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 05:59:27 np0005481065 podman[451245]: 2025-10-11 09:59:27.363386139 +0000 UTC m=+0.174799771 container init 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 05:59:27 np0005481065 podman[451245]: 2025-10-11 09:59:27.378530437 +0000 UTC m=+0.189943989 container start 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 05:59:27 np0005481065 podman[451245]: 2025-10-11 09:59:27.382255752 +0000 UTC m=+0.193669404 container attach 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 05:59:27 np0005481065 nova_compute[260935]: 2025-10-11 09:59:27.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:27 np0005481065 nova_compute[260935]: 2025-10-11 09:59:27.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 05:59:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 05:59:28 np0005481065 busy_darwin[451262]: {
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "osd_id": 2,
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "type": "bluestore"
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:    },
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "osd_id": 0,
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "type": "bluestore"
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:    },
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "osd_id": 1,
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:        "type": "bluestore"
Oct 11 05:59:28 np0005481065 busy_darwin[451262]:    }
Oct 11 05:59:28 np0005481065 busy_darwin[451262]: }
Oct 11 05:59:28 np0005481065 systemd[1]: libpod-3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362.scope: Deactivated successfully.
Oct 11 05:59:28 np0005481065 podman[451245]: 2025-10-11 09:59:28.346230215 +0000 UTC m=+1.157643777 container died 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 05:59:28 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2-merged.mount: Deactivated successfully.
Oct 11 05:59:28 np0005481065 podman[451245]: 2025-10-11 09:59:28.400982802 +0000 UTC m=+1.212396354 container remove 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 05:59:28 np0005481065 systemd[1]: libpod-conmon-3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362.scope: Deactivated successfully.
Oct 11 05:59:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 05:59:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:59:28 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 05:59:28 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:59:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 43102d20-3b13-4811-a52b-f033bd2cb073 does not exist
Oct 11 05:59:28 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev a4f476f5-e61a-4900-bc94-16e2262c1b07 does not exist
Oct 11 05:59:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:59:29 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 05:59:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct 11 05:59:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct 11 05:59:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 05:59:29 np0005481065 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct 11 05:59:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 23 op/s
Oct 11 05:59:32 np0005481065 nova_compute[260935]: 2025-10-11 09:59:32.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:32 np0005481065 nova_compute[260935]: 2025-10-11 09:59:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:32 np0005481065 nova_compute[260935]: 2025-10-11 09:59:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5047 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:32 np0005481065 nova_compute[260935]: 2025-10-11 09:59:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:32 np0005481065 nova_compute[260935]: 2025-10-11 09:59:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:33 np0005481065 nova_compute[260935]: 2025-10-11 09:59:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 11 05:59:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 11 05:59:37 np0005481065 nova_compute[260935]: 2025-10-11 09:59:37.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:40 np0005481065 podman[451362]: 2025-10-11 09:59:40.760150069 +0000 UTC m=+0.098610528 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 05:59:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:42 np0005481065 nova_compute[260935]: 2025-10-11 09:59:42.762 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:44 np0005481065 podman[451381]: 2025-10-11 09:59:44.802140277 +0000 UTC m=+0.093477992 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid)
Oct 11 05:59:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 05:59:47 np0005481065 nova_compute[260935]: 2025-10-11 09:59:47.728 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 05:59:47 np0005481065 podman[451401]: 2025-10-11 09:59:47.801187413 +0000 UTC m=+0.104753082 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 05:59:47 np0005481065 podman[451402]: 2025-10-11 09:59:47.848416867 +0000 UTC m=+0.132511155 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 05:59:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:51 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:52 np0005481065 nova_compute[260935]: 2025-10-11 09:59:52.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:52 np0005481065 nova_compute[260935]: 2025-10-11 09:59:52.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:52 np0005481065 nova_compute[260935]: 2025-10-11 09:59:52.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5052 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 05:59:52 np0005481065 nova_compute[260935]: 2025-10-11 09:59:52.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:52 np0005481065 nova_compute[260935]: 2025-10-11 09:59:52.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 05:59:52 np0005481065 nova_compute[260935]: 2025-10-11 09:59:52.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:53 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:59:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:59:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:59:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:59:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 05:59:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.968251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176794968355, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1551, "num_deletes": 255, "total_data_size": 2489588, "memory_usage": 2536064, "flush_reason": "Manual Compaction"}
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176794984727, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1483622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72652, "largest_seqno": 74202, "table_properties": {"data_size": 1478055, "index_size": 2770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14226, "raw_average_key_size": 21, "raw_value_size": 1465909, "raw_average_value_size": 2171, "num_data_blocks": 126, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176638, "oldest_key_time": 1760176638, "file_creation_time": 1760176794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 16681 microseconds, and 8703 cpu microseconds.
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.984942) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1483622 bytes OK
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.984968) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.987042) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.987065) EVENT_LOG_v1 {"time_micros": 1760176794987058, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.987090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2482784, prev total WAL file size 2482784, number of live WAL files 2.
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.988445) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303034' seq:72057594037927935, type:22 .. '6D6772737461740033323535' seq:0, type:0; will stop at (end)
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1448KB)], [173(10MB)]
Oct 11 05:59:54 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176794988498, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12536470, "oldest_snapshot_seqno": -1}
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9080 keys, 10195503 bytes, temperature: kUnknown
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176795056154, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10195503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139409, "index_size": 32301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 237620, "raw_average_key_size": 26, "raw_value_size": 9981919, "raw_average_value_size": 1099, "num_data_blocks": 1251, "num_entries": 9080, "num_filter_entries": 9080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.056477) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10195503 bytes
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.057997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.1 rd, 150.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(15.3) write-amplify(6.9) OK, records in: 9530, records dropped: 450 output_compression: NoCompression
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.058055) EVENT_LOG_v1 {"time_micros": 1760176795058041, "job": 108, "event": "compaction_finished", "compaction_time_micros": 67746, "compaction_time_cpu_micros": 49445, "output_level": 6, "num_output_files": 1, "total_output_size": 10195503, "num_input_records": 9530, "num_output_records": 9080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176795058746, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176795062604, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.988310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:59:55 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:59:55
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', 'vms', 'default.rgw.control']
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 05:59:55 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:57 np0005481065 nova_compute[260935]: 2025-10-11 09:59:57.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 05:59:57 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 05:59:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 05:59:59 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:01 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:02 np0005481065 nova_compute[260935]: 2025-10-11 10:00:02.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:03 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 06:00:05 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:07 np0005481065 nova_compute[260935]: 2025-10-11 10:00:07.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:00:07 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1333 writes, 6041 keys, 1333 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s#012Interval WAL: 1333 writes, 1333 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.9      1.19              0.38        54    0.022       0      0       0.0       0.0#012  L6      1/0    9.72 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    157.0    133.1      3.38              1.81        53    0.064    362K    28K       0.0       0.0#012 Sum      1/0    9.72 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0    116.0    118.2      4.57              2.18       107    0.043    362K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.6    142.2    139.9      0.40              0.25        10    0.040     46K   2463       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    157.0    133.1      3.38              1.81        53    0.064    362K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.2      1.19              0.38        53    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.088, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.53 GB write, 0.08 MB/s write, 0.52 GB read, 0.08 MB/s read, 4.6 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 59.69 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00057 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3875,57.12 MB,18.791%) FilterBlock(108,1015.61 KB,0.326252%) IndexBlock(108,1.58 MB,0.518533%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 06:00:07 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:09 np0005481065 nova_compute[260935]: 2025-10-11 10:00:09.728 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:09 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:09 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:10 np0005481065 nova_compute[260935]: 2025-10-11 10:00:10.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:10 np0005481065 nova_compute[260935]: 2025-10-11 10:00:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:10 np0005481065 nova_compute[260935]: 2025-10-11 10:00:10.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 06:00:10 np0005481065 nova_compute[260935]: 2025-10-11 10:00:10.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.301 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.303 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.303 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.304 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.534 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 06:00:11 np0005481065 podman[451448]: 2025-10-11 10:00:11.778070912 +0000 UTC m=+0.069911426 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.851 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.876 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.877 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.877 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:11 np0005481065 nova_compute[260935]: 2025-10-11 10:00:11.877 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 06:00:11 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:12 np0005481065 nova_compute[260935]: 2025-10-11 10:00:12.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:12 np0005481065 nova_compute[260935]: 2025-10-11 10:00:12.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:12 np0005481065 nova_compute[260935]: 2025-10-11 10:00:12.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:12 np0005481065 nova_compute[260935]: 2025-10-11 10:00:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:12 np0005481065 nova_compute[260935]: 2025-10-11 10:00:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:12 np0005481065 nova_compute[260935]: 2025-10-11 10:00:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:13 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:14 np0005481065 nova_compute[260935]: 2025-10-11 10:00:14.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:14 np0005481065 nova_compute[260935]: 2025-10-11 10:00:14.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 06:00:14 np0005481065 nova_compute[260935]: 2025-10-11 10:00:14.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 06:00:14 np0005481065 nova_compute[260935]: 2025-10-11 10:00:14.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 06:00:14 np0005481065 nova_compute[260935]: 2025-10-11 10:00:14.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 06:00:14 np0005481065 nova_compute[260935]: 2025-10-11 10:00:14.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 06:00:14 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 10:00:15.259 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 06:00:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 10:00:15.260 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 06:00:15 np0005481065 ovn_metadata_agent[162810]: 2025-10-11 10:00:15.260 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 06:00:15 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 06:00:15 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439239632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.304 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.427 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.427 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.428 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.433 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.434 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.439 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.440 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.680 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.681 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2828MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 06:00:15 np0005481065 podman[451489]: 2025-10-11 10:00:15.80196687 +0000 UTC m=+0.094516712 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.863 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.865 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.865 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.891 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.920 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.921 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.945 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 06:00:15 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:15 np0005481065 nova_compute[260935]: 2025-10-11 10:00:15.988 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 06:00:16 np0005481065 nova_compute[260935]: 2025-10-11 10:00:16.104 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 06:00:16 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 06:00:16 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759781213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 06:00:16 np0005481065 nova_compute[260935]: 2025-10-11 10:00:16.544 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 06:00:16 np0005481065 nova_compute[260935]: 2025-10-11 10:00:16.550 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 06:00:16 np0005481065 nova_compute[260935]: 2025-10-11 10:00:16.570 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 06:00:16 np0005481065 nova_compute[260935]: 2025-10-11 10:00:16.574 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 06:00:16 np0005481065 nova_compute[260935]: 2025-10-11 10:00:16.574 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 06:00:17 np0005481065 nova_compute[260935]: 2025-10-11 10:00:17.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:17 np0005481065 nova_compute[260935]: 2025-10-11 10:00:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:17 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:18 np0005481065 podman[451532]: 2025-10-11 10:00:18.852393167 +0000 UTC m=+0.143462046 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 06:00:18 np0005481065 podman[451533]: 2025-10-11 10:00:18.877253549 +0000 UTC m=+0.165118677 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 06:00:19 np0005481065 nova_compute[260935]: 2025-10-11 10:00:19.575 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:19 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:19 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:21 np0005481065 nova_compute[260935]: 2025-10-11 10:00:21.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:21 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:22 np0005481065 nova_compute[260935]: 2025-10-11 10:00:22.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:22 np0005481065 nova_compute[260935]: 2025-10-11 10:00:22.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:22 np0005481065 nova_compute[260935]: 2025-10-11 10:00:22.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:22 np0005481065 nova_compute[260935]: 2025-10-11 10:00:22.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:22 np0005481065 nova_compute[260935]: 2025-10-11 10:00:22.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:22 np0005481065 nova_compute[260935]: 2025-10-11 10:00:22.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:23 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 06:00:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 06:00:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 06:00:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 06:00:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 06:00:24 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 06:00:24 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:25 np0005481065 nova_compute[260935]: 2025-10-11 10:00:25.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:25 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 06:00:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572454341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 06:00:26 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 06:00:26 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572454341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 06:00:27 np0005481065 nova_compute[260935]: 2025-10-11 10:00:27.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:27 np0005481065 nova_compute[260935]: 2025-10-11 10:00:27.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:27 np0005481065 nova_compute[260935]: 2025-10-11 10:00:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:27 np0005481065 nova_compute[260935]: 2025-10-11 10:00:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:27 np0005481065 nova_compute[260935]: 2025-10-11 10:00:27.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:27 np0005481065 nova_compute[260935]: 2025-10-11 10:00:27.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:27 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 06:00:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev b374de8c-6e31-484b-91cb-7241b045d8d7 does not exist
Oct 11 06:00:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 4780e8d8-6cf0-4acc-88a1-2c162708eb8a does not exist
Oct 11 06:00:29 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cf456b66-d9c8-4603-9d7c-def0b56b2318 does not exist
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 06:00:29 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:29 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 06:00:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 06:00:30 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.567241698 +0000 UTC m=+0.065670927 container create 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.537203139 +0000 UTC m=+0.035632388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 06:00:30 np0005481065 systemd[1]: Started libpod-conmon-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope.
Oct 11 06:00:30 np0005481065 systemd[1]: Started libcrun container.
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.690481531 +0000 UTC m=+0.188910820 container init 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.70176544 +0000 UTC m=+0.200194679 container start 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.706860424 +0000 UTC m=+0.205289653 container attach 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:30 np0005481065 systemd[1]: libpod-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope: Deactivated successfully.
Oct 11 06:00:30 np0005481065 kind_volhard[451870]: 167 167
Oct 11 06:00:30 np0005481065 conmon[451870]: conmon 533ac78e3ccf1abd62e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope/container/memory.events
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.712523684 +0000 UTC m=+0.210952913 container died 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:30 np0005481065 systemd[1]: var-lib-containers-storage-overlay-07e8904a61dd89ec574ad0412c1ab326d73ff16052c212a8ee89baefb8c9c76b-merged.mount: Deactivated successfully.
Oct 11 06:00:30 np0005481065 podman[451853]: 2025-10-11 10:00:30.772168889 +0000 UTC m=+0.270598128 container remove 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 06:00:30 np0005481065 systemd[1]: libpod-conmon-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope: Deactivated successfully.
Oct 11 06:00:31 np0005481065 podman[451894]: 2025-10-11 10:00:31.049443375 +0000 UTC m=+0.077028928 container create b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 06:00:31 np0005481065 systemd[1]: Started libpod-conmon-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope.
Oct 11 06:00:31 np0005481065 podman[451894]: 2025-10-11 10:00:31.016561986 +0000 UTC m=+0.044147609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 06:00:31 np0005481065 systemd[1]: Started libcrun container.
Oct 11 06:00:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:31 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:31 np0005481065 podman[451894]: 2025-10-11 10:00:31.153555498 +0000 UTC m=+0.181141091 container init b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 06:00:31 np0005481065 podman[451894]: 2025-10-11 10:00:31.170173827 +0000 UTC m=+0.197759340 container start b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 06:00:31 np0005481065 podman[451894]: 2025-10-11 10:00:31.173744618 +0000 UTC m=+0.201330181 container attach b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 06:00:31 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:32 np0005481065 compassionate_antonelli[451911]: --> passed data devices: 0 physical, 3 LVM
Oct 11 06:00:32 np0005481065 compassionate_antonelli[451911]: --> relative data size: 1.0
Oct 11 06:00:32 np0005481065 compassionate_antonelli[451911]: --> All data devices are unavailable
Oct 11 06:00:32 np0005481065 systemd[1]: libpod-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope: Deactivated successfully.
Oct 11 06:00:32 np0005481065 systemd[1]: libpod-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope: Consumed 1.215s CPU time.
Oct 11 06:00:32 np0005481065 podman[451894]: 2025-10-11 10:00:32.480628091 +0000 UTC m=+1.508213674 container died b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 06:00:32 np0005481065 nova_compute[260935]: 2025-10-11 10:00:32.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:32 np0005481065 nova_compute[260935]: 2025-10-11 10:00:32.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:32 np0005481065 nova_compute[260935]: 2025-10-11 10:00:32.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5007 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:32 np0005481065 nova_compute[260935]: 2025-10-11 10:00:32.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:32 np0005481065 nova_compute[260935]: 2025-10-11 10:00:32.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:32 np0005481065 nova_compute[260935]: 2025-10-11 10:00:32.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:32 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881-merged.mount: Deactivated successfully.
Oct 11 06:00:32 np0005481065 podman[451894]: 2025-10-11 10:00:32.612889828 +0000 UTC m=+1.640475381 container remove b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 06:00:32 np0005481065 systemd[1]: libpod-conmon-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope: Deactivated successfully.
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.476364031 +0000 UTC m=+0.044733225 container create 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 06:00:33 np0005481065 systemd[1]: Started libpod-conmon-0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398.scope.
Oct 11 06:00:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.460193364 +0000 UTC m=+0.028562578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.564288736 +0000 UTC m=+0.132658030 container init 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.572691813 +0000 UTC m=+0.141060997 container start 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.576183502 +0000 UTC m=+0.144552736 container attach 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:33 np0005481065 kind_vaughan[452112]: 167 167
Oct 11 06:00:33 np0005481065 systemd[1]: libpod-0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398.scope: Deactivated successfully.
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.581560964 +0000 UTC m=+0.149930208 container died 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 06:00:33 np0005481065 systemd[1]: var-lib-containers-storage-overlay-e3259699e5d84f4b4f8f9dcf79f4f179c0a72e328e418a4cafd108fafdf36161-merged.mount: Deactivated successfully.
Oct 11 06:00:33 np0005481065 podman[452096]: 2025-10-11 10:00:33.628317965 +0000 UTC m=+0.196687159 container remove 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 06:00:33 np0005481065 systemd[1]: libpod-conmon-0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398.scope: Deactivated successfully.
Oct 11 06:00:33 np0005481065 podman[452138]: 2025-10-11 10:00:33.908671598 +0000 UTC m=+0.080491545 container create 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 06:00:33 np0005481065 systemd[1]: Started libpod-conmon-97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04.scope.
Oct 11 06:00:33 np0005481065 podman[452138]: 2025-10-11 10:00:33.886620935 +0000 UTC m=+0.058440932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 06:00:33 np0005481065 systemd[1]: Started libcrun container.
Oct 11 06:00:33 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:33 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:34 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:34 np0005481065 podman[452138]: 2025-10-11 10:00:34.01168826 +0000 UTC m=+0.183508247 container init 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 06:00:34 np0005481065 podman[452138]: 2025-10-11 10:00:34.021505747 +0000 UTC m=+0.193325724 container start 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 06:00:34 np0005481065 podman[452138]: 2025-10-11 10:00:34.025947593 +0000 UTC m=+0.197767630 container attach 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]: {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:    "0": [
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:        {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "devices": [
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "/dev/loop3"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            ],
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_name": "ceph_lv0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_size": "21470642176",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "name": "ceph_lv0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "tags": {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cephx_lockbox_secret": "",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cluster_name": "ceph",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.crush_device_class": "",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.encrypted": "0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osd_id": "0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.type": "block",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.vdo": "0"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            },
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "type": "block",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "vg_name": "ceph_vg0"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:        }
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:    ],
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:    "1": [
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:        {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "devices": [
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "/dev/loop4"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            ],
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_name": "ceph_lv1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_size": "21470642176",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "name": "ceph_lv1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "tags": {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cephx_lockbox_secret": "",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cluster_name": "ceph",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.crush_device_class": "",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.encrypted": "0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osd_id": "1",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.type": "block",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.vdo": "0"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            },
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "type": "block",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "vg_name": "ceph_vg1"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:        }
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:    ],
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:    "2": [
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:        {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "devices": [
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "/dev/loop5"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            ],
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_name": "ceph_lv2",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_size": "21470642176",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "name": "ceph_lv2",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "tags": {
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cephx_lockbox_secret": "",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.cluster_name": "ceph",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.crush_device_class": "",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.encrypted": "0",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osd_id": "2",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.type": "block",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:                "ceph.vdo": "0"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            },
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "type": "block",
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:            "vg_name": "ceph_vg2"
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:        }
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]:    ]
Oct 11 06:00:34 np0005481065 charming_ptolemy[452154]: }
Oct 11 06:00:34 np0005481065 systemd[1]: libpod-97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04.scope: Deactivated successfully.
Oct 11 06:00:34 np0005481065 podman[452138]: 2025-10-11 10:00:34.899909172 +0000 UTC m=+1.071729149 container died 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 06:00:34 np0005481065 systemd[1]: var-lib-containers-storage-overlay-c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a-merged.mount: Deactivated successfully.
Oct 11 06:00:34 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:34 np0005481065 podman[452138]: 2025-10-11 10:00:34.975402365 +0000 UTC m=+1.147222322 container remove 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 06:00:34 np0005481065 systemd[1]: libpod-conmon-97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04.scope: Deactivated successfully.
Oct 11 06:00:35 np0005481065 podman[452317]: 2025-10-11 10:00:35.816278048 +0000 UTC m=+0.053538764 container create 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 06:00:35 np0005481065 systemd[1]: Started libpod-conmon-9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8.scope.
Oct 11 06:00:35 np0005481065 podman[452317]: 2025-10-11 10:00:35.79265476 +0000 UTC m=+0.029915546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 06:00:35 np0005481065 systemd[1]: Started libcrun container.
Oct 11 06:00:35 np0005481065 podman[452317]: 2025-10-11 10:00:35.931751501 +0000 UTC m=+0.169012267 container init 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:35 np0005481065 podman[452317]: 2025-10-11 10:00:35.944588864 +0000 UTC m=+0.181849590 container start 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 06:00:35 np0005481065 podman[452317]: 2025-10-11 10:00:35.949957986 +0000 UTC m=+0.187218732 container attach 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 06:00:35 np0005481065 keen_galileo[452334]: 167 167
Oct 11 06:00:35 np0005481065 systemd[1]: libpod-9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8.scope: Deactivated successfully.
Oct 11 06:00:35 np0005481065 podman[452317]: 2025-10-11 10:00:35.953668031 +0000 UTC m=+0.190928757 container died 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:35 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:35 np0005481065 systemd[1]: var-lib-containers-storage-overlay-414e65d311db1f6e4229d2ba50d1b74184f343bf2aa850e5f5fa3f894dc3bcbc-merged.mount: Deactivated successfully.
Oct 11 06:00:36 np0005481065 podman[452317]: 2025-10-11 10:00:36.010109096 +0000 UTC m=+0.247369822 container remove 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 06:00:36 np0005481065 systemd[1]: libpod-conmon-9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8.scope: Deactivated successfully.
Oct 11 06:00:36 np0005481065 podman[452358]: 2025-10-11 10:00:36.259345129 +0000 UTC m=+0.063603718 container create 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 06:00:36 np0005481065 systemd[1]: Started libpod-conmon-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope.
Oct 11 06:00:36 np0005481065 podman[452358]: 2025-10-11 10:00:36.233715085 +0000 UTC m=+0.037973734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 06:00:36 np0005481065 systemd[1]: Started libcrun container.
Oct 11 06:00:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:36 np0005481065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 06:00:36 np0005481065 podman[452358]: 2025-10-11 10:00:36.368358 +0000 UTC m=+0.172616639 container init 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 06:00:36 np0005481065 podman[452358]: 2025-10-11 10:00:36.391143684 +0000 UTC m=+0.195402283 container start 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 06:00:36 np0005481065 podman[452358]: 2025-10-11 10:00:36.395368203 +0000 UTC m=+0.199626842 container attach 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]: {
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:    "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "osd_id": 2,
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "type": "bluestore"
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:    },
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:    "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "osd_id": 0,
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "type": "bluestore"
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:    },
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:    "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "osd_id": 1,
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:        "type": "bluestore"
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]:    }
Oct 11 06:00:37 np0005481065 hardcore_rubin[452374]: }
Oct 11 06:00:37 np0005481065 systemd[1]: libpod-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope: Deactivated successfully.
Oct 11 06:00:37 np0005481065 podman[452358]: 2025-10-11 10:00:37.44086717 +0000 UTC m=+1.245125739 container died 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 06:00:37 np0005481065 systemd[1]: libpod-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope: Consumed 1.058s CPU time.
Oct 11 06:00:37 np0005481065 systemd[1]: var-lib-containers-storage-overlay-b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5-merged.mount: Deactivated successfully.
Oct 11 06:00:37 np0005481065 podman[452358]: 2025-10-11 10:00:37.489408982 +0000 UTC m=+1.293667551 container remove 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 06:00:37 np0005481065 systemd[1]: libpod-conmon-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope: Deactivated successfully.
Oct 11 06:00:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 06:00:37 np0005481065 nova_compute[260935]: 2025-10-11 10:00:37.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:37 np0005481065 nova_compute[260935]: 2025-10-11 10:00:37.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:37 np0005481065 nova_compute[260935]: 2025-10-11 10:00:37.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:37 np0005481065 nova_compute[260935]: 2025-10-11 10:00:37.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 06:00:37 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 06:00:37 np0005481065 nova_compute[260935]: 2025-10-11 10:00:37.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:37 np0005481065 nova_compute[260935]: 2025-10-11 10:00:37.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:37 np0005481065 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 06:00:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev cd3a34a6-e558-4f51-a62a-998ef42d2266 does not exist
Oct 11 06:00:37 np0005481065 ceph-mgr[74605]: [progress WARNING root] complete: ev 5ff6a686-128a-4a7a-8a26-67a92610d644 does not exist
Oct 11 06:00:37 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 06:00:38 np0005481065 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 06:00:39 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:39 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:41 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:42 np0005481065 nova_compute[260935]: 2025-10-11 10:00:42.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:42 np0005481065 podman[452470]: 2025-10-11 10:00:42.816363364 +0000 UTC m=+0.092942918 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 06:00:43 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:44 np0005481065 nova_compute[260935]: 2025-10-11 10:00:44.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 06:00:44 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:45 np0005481065 systemd-logind[819]: New session 57 of user zuul.
Oct 11 06:00:45 np0005481065 systemd[1]: Started Session 57 of User zuul.
Oct 11 06:00:45 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:46 np0005481065 podman[452527]: 2025-10-11 10:00:46.176236486 +0000 UTC m=+0.096908560 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 06:00:47 np0005481065 nova_compute[260935]: 2025-10-11 10:00:47.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:47 np0005481065 nova_compute[260935]: 2025-10-11 10:00:47.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:47 np0005481065 nova_compute[260935]: 2025-10-11 10:00:47.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:47 np0005481065 nova_compute[260935]: 2025-10-11 10:00:47.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:47 np0005481065 nova_compute[260935]: 2025-10-11 10:00:47.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:47 np0005481065 nova_compute[260935]: 2025-10-11 10:00:47.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:47 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:49 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22893 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:00:49 np0005481065 podman[452695]: 2025-10-11 10:00:49.811590033 +0000 UTC m=+0.102052675 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 06:00:49 np0005481065 podman[452696]: 2025-10-11 10:00:49.846452678 +0000 UTC m=+0.134473321 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 06:00:49 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:49 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:50 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:00:50 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 06:00:50 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069736701' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 06:00:52 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:52 np0005481065 nova_compute[260935]: 2025-10-11 10:00:52.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:52 np0005481065 nova_compute[260935]: 2025-10-11 10:00:52.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:52 np0005481065 nova_compute[260935]: 2025-10-11 10:00:52.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:00:52 np0005481065 nova_compute[260935]: 2025-10-11 10:00:52.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:52 np0005481065 nova_compute[260935]: 2025-10-11 10:00:52.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:52 np0005481065 nova_compute[260935]: 2025-10-11 10:00:52.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:54 np0005481065 ovs-vsctl[452846]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 06:00:54 np0005481065 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 06:00:54 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_10:00:55
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'backups', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 06:00:55 np0005481065 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 06:00:55 np0005481065 virtqemud[260524]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 11 06:00:56 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:56 np0005481065 virtqemud[260524]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 11 06:00:56 np0005481065 virtqemud[260524]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 06:00:56 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: cache status {prefix=cache status} (starting...)
Oct 11 06:00:56 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: client ls {prefix=client ls} (starting...)
Oct 11 06:00:57 np0005481065 lvm[453199]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 06:00:57 np0005481065 lvm[453199]: VG ceph_vg2 finished
Oct 11 06:00:57 np0005481065 lvm[453207]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 06:00:57 np0005481065 lvm[453207]: VG ceph_vg0 finished
Oct 11 06:00:57 np0005481065 lvm[453242]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 06:00:57 np0005481065 lvm[453242]: VG ceph_vg1 finished
Oct 11 06:00:57 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:00:57 np0005481065 kernel: block loop5: the capability attribute has been deprecated.
Oct 11 06:00:57 np0005481065 nova_compute[260935]: 2025-10-11 10:00:57.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:00:57 np0005481065 nova_compute[260935]: 2025-10-11 10:00:57.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:00:57 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: damage ls {prefix=damage ls} (starting...)
Oct 11 06:00:57 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump loads {prefix=dump loads} (starting...)
Oct 11 06:00:57 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22901 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:00:57 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 11 06:00:58 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:00:58 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 11 06:00:58 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 11 06:00:58 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 11 06:00:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 11 06:00:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667685264' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 06:00:58 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 11 06:00:58 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22907 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:00:58 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T10:00:58.668+0000 7f7f1b2f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 06:00:58 np0005481065 ceph-mgr[74605]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 06:00:58 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 11 06:00:58 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 06:00:58 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999565990' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 06:00:58 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: ops {prefix=ops} (starting...)
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650596535' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141713683' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915773846' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054933797' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 06:00:59 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: session ls {prefix=session ls} (starting...)
Oct 11 06:00:59 np0005481065 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: status {prefix=status} (starting...)
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.894648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859894682, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 794, "num_deletes": 251, "total_data_size": 977136, "memory_usage": 991304, "flush_reason": "Manual Compaction"}
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859904077, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 967455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74203, "largest_seqno": 74996, "table_properties": {"data_size": 963399, "index_size": 1771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9231, "raw_average_key_size": 19, "raw_value_size": 955216, "raw_average_value_size": 2032, "num_data_blocks": 79, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176795, "oldest_key_time": 1760176795, "file_creation_time": 1760176859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 9456 microseconds, and 3010 cpu microseconds.
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.904105) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 967455 bytes OK
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.904118) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906059) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906069) EVENT_LOG_v1 {"time_micros": 1760176859906066, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 973127, prev total WAL file size 973127, number of live WAL files 2.
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906509) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(944KB)], [176(9956KB)]
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859906598, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11162958, "oldest_snapshot_seqno": -1}
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9036 keys, 9406681 bytes, temperature: kUnknown
Oct 11 06:00:59 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859983546, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9406681, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9351760, "index_size": 31256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22597, "raw_key_size": 237424, "raw_average_key_size": 26, "raw_value_size": 9195696, "raw_average_value_size": 1017, "num_data_blocks": 1199, "num_entries": 9036, "num_filter_entries": 9036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.000021) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9406681 bytes
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.001980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.8 rd, 122.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(21.3) write-amplify(9.7) OK, records in: 9550, records dropped: 514 output_compression: NoCompression
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.002017) EVENT_LOG_v1 {"time_micros": 1760176860002001, "job": 110, "event": "compaction_finished", "compaction_time_micros": 77101, "compaction_time_cpu_micros": 44227, "output_level": 6, "num_output_files": 1, "total_output_size": 9406681, "num_input_records": 9550, "num_output_records": 9036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176860002729, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 11 06:01:00 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176860007315, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 06:01:00 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800392077' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 06:01:00 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22923 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161272959' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172235930' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 06:01:00 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639512722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 06:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 11 06:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476374198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 06:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 06:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316147005' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 06:01:01 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22935 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:01 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T10:01:01.647+0000 7f7f1b2f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 06:01:01 np0005481065 ceph-mgr[74605]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 06:01:01 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 06:01:01 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1387189187' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 06:01:02 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:01:02 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22941 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 11 06:01:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475225399' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 06:01:02 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22943 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:02 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 11 06:01:02 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412012730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 06:01:02 np0005481065 nova_compute[260935]: 2025-10-11 10:01:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:01:02 np0005481065 nova_compute[260935]: 2025-10-11 10:01:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 06:01:02 np0005481065 nova_compute[260935]: 2025-10-11 10:01:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 06:01:02 np0005481065 nova_compute[260935]: 2025-10-11 10:01:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:01:02 np0005481065 nova_compute[260935]: 2025-10-11 10:01:02.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 06:01:02 np0005481065 nova_compute[260935]: 2025-10-11 10:01:02.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 06:01:02 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22947 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 06:01:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227035867' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 06:01:03 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22951 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 06:01:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647786049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4df1d6d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 40697856 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb3c3000/0x0/0x4ffc00000, data 0x4c992a5/0x4e3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529643 data_alloc: 218103808 data_used: 33161216
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 40697856 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 36585472 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292601856 unmapped: 36470784 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9bc000/0x0/0x4ffc00000, data 0x56a02a5/0x5842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619357 data_alloc: 234881024 data_used: 34316288
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.258290291s of 10.057239532s, submitted: 151
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292642816 unmapped: 36429824 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292651008 unmapped: 36421632 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9b9000/0x0/0x4ffc00000, data 0x56a32a5/0x5845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292651008 unmapped: 36421632 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db02ec00 session 0x55c4dc1fb0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1b3e00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9b9000/0x0/0x4ffc00000, data 0x56a32a5/0x5845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617337 data_alloc: 234881024 data_used: 34308096
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292659200 unmapped: 36413440 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292659200 unmapped: 36413440 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9b9000/0x0/0x4ffc00000, data 0x56a32a5/0x5845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dbe98d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb091000/0x0/0x4ffc00000, data 0x4d7c2a5/0x4f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3513165 data_alloc: 218103808 data_used: 29384704
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e6920c00 session 0x55c4dfbab0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11000 session 0x55c4db568000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.251020432s of 10.060697556s, submitted: 29
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e6920c00 session 0x55c4dc11a5a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765233/0x4905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453472 data_alloc: 218103808 data_used: 28397568
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453472 data_alloc: 218103808 data_used: 28397568
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453472 data_alloc: 218103808 data_used: 28397568
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.098580360s of 15.238180161s, submitted: 38
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4daffd4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddbaa5a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3452562 data_alloc: 218103808 data_used: 28401664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec78e000/0x0/0x4ffc00000, data 0x38cc200/0x3a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db02ec00 session 0x55c4dd81f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292773888 unmapped: 36298752 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292773888 unmapped: 36298752 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292773888 unmapped: 36298752 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 36904960 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 36904960 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.671962738s of 42.242687225s, submitted: 18
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db02ec00 session 0x55c4ddbc4d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11000 session 0x55c4db580780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4ddb485a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddbb8d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e6920c00 session 0x55c4dd17f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390262 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc95000/0x0/0x4ffc00000, data 0x43cc19e/0x4569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc1cb000 session 0x55c4ddaf6f00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd00d000 session 0x55c4dfbaa960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb8b000 session 0x55c4df1d7860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390262 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc95000/0x0/0x4ffc00000, data 0x43cc19e/0x4569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292732928 unmapped: 41598976 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d800 session 0x55c4dd1534a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.103104591s of 10.239775658s, submitted: 11
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4dfbaab40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc95000/0x0/0x4ffc00000, data 0x43cc19e/0x4569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391561 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 41590784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 41590784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3471561 data_alloc: 234881024 data_used: 33812480
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3471561 data_alloc: 234881024 data_used: 33812480
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.165242195s of 12.180531502s, submitted: 4
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297926656 unmapped: 36405248 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297959424 unmapped: 36372480 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3584047 data_alloc: 234881024 data_used: 35147776
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb005000/0x0/0x4ffc00000, data 0x505b1c1/0x51f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb005000/0x0/0x4ffc00000, data 0x505b1c1/0x51f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3582539 data_alloc: 234881024 data_used: 35147776
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eafe6000/0x0/0x4ffc00000, data 0x507a1c1/0x5218000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eafe6000/0x0/0x4ffc00000, data 0x507a1c1/0x5218000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.867265701s of 13.200298309s, submitted: 98
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3582939 data_alloc: 234881024 data_used: 35155968
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddbc4000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddb49a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4ddbab860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0800 session 0x55c4dbe710e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4df73b2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3630133 data_alloc: 234881024 data_used: 35155968
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd2e2d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299122688 unmapped: 35209216 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299130880 unmapped: 35201024 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632322 data_alloc: 234881024 data_used: 35307520
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 35028992 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671522 data_alloc: 234881024 data_used: 40796160
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.157880783s of 19.457237244s, submitted: 21
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 33505280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671874 data_alloc: 234881024 data_used: 40796160
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 30973952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea6c8000/0x0/0x4ffc00000, data 0x59981c1/0x5b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,14,28])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 31129600 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 31129600 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305004544 unmapped: 29327360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305012736 unmapped: 29319168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3708812 data_alloc: 234881024 data_used: 40869888
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305029120 unmapped: 29302784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305029120 unmapped: 29302784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea455000/0x0/0x4ffc00000, data 0x5c0b1c1/0x5da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea455000/0x0/0x4ffc00000, data 0x5c0b1c1/0x5da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3723514 data_alloc: 234881024 data_used: 41152512
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.852592468s of 11.093073845s, submitted: 89
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea455000/0x0/0x4ffc00000, data 0x5c0b1c1/0x5da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721486 data_alloc: 234881024 data_used: 41213952
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbbe780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0800 session 0x55c4dc11ab40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea434000/0x0/0x4ffc00000, data 0x5c2c1c1/0x5dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4ddbc54a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eafd3000/0x0/0x4ffc00000, data 0x508d1c1/0x522b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddacfc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73a000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3592800 data_alloc: 234881024 data_used: 35217408
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.106317520s of 10.223801613s, submitted: 27
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dd97ef00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.3 total, 600.0 interval#012Cumulative writes: 33K writes, 131K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.78 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3725 writes, 14K keys, 3725 commit groups, 1.0 writes per commit group, ingest: 17.79 MB, 0.03 MB/s#012Interval WAL: 3725 writes, 1461 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 40189952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 40189952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 40189952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 40173568 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 40173568 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 51.215015411s of 51.293136597s, submitted: 20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddaf7c20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dc11b860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddaf74a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dbe70960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dbe71a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 39952384 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 39952384 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361388 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 39936000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 39936000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361388 data_alloc: 218103808 data_used: 22568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4df1d6000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 39936000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 39927808 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 39919616 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3389388 data_alloc: 218103808 data_used: 26505216
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3389388 data_alloc: 218103808 data_used: 26505216
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.580312729s of 20.828023911s, submitted: 27
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 39559168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3404442 data_alloc: 218103808 data_used: 26533888
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe1f000/0x0/0x4ffc00000, data 0x3e31200/0x3fcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe1f000/0x0/0x4ffc00000, data 0x3e31200/0x3fcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403254 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe00000/0x0/0x4ffc00000, data 0x3e50200/0x3fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe00000/0x0/0x4ffc00000, data 0x3e50200/0x3fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403254 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.226534843s of 13.379148483s, submitted: 41
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403498 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 37912576 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403146 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403146 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.515163422s of 15.872191429s, submitted: 92
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 37879808 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 37879808 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9a800 session 0x55c4db430780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dbe9b2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbb9860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd17f4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dc11e780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb109000/0x0/0x4ffc00000, data 0x4b47200/0x4ce5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3502292 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 41451520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb109000/0x0/0x4ffc00000, data 0x4b47200/0x4ce5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddb494a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504562 data_alloc: 218103808 data_used: 26537984
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41426944 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41426944 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 40255488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3591574 data_alloc: 234881024 data_used: 36249600
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0x4b47223/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0x4b47223/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3591574 data_alloc: 234881024 data_used: 36249600
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0x4b47223/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.463996887s of 20.706624985s, submitted: 28
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 36036608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301989888 unmapped: 36020224 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3676272 data_alloc: 234881024 data_used: 37085184
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3676272 data_alloc: 234881024 data_used: 37085184
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3677232 data_alloc: 234881024 data_used: 37154816
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dc1d43c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.669697762s of 15.914948463s, submitted: 63
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df1d6b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 43892736 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd92d4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 43859968 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416876 data_alloc: 218103808 data_used: 24150016
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ece29000/0x0/0x4ffc00000, data 0x3e66200/0x4004000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ece29000/0x0/0x4ffc00000, data 0x3e66200/0x4004000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ece29000/0x0/0x4ffc00000, data 0x3e66200/0x4004000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddbaba40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0800 session 0x55c4ddaceb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dd97fa40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 42786816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.936973572s of 36.086498260s, submitted: 52
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbc5a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc42c780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4dfbaba40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbab680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4df73a5a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379140 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd2000/0x0/0x4ffc00000, data 0x3cbf19e/0x3e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd2000/0x0/0x4ffc00000, data 0x3cbf19e/0x3e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379140 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd2000/0x0/0x4ffc00000, data 0x3cbf19e/0x3e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dc1e14a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd97f4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4db430000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4dd92c960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 43925504 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411287 data_alloc: 218103808 data_used: 24174592
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411287 data_alloc: 218103808 data_used: 24174592
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.252729416s of 20.389076233s, submitted: 12
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297803776 unmapped: 40206336 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3481653 data_alloc: 218103808 data_used: 25280512
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 40452096 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53f000/0x0/0x4ffc00000, data 0x45b01d1/0x474f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492885 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.730146408s of 32.096771240s, submitted: 93
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 40411136 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd137680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dbe9b2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddaceb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9cc00 session 0x55c4dd414960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddaf6000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497288 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddaf7680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297869312 unmapped: 40140800 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497420 data_alloc: 218103808 data_used: 25366528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497740 data_alloc: 218103808 data_used: 25399296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497740 data_alloc: 218103808 data_used: 25399296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.066957474s of 19.134149551s, submitted: 19
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298737664 unmapped: 39272448 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 39231488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb302000/0x0/0x4ffc00000, data 0x47e51d1/0x4984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 39231488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 39231488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3525892 data_alloc: 218103808 data_used: 25448448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb286000/0x0/0x4ffc00000, data 0x48611d1/0x4a00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 39223296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 39223296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 39223296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb26f000/0x0/0x4ffc00000, data 0x48801d1/0x4a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522512 data_alloc: 218103808 data_used: 25452544
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 39944192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.252151489s of 13.026142120s, submitted: 40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 39944192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb26f000/0x0/0x4ffc00000, data 0x48801d1/0x4a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc1d4780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbc4000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 39944192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9cc00 session 0x55c4dbe71c20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3494746 data_alloc: 218103808 data_used: 25370624
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x45b41d1/0x4753000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x45b41d1/0x4753000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3494746 data_alloc: 218103808 data_used: 25370624
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x45b41d1/0x4753000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dc11ad20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db430b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298098688 unmapped: 39911424 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.455710411s of 11.519099236s, submitted: 24
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd40b0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd97f0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd81f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9cc00 session 0x55c4db5812c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4daffcb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.055343628s of 20.130304337s, submitted: 18
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dfbab4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd415a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd1923c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385622 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddb483c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbc4f00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebff0000/0x0/0x4ffc00000, data 0x3b001ae/0x3c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1b30e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385490 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1e1860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddbc50e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd92c000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409626 data_alloc: 218103808 data_used: 22437888
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409626 data_alloc: 218103808 data_used: 22437888
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.175285339s of 18.298599243s, submitted: 28
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302612480 unmapped: 35397632 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302612480 unmapped: 35397632 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3484970 data_alloc: 218103808 data_used: 22724608
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea66e000/0x0/0x4ffc00000, data 0x42d81e1/0x4478000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea66e000/0x0/0x4ffc00000, data 0x42d81e1/0x4478000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3484970 data_alloc: 218103808 data_used: 22724608
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479898 data_alloc: 218103808 data_used: 22724608
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.904396057s of 16.287437439s, submitted: 89
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 31571968 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd40b860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4db581e00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dc1b25a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dbe70f00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4de70f400 session 0x55c4dfbabc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516464 data_alloc: 218103808 data_used: 22724608
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 36036608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 36036608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea208000/0x0/0x4ffc00000, data 0x47461e1/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 36028416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea208000/0x0/0x4ffc00000, data 0x47461e1/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301989888 unmapped: 36020224 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddbbe1e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 36012032 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517765 data_alloc: 218103808 data_used: 22724608
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 36012032 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea207000/0x0/0x4ffc00000, data 0x4746204/0x48e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3549765 data_alloc: 218103808 data_used: 27254784
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea207000/0x0/0x4ffc00000, data 0x4746204/0x48e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.904636383s of 13.092563629s, submitted: 11
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea206000/0x0/0x4ffc00000, data 0x4747204/0x48e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3550473 data_alloc: 218103808 data_used: 27267072
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305487872 unmapped: 32522240 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9b2d000/0x0/0x4ffc00000, data 0x4e20204/0x4fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9b2d000/0x0/0x4ffc00000, data 0x4e20204/0x4fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3610903 data_alloc: 218103808 data_used: 27291648
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9b2d000/0x0/0x4ffc00000, data 0x4e20204/0x4fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4daffd4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc42de00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.393812180s of 10.628395081s, submitted: 47
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4db568960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489091 data_alloc: 218103808 data_used: 22724608
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea671000/0x0/0x4ffc00000, data 0x42dc1e1/0x447c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dc1e05a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73a960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd81f860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384810 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384810 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384810 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.278112411s of 21.508995056s, submitted: 69
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc11ad20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddace1e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4ddace780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd40ab40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddb494a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407475 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33824768 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1d54a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 33816576 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407607 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 33808384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429687 data_alloc: 218103808 data_used: 23355392
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429687 data_alloc: 218103808 data_used: 23355392
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.078292847s of 16.170019150s, submitted: 12
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 33193984 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435355 data_alloc: 218103808 data_used: 23355392
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eacd8000/0x0/0x4ffc00000, data 0x3c7919e/0x3e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435355 data_alloc: 218103808 data_used: 23355392
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eacd8000/0x0/0x4ffc00000, data 0x3c7919e/0x3e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eacd8000/0x0/0x4ffc00000, data 0x3c7919e/0x3e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.828106880s of 12.854273796s, submitted: 6
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddbb94a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbc4000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1d4b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 33546240 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 33538048 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 33538048 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 33538048 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddacfc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc11e960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddb483c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc1b3a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.174865723s of 27.243019104s, submitted: 17
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd92c780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 33079296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddacf2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db5e8b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4db04cb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4db47ab40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead80000/0x0/0x4ffc00000, data 0x3bd01ae/0x3d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419330 data_alloc: 218103808 data_used: 20213760
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81f2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd136f00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddaf65a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead80000/0x0/0x4ffc00000, data 0x3bd01ae/0x3d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4db5e9a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435338 data_alloc: 218103808 data_used: 21962752
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead5c000/0x0/0x4ffc00000, data 0x3bf41ae/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435338 data_alloc: 218103808 data_used: 21962752
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead5c000/0x0/0x4ffc00000, data 0x3bf41ae/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304881664 unmapped: 33128448 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304881664 unmapped: 33128448 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.143718719s of 18.263084412s, submitted: 24
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304889856 unmapped: 33120256 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305659904 unmapped: 32350208 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaacc000/0x0/0x4ffc00000, data 0x3e7c1ae/0x401a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461194 data_alloc: 218103808 data_used: 22016000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 29163520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305594368 unmapped: 32415744 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 33071104 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea7ee000/0x0/0x4ffc00000, data 0x41621ae/0x4300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 33071104 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305201152 unmapped: 32808960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497974 data_alloc: 218103808 data_used: 23076864
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea770000/0x0/0x4ffc00000, data 0x41e01ae/0x437e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea770000/0x0/0x4ffc00000, data 0x41e01ae/0x437e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500086 data_alloc: 218103808 data_used: 23224320
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.937407017s of 11.942021370s, submitted: 61
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea74f000/0x0/0x4ffc00000, data 0x42011ae/0x439f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea74f000/0x0/0x4ffc00000, data 0x42011ae/0x439f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500170 data_alloc: 218103808 data_used: 23232512
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306479104 unmapped: 31531008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb8d000 session 0x55c4dbe98d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db001a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd81eb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc1fb680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306495488 unmapped: 31514624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd92d860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0400 session 0x55c4df73a780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbc5e00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddb48000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddacf860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea748000/0x0/0x4ffc00000, data 0x42071be/0x43a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557977 data_alloc: 218103808 data_used: 23232512
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea129000/0x0/0x4ffc00000, data 0x48261be/0x49c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991683960s of 11.550959587s, submitted: 29
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4db0621e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc29400 session 0x55c4ddace960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea126000/0x0/0x4ffc00000, data 0x48291be/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd97fc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559363 data_alloc: 218103808 data_used: 23232512
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc11ab40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea125000/0x0/0x4ffc00000, data 0x48291ce/0x49c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea125000/0x0/0x4ffc00000, data 0x48291ce/0x49c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3604743 data_alloc: 218103808 data_used: 29429760
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea125000/0x0/0x4ffc00000, data 0x48291ce/0x49c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22955 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.017693520s of 12.032156944s, submitted: 4
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3605423 data_alloc: 218103808 data_used: 29429760
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e98e3000/0x0/0x4ffc00000, data 0x506b1ce/0x520b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 31522816 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 31522816 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e93d0000/0x0/0x4ffc00000, data 0x557e1ce/0x571e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3712157 data_alloc: 234881024 data_used: 31072256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.915276527s of 10.309535980s, submitted: 113
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4df73ad20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc11eb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e93d0000/0x0/0x4ffc00000, data 0x557e1ce/0x571e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e93d0000/0x0/0x4ffc00000, data 0x557e1ce/0x571e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dc42c960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513130 data_alloc: 218103808 data_used: 23293952
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea73a000/0x0/0x4ffc00000, data 0x42161ae/0x43b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbc43c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dc1d4780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311173120 unmapped: 31039488 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbb81e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db581e00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddb49860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1d4b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db5e9680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.569824219s of 23.457214355s, submitted: 54
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 34455552 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddacfc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4df73be00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4db5685a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dfbaad20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81e000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456531 data_alloc: 218103808 data_used: 20213760
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd97f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456823 data_alloc: 218103808 data_used: 20217856
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3487223 data_alloc: 218103808 data_used: 24543232
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.570680618s of 17.714221954s, submitted: 24
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497427 data_alloc: 218103808 data_used: 24576000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310853632 unmapped: 31358976 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310796288 unmapped: 31416320 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea571000/0x0/0x4ffc00000, data 0x43d61ae/0x4574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea571000/0x0/0x4ffc00000, data 0x43d61ae/0x4574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3547101 data_alloc: 218103808 data_used: 25014272
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea571000/0x0/0x4ffc00000, data 0x43d61ae/0x4574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea577000/0x0/0x4ffc00000, data 0x43d91ae/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3542061 data_alloc: 218103808 data_used: 25014272
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3542061 data_alloc: 218103808 data_used: 25014272
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea577000/0x0/0x4ffc00000, data 0x43d91ae/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4ddbb8960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba3c00 session 0x55c4db04c780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db5805a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b2000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.402957916s of 16.838378906s, submitted: 77
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea577000/0x0/0x4ffc00000, data 0x43d91ae/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [1,0,1,7])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd4154a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc11f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37ac00 session 0x55c4dbe994a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73a5a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db04c000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3631838 data_alloc: 218103808 data_used: 25014272
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddacf2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312352768 unmapped: 37740544 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b2000/0x0/0x4ffc00000, data 0x4f9c220/0x513c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312352768 unmapped: 37740544 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312369152 unmapped: 37724160 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314908672 unmapped: 35184640 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3722111 data_alloc: 234881024 data_used: 37122048
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314908672 unmapped: 35184640 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b2000/0x0/0x4ffc00000, data 0x4f9c220/0x513c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3722751 data_alloc: 234881024 data_used: 37183488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.363443375s of 14.558417320s, submitted: 40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b1000/0x0/0x4ffc00000, data 0x4f9d220/0x513d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b0000/0x0/0x4ffc00000, data 0x4f9d220/0x513d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,1])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313647104 unmapped: 36446208 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3749519 data_alloc: 234881024 data_used: 37748736
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 36413440 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e96de000/0x0/0x4ffc00000, data 0x5270220/0x5410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 36397056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 36397056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e96de000/0x0/0x4ffc00000, data 0x5270220/0x5410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 36388864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 36356096 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3755591 data_alloc: 234881024 data_used: 38092800
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 36356096 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd97ef00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4dfbaaf00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.041719437s of 10.220036507s, submitted: 58
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 36356096 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4dc1fa780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea175000/0x0/0x4ffc00000, data 0x43da1ae/0x4578000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea175000/0x0/0x4ffc00000, data 0x43da1ae/0x4578000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3551337 data_alloc: 218103808 data_used: 25075712
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddace780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dd97fa40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd17f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b2b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd137680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc11e780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4daffde00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.166007996s of 25.358474731s, submitted: 57
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4df73b680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dd40af00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbb8d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd40ba40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4db5e8000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470469 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4df73bc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4dd192780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddacf680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509954 data_alloc: 218103808 data_used: 25616384
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddbb8780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dfbaad20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.3 total, 600.0 interval#012Cumulative writes: 35K writes, 140K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.75 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2305 writes, 8726 keys, 2305 commit groups, 1.0 writes per commit group, ingest: 10.50 MB, 0.02 MB/s#012Interval WAL: 2305 writes, 962 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509822 data_alloc: 218103808 data_used: 25616384
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.446069717s of 14.544829369s, submitted: 14
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddaceb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4df73be00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509822 data_alloc: 218103808 data_used: 25616384
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509954 data_alloc: 218103808 data_used: 25616384
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dc1fb4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.145725250s of 12.158161163s, submitted: 3
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db5e9680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304979968 unmapped: 45113344 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b23c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dc11a5a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dd1521e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81fa40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 48660480 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddbb8780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.439521790s of 38.507686615s, submitted: 19
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd192780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbb8d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dd40af00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4daffde00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc11e780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea6a3000/0x0/0x4ffc00000, data 0x3e9d1ae/0x403b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487397 data_alloc: 218103808 data_used: 20209664
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea6a3000/0x0/0x4ffc00000, data 0x3e9d1ae/0x403b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd17f680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 48414720 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 48414720 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494489 data_alloc: 218103808 data_used: 20856832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 49405952 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3523289 data_alloc: 218103808 data_used: 24903680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.172748566s of 18.245769501s, submitted: 20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3588867 data_alloc: 218103808 data_used: 24956928
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 43270144 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 43270144 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9ce4000/0x0/0x4ffc00000, data 0x48561ae/0x49f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc10000 session 0x55c4db5814a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9c47000/0x0/0x4ffc00000, data 0x48f01ae/0x4a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3628643 data_alloc: 218103808 data_used: 27017216
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x49111ae/0x4aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 42975232 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3628379 data_alloc: 218103808 data_used: 27037696
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 42975232 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73ba40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 42975232 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.606063843s of 11.941136360s, submitted: 149
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4df1d7860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddaf72c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd40ab40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 37822464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4e0327800 session 0x55c4dc11b4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1b30e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e9c19000/0x0/0x4ffc00000, data 0x4923d8d/0x4ac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [1,0,0,1])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321421312 unmapped: 37781504 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd193680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321445888 unmapped: 37756928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddbaba40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4e8e19800 session 0x55c4db430d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4dcf90800 session 0x55c4db47af00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788274 data_alloc: 234881024 data_used: 37064704
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e8e34000/0x0/0x4ffc00000, data 0x57054f7/0x58a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788674 data_alloc: 234881024 data_used: 37064704
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314769408 unmapped: 44433408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd92d2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddace000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dd228800 session 0x55c4dbe701e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314769408 unmapped: 44433408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4e8e19800 session 0x55c4df73b0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4db5e8780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4ddb9e000 session 0x55c4df1d7680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dbe701e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db47af00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b30e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3774000 data_alloc: 234881024 data_used: 37068800
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.858778954s of 15.576435089s, submitted: 154
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddaf72c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314810368 unmapped: 44392448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314810368 unmapped: 44392448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314834944 unmapped: 44367872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3798493 data_alloc: 234881024 data_used: 40214528
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3808253 data_alloc: 234881024 data_used: 41611264
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.218788147s of 12.240081787s, submitted: 6
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 315981824 unmapped: 43220992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823569 data_alloc: 234881024 data_used: 42450944
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 315981824 unmapped: 43220992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4db47a1e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3837521 data_alloc: 234881024 data_used: 44781568
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316702720 unmapped: 42500096 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4df1d72c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316710912 unmapped: 42491904 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.459056854s of 27.480937958s, submitted: 5
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4dd228800 session 0x55c4dbe710e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbbf4a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4ddb9e000 session 0x55c4ddaf6f00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 23683072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4dcf91c00 session 0x55c4daffcb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd137860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 324026368 unmapped: 35176448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 293 handle_osd_map epochs [294,295], i have 293, src has [1,295]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 35160064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091715 data_alloc: 251658240 data_used: 51884032
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e71e1000/0x0/0x4ffc00000, data 0x7352251/0x74fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327540736 unmapped: 31662080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 296 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4dd2d6b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327606272 unmapped: 31596544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327606272 unmapped: 31596544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327639040 unmapped: 31563776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 296 heartbeat osd_stat(store_statfs(0x4e8bb3000/0x0/0x4ffc00000, data 0x597fe3e/0x5b2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 296 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dfbaa780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 296 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dd2e25a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327639040 unmapped: 31563776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875547 data_alloc: 251658240 data_used: 51609600
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 297 ms_handle_reset con 0x55c4ddb9e000 session 0x55c4dd40a780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325410816 unmapped: 33792000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325410816 unmapped: 33792000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e8e23000/0x0/0x4ffc00000, data 0x570f8c9/0x58ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325410816 unmapped: 33792000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.498741150s of 11.202224731s, submitted: 88
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e8e23000/0x0/0x4ffc00000, data 0x570f8c9/0x58ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325484544 unmapped: 33718272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db430000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 322666496 unmapped: 36536320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3699447 data_alloc: 234881024 data_used: 37097472
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dd97fa40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4df107400 session 0x55c4ddb49860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 322666496 unmapped: 36536320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4ddbbef00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eac33000/0x0/0x4ffc00000, data 0x3901428/0x3aab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485595 data_alloc: 218103808 data_used: 20008960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eac57000/0x0/0x4ffc00000, data 0x38dd428/0x3a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 298 handle_osd_map epochs [299,299], i have 299, src has [1,299]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489769 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489769 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489769 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddbbeb40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4db568b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db568000
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4ddbc4d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.086532593s of 26.284181595s, submitted: 72
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddbc5860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4df107400 session 0x55c4dc1e0780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4df107400 session 0x55c4dc1e1a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1d4960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dd17f0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533337 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd17e5a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddb49a40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddb490e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc11ba40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ea652000/0x0/0x4ffc00000, data 0x3edeebe/0x408c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3579470 data_alloc: 218103808 data_used: 26312704
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ea652000/0x0/0x4ffc00000, data 0x3edeebe/0x408c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dd40a960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4db580960
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4df107400 session 0x55c4ddbbef00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4e6920c00 session 0x55c4df73b860
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4db02ec00 session 0x55c4ddacf0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11000 session 0x55c4df1d61e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4ddb8b000 session 0x55c4dd97f2c0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 73.629302979s of 73.781417847s, submitted: 21
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 300 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd81ed20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eac51000/0x0/0x4ffc00000, data 0x38e093b/0x3a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494993 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eac52000/0x0/0x4ffc00000, data 0x38e0918/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494993 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e237b/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.939418793s of 11.038199425s, submitted: 45
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498043 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308248576 unmapped: 50954240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.330192566s of 27.339185715s, submitted: 2
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308297728 unmapped: 50905088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308297728 unmapped: 50905088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308297728 unmapped: 50905088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 50872320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.935012817s of 85.945709229s, submitted: 2
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddbc4d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4dc1b25a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496987 data_alloc: 234881024 data_used: 20811776
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496987 data_alloc: 234881024 data_used: 20811776
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.209140778s of 10.256829262s, submitted: 13
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 302 ms_handle_reset con 0x55c4ddb86000 session 0x55c4dd2e2b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 302 heartbeat osd_stat(store_statfs(0x4eac51000/0x0/0x4ffc00000, data 0x38e3358/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 302 heartbeat osd_stat(store_statfs(0x4eb8bd000/0x0/0x4ffc00000, data 0x2c74f06/0x2e1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383071 data_alloc: 218103808 data_used: 9347072
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 303 heartbeat osd_stat(store_statfs(0x4eb8bf000/0x0/0x4ffc00000, data 0x2c74f06/0x2e1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 303 ms_handle_reset con 0x55c4e36b6000 session 0x55c4ddbb90e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec0bc000/0x0/0x4ffc00000, data 0x2476aa4/0x2620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314330 data_alloc: 218103808 data_used: 2863104
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec0ba000/0x0/0x4ffc00000, data 0x2478523/0x2623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.375181198s of 12.629664421s, submitted: 88
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317304 data_alloc: 218103808 data_used: 2863104
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317304 data_alloc: 218103808 data_used: 2863104
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd192b40
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 63250432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 63250432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 63250432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 111.956047058s of 112.013458252s, submitted: 25
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 63234048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 307 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dd92cd20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 63225856 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x347bb7c/0x362d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb0ac000/0x0/0x4ffc00000, data 0x347d709/0x3631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 63217664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 ms_handle_reset con 0x55c4ddb86000 session 0x55c4dd92d0e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 63201280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 63184896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 63184896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 63184896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 63160320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 63160320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 63160320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 63152128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 63152128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 63152128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.948614120s of 32.148563385s, submitted: 31
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 309 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4dd40a780
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534101 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ea437000/0x0/0x4ffc00000, data 0x40f0e11/0x42a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297123840 unmapped: 62078976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297123840 unmapped: 62078976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297123840 unmapped: 62078976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 62029824 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 62767104 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 62758912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 62758912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 62758912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 62742528 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 62742528 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 62742528 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.3 total, 600.0 interval
Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 36K writes, 13K syncs, 2.73 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1359 writes, 4745 keys, 1359 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
Interval WAL: 1359 writes, 574 syncs, 2.37 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 62701568 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 62701568 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 62701568 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 62693376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 62693376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 62660608 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 62660608 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 62652416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 62652416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 62652416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 62627840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 62627840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 62619648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 62619648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 62619648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 142.185562134s of 142.275115967s, submitted: 42
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 62554112 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 62513152 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 62513152 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 62513152 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 62431232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 62431232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 62431232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296796160 unmapped: 62406656 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 62365696 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 62365696 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 62365696 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 62357504 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 62332928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 62332928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 62332928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296878080 unmapped: 62324736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296878080 unmapped: 62324736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296878080 unmapped: 62324736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296910848 unmapped: 62291968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296910848 unmapped: 62291968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296910848 unmapped: 62291968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 62169088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 62169088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 62169088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 62160896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 62160896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 62152704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: mgrc ms_handle_reset ms_handle_reset con 0x55c4ddb9fc00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: mgrc handle_mgr_configure stats_period=5
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 62062592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 62062592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 62029824 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 62029824 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 283.956665039s of 284.302581787s, submitted: 90
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 61997056 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 311 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd2e21e0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 61964288 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 61964288 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431792 data_alloc: 218103808 data_used: 2895872
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 61964288 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 311 heartbeat osd_stat(store_statfs(0x4eb435000/0x0/0x4ffc00000, data 0x30f4412/0x32a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 312 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4df1d74a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297246720 unmapped: 61956096 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 61947904 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 61939712 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 61939712 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 313 ms_handle_reset con 0x55c4dbc10000 session 0x55c4dc11af00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3353795 data_alloc: 218103808 data_used: 2904064
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ec09e000/0x0/0x4ffc00000, data 0x2487a52/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 61906944 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 313 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4db0625a0
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 61890560 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.847590446s of 10.088174820s, submitted: 90
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ec09b000/0x0/0x4ffc00000, data 0x24895fb/0x2642000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 314 handle_osd_map epochs [315,315], i have 315, src has [1,315]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 61865984 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec09b000/0x0/0x4ffc00000, data 0x24895fb/0x2642000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 61849600 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 61849600 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 61841408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 61841408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 61841408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 61784064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 61784064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 61784064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 61775872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 61775872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 61775872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297443328 unmapped: 61759488 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297443328 unmapped: 61759488 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297443328 unmapped: 61759488 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.906375885s of 56.922370911s, submitted: 15
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 316 ms_handle_reset con 0x55c4e8e19000 session 0x55c4df73bc20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363450 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 316 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db000d20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec096000/0x0/0x4ffc00000, data 0x248cc1f/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297476096 unmapped: 61726720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363450 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297476096 unmapped: 61726720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec096000/0x0/0x4ffc00000, data 0x248cc1f/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297476096 unmapped: 61726720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec096000/0x0/0x4ffc00000, data 0x248cc1f/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 61669376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 61669376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 61669376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 61612032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 61612032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61595648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61595648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 61579264 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 61579264 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 61579264 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 ms_handle_reset con 0x55c4df107400 session 0x55c4dd81fe00
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4df73ad20
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 61530112 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 61530112 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 61497344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 61497344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'config diff' '{prefix=config diff}'
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'config show' '{prefix=config show}'
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 61939712 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 06:01:03 np0005481065 ceph-osd[90364]: do_command 'log dump' '{prefix=log dump}'
Oct 11 06:01:03 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 06:01:03 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166521335' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 06:01:04 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:01:04 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22959 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:04 np0005481065 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 06:01:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 06:01:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496692667' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 06:01:04 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22963 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 06:01:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 06:01:04 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153358173' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 06:01:04 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22967 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 06:01:04 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 06:01:05 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 11 06:01:05 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/178570993' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22971 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 06:01:05 np0005481065 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 06:01:06 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:01:06 np0005481065 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22979 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 06:01:06 np0005481065 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T10:01:06.010+0000 7f7f1b2f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 06:01:06 np0005481065 ceph-mgr[74605]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142142937' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982567174' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530945270' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675396220' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 11 06:01:06 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141493182' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832554885' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038655888' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992329415' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 06:01:07 np0005481065 nova_compute[260935]: 2025-10-11 10:01:07.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 06:01:07 np0005481065 nova_compute[260935]: 2025-10-11 10:01:07.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 06:01:07 np0005481065 nova_compute[260935]: 2025-10-11 10:01:07.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 06:01:07 np0005481065 nova_compute[260935]: 2025-10-11 10:01:07.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 06:01:07 np0005481065 nova_compute[260935]: 2025-10-11 10:01:07.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 06:01:07 np0005481065 nova_compute[260935]: 2025-10-11 10:01:07.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 11 06:01:07 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907153542' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 06:01:08 np0005481065 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 06:01:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 11 06:01:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853826303' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 06:01:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 11 06:01:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346839265' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330858496 unmapped: 51183616 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 48496640 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 46784512 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 46784512 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e72bd000/0x0/0x4ffc00000, data 0x6a682be/0x6c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 46784512 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4233672 data_alloc: 251658240 data_used: 53624832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 46776320 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e72bd000/0x0/0x4ffc00000, data 0x6a682be/0x6c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335298560 unmapped: 46743552 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335314944 unmapped: 46727168 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338542592 unmapped: 43499520 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4956000 session 0x557ab5bfbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab72a4f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338001920 unmapped: 44040192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a83000/0x0/0x4ffc00000, data 0x70fe2be/0x7296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4298296 data_alloc: 251658240 data_used: 54517760
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.904383659s of 10.933584213s, submitted: 71
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 43835392 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 39952384 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e547f000/0x0/0x4ffc00000, data 0x76ff2be/0x7897000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e541d000/0x0/0x4ffc00000, data 0x77612be/0x78f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4362664 data_alloc: 251658240 data_used: 54804480
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 40607744 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 39559168 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e425e000/0x0/0x4ffc00000, data 0x77882be/0x7920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 38510592 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e425e000/0x0/0x4ffc00000, data 0x77882be/0x7920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 38502400 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab791f680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab791eb40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4359640 data_alloc: 251658240 data_used: 54870016
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 38502400 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.994635582s of 11.460409164s, submitted: 86
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 38502400 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e425e000/0x0/0x4ffc00000, data 0x77882be/0x7920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340787200 unmapped: 41254912 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6a93e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5339000/0x0/0x4ffc00000, data 0x66ad29b/0x6844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4149256 data_alloc: 234881024 data_used: 44314624
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5339000/0x0/0x4ffc00000, data 0x66ad29b/0x6844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6696800 session 0x557ab7295a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab5bfb0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 46006272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab55203c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865366 data_alloc: 218103808 data_used: 27451392
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865366 data_alloc: 218103808 data_used: 27451392
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865366 data_alloc: 218103808 data_used: 27451392
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.566871643s of 21.283866882s, submitted: 80
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f89800 session 0x557ab552c960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2fc00 session 0x557ab7951680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865142 data_alloc: 218103808 data_used: 27451392
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e786b000/0x0/0x4ffc00000, data 0x417c29b/0x4313000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab46d30e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324780032 unmapped: 57262080 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324780032 unmapped: 57262080 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324812800 unmapped: 57229312 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.966651917s of 41.397136688s, submitted: 35
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324812800 unmapped: 57229312 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6696800 session 0x557ab682a5a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693691 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab72a52c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab3ee6b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab55214a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab72a4b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693691 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab5521860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab791fa40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab7950f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324812800 unmapped: 57229312 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2fc00 session 0x557ab52921e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.392966270s of 10.930088997s, submitted: 11
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696569 data_alloc: 218103808 data_used: 18169856
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab57cbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703615 data_alloc: 218103808 data_used: 18698240
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325132288 unmapped: 56909824 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703615 data_alloc: 218103808 data_used: 18698240
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.047631264s of 12.067139626s, submitted: 3
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327548928 unmapped: 54493184 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77ba000/0x0/0x4ffc00000, data 0x422c2aa/0x43c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327548928 unmapped: 54493184 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749267 data_alloc: 218103808 data_used: 19013632
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7389000/0x0/0x4ffc00000, data 0x465d2aa/0x47f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748063 data_alloc: 218103808 data_used: 19013632
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748063 data_alloc: 218103808 data_used: 19013632
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.078260422s of 13.326107025s, submitted: 38
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab79503c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab6df9a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab7951c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e400 session 0x557ab46d0d20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab6df6960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab46d3680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab727a1e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab5521c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab5520b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69ee000/0x0/0x4ffc00000, data 0x4ff82aa/0x5190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 58957824 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 58957824 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 58957824 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828738 data_alloc: 218103808 data_used: 19013632
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab5bfa000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab5521e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab727ab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696e000/0x0/0x4ffc00000, data 0x50782aa/0x5210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab686c000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327303168 unmapped: 58941440 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3831833 data_alloc: 218103808 data_used: 19013632
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327180288 unmapped: 59064320 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3894073 data_alloc: 234881024 data_used: 26898432
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.143640518s of 19.456829071s, submitted: 25
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3894333 data_alloc: 234881024 data_used: 26902528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 332275712 unmapped: 53968896 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334086144 unmapped: 52158464 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 49586176 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5987000/0x0/0x4ffc00000, data 0x605d2dd/0x61f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334856192 unmapped: 51388416 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e594d000/0x0/0x4ffc00000, data 0x60912dd/0x622b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 51691520 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015763 data_alloc: 234881024 data_used: 27406336
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4025403 data_alloc: 234881024 data_used: 27701248
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e593d000/0x0/0x4ffc00000, data 0x609f2dd/0x6239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.048258781s of 14.340513229s, submitted: 126
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4025403 data_alloc: 234881024 data_used: 27701248
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 51740672 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab6815e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab5bfbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5944000/0x0/0x4ffc00000, data 0x60a02dd/0x623a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab5292b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7383000/0x0/0x4ffc00000, data 0x46622aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749794 data_alloc: 218103808 data_used: 16392192
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab59c7860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab54272c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 56713216 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7384000/0x0/0x4ffc00000, data 0x46622aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab4a7cf00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 43K writes, 162K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
Cumulative WAL: 43K writes, 15K syncs, 2.71 writes per sync, written: 0.16 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4545 writes, 19K keys, 4545 commit groups, 1.0 writes per commit group, ingest: 21.41 MB, 0.04 MB/s
Interval WAL: 4545 writes, 1749 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 55599104 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 55599104 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 55599104 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 55574528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 55574528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 55574528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330694656 unmapped: 55549952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330694656 unmapped: 55549952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330702848 unmapped: 55541760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330702848 unmapped: 55541760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.869426727s of 58.322135925s, submitted: 84
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab7951e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734466 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e743d000/0x0/0x4ffc00000, data 0x45aa29b/0x4741000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734466 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e743d000/0x0/0x4ffc00000, data 0x45aa29b/0x4741000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab7295a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331202560 unmapped: 55042048 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331202560 unmapped: 55042048 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331202560 unmapped: 55042048 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772168 data_alloc: 218103808 data_used: 20008960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772168 data_alloc: 218103808 data_used: 20008960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331227136 unmapped: 55017472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331227136 unmapped: 55017472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.643857956s of 20.801364899s, submitted: 20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331931648 unmapped: 54312960 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334323712 unmapped: 51920896 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334102528 unmapped: 52142080 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334102528 unmapped: 52142080 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.530765533s of 21.778186798s, submitted: 53
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334233600 unmapped: 52011008 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824862 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824862 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab7951680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab6fcd0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab58cfe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab79505a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df8d20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.958397865s of 10.519848824s, submitted: 129
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847780 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e692a000/0x0/0x4ffc00000, data 0x4cab320/0x4e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847780 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 51650560 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 51650560 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857992 data_alloc: 218103808 data_used: 22470656
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857992 data_alloc: 218103808 data_used: 22470656
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.543865204s of 17.608095169s, submitted: 4
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333447168 unmapped: 52797440 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922318 data_alloc: 218103808 data_used: 23736320
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922318 data_alloc: 218103808 data_used: 23736320
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922318 data_alloc: 218103808 data_used: 23736320
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.640145302s of 15.920056343s, submitted: 94
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab46d14a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333570048 unmapped: 52674560 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfa3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6a86000/0x0/0x4ffc00000, data 0x4b4f2be/0x4ce7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832760 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6a86000/0x0/0x4ffc00000, data 0x4b4f2be/0x4ce7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab6df74a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab791fc20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832628 data_alloc: 218103808 data_used: 21155840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab792ef00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab6df61e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfbc20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab685a3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab5427a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.755138397s of 36.050605774s, submitted: 84
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab5427c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab7294d20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab686d680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab791e960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab4aab0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783807 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783807 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333881344 unmapped: 52363264 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab6821a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333881344 unmapped: 52363264 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab696f0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab4aaa3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab72a5680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334036992 unmapped: 52207616 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334036992 unmapped: 52207616 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810788 data_alloc: 218103808 data_used: 18796544
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844868 data_alloc: 218103808 data_used: 23621632
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.186073303s of 20.419456482s, submitted: 32
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 337428480 unmapped: 48816128 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3919034 data_alloc: 218103808 data_used: 23654400
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 48300032 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7288000/0x0/0x4ffc00000, data 0x538e2ab/0x5526000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7288000/0x0/0x4ffc00000, data 0x538e2ab/0x5526000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942374 data_alloc: 218103808 data_used: 24608768
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7267000/0x0/0x4ffc00000, data 0x53af2ab/0x5547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942170 data_alloc: 218103808 data_used: 24608768
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7267000/0x0/0x4ffc00000, data 0x53af2ab/0x5547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942490 data_alloc: 218103808 data_used: 24616960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.147550583s of 16.505861282s, submitted: 81
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942722 data_alloc: 218103808 data_used: 24616960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338395136 unmapped: 47849472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338395136 unmapped: 47849472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338395136 unmapped: 47849472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942722 data_alloc: 218103808 data_used: 24616960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942722 data_alloc: 218103808 data_used: 24616960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.610344887s of 15.617400169s, submitted: 2
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6a93860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 47833088 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab7295680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab72a4960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab6815c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab4a7dc20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab72a4b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036021 data_alloc: 218103808 data_used: 24616960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 54976512 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036021 data_alloc: 218103808 data_used: 24616960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 54976512 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 52740096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 52740096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 52740096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4117141 data_alloc: 234881024 data_used: 35983360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4117141 data_alloc: 234881024 data_used: 35983360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.115795135s of 19.247415543s, submitted: 33
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,58])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e59b4000/0x0/0x4ffc00000, data 0x6c6130d/0x6dfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 44843008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4224165 data_alloc: 234881024 data_used: 36777984
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e58c6000/0x0/0x4ffc00000, data 0x6d4f30d/0x6ee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4224657 data_alloc: 234881024 data_used: 36839424
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.643822670s of 11.472805023s, submitted: 144
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e58c4000/0x0/0x4ffc00000, data 0x6d5130d/0x6eea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab696fc20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4224901 data_alloc: 234881024 data_used: 36839424
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab4a7c960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3948147 data_alloc: 218103808 data_used: 24141824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.474429131s of 12.555706024s, submitted: 35
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6bfa3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab6814d20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3947615 data_alloc: 218103808 data_used: 24141824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab792fc20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.411876678s of 20.594398499s, submitted: 54
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab46ada40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805837 data_alloc: 218103808 data_used: 15548416
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869517 data_alloc: 218103808 data_used: 23527424
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869517 data_alloc: 218103808 data_used: 23527424
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.176839828s of 18.275016785s, submitted: 16
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342966272 unmapped: 50634752 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e711e000/0x0/0x4ffc00000, data 0x54f929b/0x5690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958529 data_alloc: 218103808 data_used: 24690688
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3969485 data_alloc: 218103808 data_used: 24776704
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3969485 data_alloc: 218103808 data_used: 24776704
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.833803177s of 16.100864410s, submitted: 84
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 40574976 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4074802 data_alloc: 218103808 data_used: 24776704
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab792f860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab5292960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab6bfa000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab46d2b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab57cbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e664a000/0x0/0x4ffc00000, data 0x5fcd29b/0x6164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab6fcc780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab6fcde00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab89b6400 session 0x557ab792ed20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab792d0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e664a000/0x0/0x4ffc00000, data 0x5fcd29b/0x6164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 49586176 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4052393 data_alloc: 218103808 data_used: 24776704
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 49586176 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345096192 unmapped: 48504832 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4131217 data_alloc: 234881024 data_used: 34672640
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4131217 data_alloc: 234881024 data_used: 34672640
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.667927742s of 16.314342499s, submitted: 35
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 43425792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5eac000/0x0/0x4ffc00000, data 0x67642ab/0x68fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 41623552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 41615360 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 41615360 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4217607 data_alloc: 234881024 data_used: 36577280
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 41517056 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 41517056 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5e16000/0x0/0x4ffc00000, data 0x67f22ab/0x698a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab7951e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab552c960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 41517056 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab72a5680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5e24000/0x0/0x4ffc00000, data 0x67f22ab/0x698a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3973628 data_alloc: 218103808 data_used: 23699456
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab7295a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.908845901s of 11.402800560s, submitted: 168
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab727be00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757716 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757716 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757716 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df7e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab5521860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab46d2f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.600893021s of 17.668704987s, submitted: 22
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab792d4a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3838253 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab46d3a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c02000/0x0/0x4ffc00000, data 0x4a142fd/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c02000/0x0/0x4ffc00000, data 0x4a142fd/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c02000/0x0/0x4ffc00000, data 0x4a142fd/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfad20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab696e780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab6a92f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab58ce780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50692096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842804 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50692096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7bdc000/0x0/0x4ffc00000, data 0x4a38330/0x4bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906804 data_alloc: 218103808 data_used: 23220224
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7bdc000/0x0/0x4ffc00000, data 0x4a38330/0x4bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7bdc000/0x0/0x4ffc00000, data 0x4a38330/0x4bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906804 data_alloc: 218103808 data_used: 23220224
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.982942581s of 16.169082642s, submitted: 47
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344268800 unmapped: 49332224 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 45228032 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5b16000/0x0/0x4ffc00000, data 0x5958330/0x5af2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046962 data_alloc: 234881024 data_used: 25493504
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a88000/0x0/0x4ffc00000, data 0x59e6330/0x5b80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4044926 data_alloc: 234881024 data_used: 25497600
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a6f000/0x0/0x4ffc00000, data 0x5a05330/0x5b9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.468229294s of 12.845428467s, submitted: 168
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab72a41e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab5293a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab72a5e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.126367569s of 27.255813599s, submitted: 43
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 49758208 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2c400 session 0x557ab58cfe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab686c000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803540 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f24000/0x0/0x4ffc00000, data 0x45522fd/0x46ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab72a4b40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 49741824 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 49733632 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836118 data_alloc: 218103808 data_used: 18526208
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836118 data_alloc: 218103808 data_used: 18526208
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.160448074s of 18.311054230s, submitted: 31
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 47980544 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3866682 data_alloc: 218103808 data_used: 18530304
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c98000/0x0/0x4ffc00000, data 0x47dd320/0x4976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,1,0,0,0,0,6])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347807744 unmapped: 45793280 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6835000/0x0/0x4ffc00000, data 0x4c40320/0x4dd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3888466 data_alloc: 218103808 data_used: 18575360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 47071232 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 47071232 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 47071232 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6824000/0x0/0x4ffc00000, data 0x4c51320/0x4dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897630 data_alloc: 218103808 data_used: 18673664
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6824000/0x0/0x4ffc00000, data 0x4c51320/0x4dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897630 data_alloc: 218103808 data_used: 18673664
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.543237686s of 19.606155396s, submitted: 71
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab46d0f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 48537600 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 48537600 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966194 data_alloc: 218103808 data_used: 18677760
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fcf000/0x0/0x4ffc00000, data 0x54a6320/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 48529408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 48529408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 48529408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528cc00 session 0x557ab7950f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fcf000/0x0/0x4ffc00000, data 0x54a6320/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab551e960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fcf000/0x0/0x4ffc00000, data 0x54a6320/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966194 data_alloc: 218103808 data_used: 18677760
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab6bfa5a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528cc00 session 0x557ab6814d20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4022787 data_alloc: 234881024 data_used: 26189824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fab000/0x0/0x4ffc00000, data 0x54ca320/0x5663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fab000/0x0/0x4ffc00000, data 0x54ca320/0x5663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4022787 data_alloc: 234881024 data_used: 26189824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.141372681s of 17.398494720s, submitted: 23
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 48627712 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 45678592 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a77000/0x0/0x4ffc00000, data 0x59f0320/0x5b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [1])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 45670400 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4093051 data_alloc: 234881024 data_used: 27561984
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a65000/0x0/0x4ffc00000, data 0x5a08320/0x5ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a65000/0x0/0x4ffc00000, data 0x5a08320/0x5ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab46d32c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab72a54a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3915399 data_alloc: 218103808 data_used: 18788352
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab791f2c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.202589035s of 12.640776634s, submitted: 111
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab6bfa960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab46d2000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6823000/0x0/0x4ffc00000, data 0x4c52320/0x4deb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,1])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797219 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab3ee7c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796459 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796459 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796459 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab686c000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528cc00 session 0x557ab72a5e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab5293a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab72a41e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.335086823s of 19.687835693s, submitted: 52
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab58ce780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4e000/0x0/0x4ffc00000, data 0x462929b/0x47c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839331 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4e000/0x0/0x4ffc00000, data 0x462929b/0x47c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab6bfad20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab46d3a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab792d4a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab46d2f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 47489024 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 47489024 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844451 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880611 data_alloc: 218103808 data_used: 19320832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880611 data_alloc: 218103808 data_used: 19320832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.534839630s of 17.652015686s, submitted: 20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 43999232 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 43925504 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 44318720 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 44318720 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967763 data_alloc: 218103808 data_used: 20377600
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967763 data_alloc: 218103808 data_used: 20377600
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967763 data_alloc: 218103808 data_used: 20377600
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 44220416 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 44220416 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.585489273s of 16.809347153s, submitted: 83
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab685ba40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab4aaab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab6fcc3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab770ef00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab552c000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6086000/0x0/0x4ffc00000, data 0x53ef2ce/0x5588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4001852 data_alloc: 218103808 data_used: 20377600
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6086000/0x0/0x4ffc00000, data 0x53ef2ce/0x5588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab46ac960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6086000/0x0/0x4ffc00000, data 0x53ef2ce/0x5588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df9c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab685b860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab551ed20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6085000/0x0/0x4ffc00000, data 0x53ef2de/0x5589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4026574 data_alloc: 218103808 data_used: 23715840
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6085000/0x0/0x4ffc00000, data 0x53ef2de/0x5589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4026734 data_alloc: 218103808 data_used: 23719936
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6085000/0x0/0x4ffc00000, data 0x53ef2de/0x5589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.436676979s of 15.575437546s, submitted: 24
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5939000/0x0/0x4ffc00000, data 0x5b392de/0x5cd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 41713664 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4088814 data_alloc: 218103808 data_used: 23982080
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 41713664 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4102462 data_alloc: 218103808 data_used: 23887872
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5467000/0x0/0x4ffc00000, data 0x5bdd2de/0x5d77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab770f860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab792ed20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 41476096 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab6e74000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3976137 data_alloc: 218103808 data_used: 20381696
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fd8000/0x0/0x4ffc00000, data 0x508d2ce/0x5226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab792e960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.293063164s of 13.741784096s, submitted: 135
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab72a5680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab58cf4a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 45088768 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 45088768 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.724020004s of 20.886013031s, submitted: 43
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab6fcd4a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 52944896 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 52944896 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 52944896 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3888416 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 52936704 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 52936704 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 46K writes, 175K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 17K syncs, 2.71 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3122 writes, 13K keys, 3122 commit groups, 1.0 writes per commit group, ingest: 16.94 MB, 0.03 MB/s
Interval WAL: 3122 writes, 1172 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab79503c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899108 data_alloc: 218103808 data_used: 15417344
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab792d0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab727a1e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab5467800 session 0x557ab792e5a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc ms_handle_reset ms_handle_reset con 0x557ab92f9000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc handle_mgr_configure stats_period=5
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03c400 session 0x557ab682b2c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab85ec800 session 0x557ab551eb40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957188 data_alloc: 218103808 data_used: 23584768
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957188 data_alloc: 218103808 data_used: 23584768
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab6a93680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957188 data_alloc: 218103808 data_used: 23584768
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab57cad20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.894992828s of 27.005804062s, submitted: 10
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6a93c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 56147968 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 56147968 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 56147968 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246144449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.082767487s of 38.139495850s, submitted: 16
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab77a0000 session 0x557ab685ab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab5520960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab46d14a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab72a4d20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab552c3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880678 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6795000/0x0/0x4ffc00000, data 0x48d229b/0x4a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab92fb400 session 0x557ab5462f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6795000/0x0/0x4ffc00000, data 0x48d229b/0x4a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 56107008 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3882639 data_alloc: 218103808 data_used: 14331904
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6794000/0x0/0x4ffc00000, data 0x48d22be/0x4a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 55222272 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6794000/0x0/0x4ffc00000, data 0x48d22be/0x4a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3938479 data_alloc: 218103808 data_used: 21680128
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6794000/0x0/0x4ffc00000, data 0x48d22be/0x4a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3938479 data_alloc: 218103808 data_used: 21680128
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.121250153s of 18.225706100s, submitted: 20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5324000/0x0/0x4ffc00000, data 0x4ba22be/0x4d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab5468c00 session 0x557ab696e3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959195 data_alloc: 218103808 data_used: 21684224
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5324000/0x0/0x4ffc00000, data 0x4ba22be/0x4d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 49840128 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959195 data_alloc: 218103808 data_used: 21684224
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5324000/0x0/0x4ffc00000, data 0x4ba22be/0x4d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 49840128 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab5468c00 session 0x557ab770f860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.425669670s of 11.516405106s, submitted: 17
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 49840128 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab714bc00 session 0x557ab685b860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab7347000 session 0x557ab685ba40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab4c45000 session 0x557ab52923c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 53772288 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab6693c00 session 0x557ab6bfad20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 290 ms_handle_reset con 0x557ab4c45000 session 0x557ab685ad20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 53698560 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab5468c00 session 0x557ab6845860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 53690368 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4149632 data_alloc: 234881024 data_used: 24653824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab714bc00 session 0x557ab79501e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab7347000 session 0x557ab6df7c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab6680800 session 0x557ab68443c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x5f2e617/0x60cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 53690368 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4149632 data_alloc: 234881024 data_used: 24653824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x5f2e617/0x60cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab4c45000 session 0x557ab6df7e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x5f2e617/0x60cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354713600 unmapped: 59793408 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab5468c00 session 0x557ab55214a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab714bc00 session 0x557ab5bfbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347000 session 0x557ab54272c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab65a0c00 session 0x557ab6e74780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab714f000 session 0x557ab5c034a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354713600 unmapped: 59793408 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 59785216 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f8e000/0x0/0x4ffc00000, data 0x5f3007a/0x60cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 59785216 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4135806 data_alloc: 234881024 data_used: 24653824
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 59785216 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab5468c00 session 0x557ab79510e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 59777024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab65a0c00 session 0x557ab6815680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab714bc00 session 0x557ab685af00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.129091263s of 16.006256104s, submitted: 187
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347000 session 0x557ab79510e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 59465728 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f6a000/0x0/0x4ffc00000, data 0x5f5408a/0x60f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 59465728 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 59392000 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4170080 data_alloc: 234881024 data_used: 28872704
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 58294272 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 58294272 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f6a000/0x0/0x4ffc00000, data 0x5f5408a/0x60f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4215680 data_alloc: 234881024 data_used: 35340288
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f6a000/0x0/0x4ffc00000, data 0x5f5408a/0x60f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 58277888 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.251224518s of 12.262004852s, submitted: 2
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4264664 data_alloc: 234881024 data_used: 35364864
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 53583872 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4291066 data_alloc: 234881024 data_used: 37998592
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283402 data_alloc: 234881024 data_used: 37998592
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.373227119s of 12.451822281s, submitted: 9
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283050 data_alloc: 234881024 data_used: 37998592
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4282874 data_alloc: 234881024 data_used: 37998592
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 56696832 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 56696832 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 56688640 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 56688640 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4282874 data_alloc: 234881024 data_used: 37998592
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab6684800 session 0x557ab685a960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347400 session 0x557ab792e1e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 56197120 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347400 session 0x557ab57cbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 56197120 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.957171440s of 14.975679398s, submitted: 4
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab714bc00 session 0x557ab6a93a40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab7347000 session 0x557ab46ad680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab7f88000 session 0x557ab6df7680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 394436608 unmapped: 32677888 heap: 427114496 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab528d800 session 0x557ab792c780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab528d800 session 0x557ab6fccd20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 60907520 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab714bc00 session 0x557ab696e960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370417664 unmapped: 60899328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4639141 data_alloc: 251658240 data_used: 49020928
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab7347000 session 0x557ab6fcc3c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e10d4000/0x0/0x4ffc00000, data 0x8de03e3/0x8f86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab7347400 session 0x557ab685a5a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab7f88000 session 0x557ab6df74a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370483200 unmapped: 60833792 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 296 ms_handle_reset con 0x557ab528d800 session 0x557ab7951e00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 296 ms_handle_reset con 0x557ab528e800 session 0x557ab6e74780
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 296 ms_handle_reset con 0x557ab68a1c00 session 0x557ab54272c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e39d7000/0x0/0x4ffc00000, data 0x64e0f5e/0x6686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4357326 data_alloc: 251658240 data_used: 49016832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 297 ms_handle_reset con 0x557ab714bc00 session 0x557ab682a5a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370835456 unmapped: 60481536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e3f5f000/0x0/0x4ffc00000, data 0x5f5af4e/0x60ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370835456 unmapped: 60481536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 60473344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.327554703s of 11.279193878s, submitted: 148
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 60473344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 ms_handle_reset con 0x557ab7347000 session 0x557ab791e000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e3f80000/0x0/0x4ffc00000, data 0x5f389e9/0x60de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5305000/0x0/0x4ffc00000, data 0x4bb3548/0x4d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 71770112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4032294 data_alloc: 234881024 data_used: 24264704
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 ms_handle_reset con 0x557ab528c400 session 0x557ab7294f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 71770112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5305000/0x0/0x4ffc00000, data 0x4bb3548/0x4d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 ms_handle_reset con 0x557ab7347000 session 0x557ab4a7c960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5d57000/0x0/0x4ffc00000, data 0x4163525/0x4307000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3895224 data_alloc: 218103808 data_used: 13447168
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5d57000/0x0/0x4ffc00000, data 0x4163525/0x4307000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899398 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899398 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899398 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.629646301s of 26.253942490s, submitted: 88
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528d800 session 0x557ab46ad0e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528e800 session 0x557ab46d1c20
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 74883072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab68a1c00 session 0x557ab792c960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3973225 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab68a1c00 session 0x557ab686c1e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528c400 session 0x557ab57cab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 74883072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5510000/0x0/0x4ffc00000, data 0x49a8f88/0x4b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 74883072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5510000/0x0/0x4ffc00000, data 0x49a8f88/0x4b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528d800 session 0x557ab46d30e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356671488 unmapped: 74645504 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356687872 unmapped: 74629120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74006528 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4028302 data_alloc: 218103808 data_used: 20701184
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74006528 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528e800 session 0x557ab5292960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab7347000 session 0x557ab685ab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74006528 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528c400 session 0x557ab682b2c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab6696800 session 0x557ab551f680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 73.412239075s of 73.797233582s, submitted: 87
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 300 ms_handle_reset con 0x557ab6696800 session 0x557ab685ab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e5d51000/0x0/0x4ffc00000, data 0x4166b36/0x430c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911452 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911452 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e5d51000/0x0/0x4ffc00000, data 0x4166b36/0x430c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4e000/0x0/0x4ffc00000, data 0x4168599/0x430f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.917225838s of 11.012996674s, submitted: 35
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3913898 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4f000/0x0/0x4ffc00000, data 0x4168599/0x430f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.548757553s of 24.578418732s, submitted: 6
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 79134720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 79134720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 79134720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.233197212s of 10.238834381s, submitted: 1
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 79085568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 79085568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 79085568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 79052800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 79052800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 79052800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 ms_handle_reset con 0x557ab528e800 session 0x557ab57cab40
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 ms_handle_reset con 0x557ab68a1c00 session 0x557ab686c1e0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922694 data_alloc: 218103808 data_used: 17199104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922694 data_alloc: 218103808 data_used: 17199104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.754707336s of 88.770294189s, submitted: 3
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e5d48000/0x0/0x4ffc00000, data 0x416d16a/0x4315000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 302 ms_handle_reset con 0x557ab7f89c00 session 0x557ab696e960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e6548000/0x0/0x4ffc00000, data 0x396d16a/0x3b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3852284 data_alloc: 218103808 data_used: 10391552
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 303 ms_handle_reset con 0x557ab7f89c00 session 0x557ab6df7680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 303 heartbeat osd_stat(store_statfs(0x4e71b6000/0x0/0x4ffc00000, data 0x2cfed18/0x2ea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752526 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752526 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.719218254s of 13.948991776s, submitted: 68
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755500 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e71b0000/0x0/0x4ffc00000, data 0x2d021fa/0x2ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 ms_handle_reset con 0x557ab528c400 session 0x557ab72a5680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 79831040 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 79831040 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 79831040 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 79806464 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 79798272 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 79798272 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 79798272 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.632034302s of 110.660163879s, submitted: 18
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 79781888 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 307 ms_handle_reset con 0x557ab528e800 session 0x557ab791f860
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 79749120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 79749120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 ms_handle_reset con 0x557ab6696800 session 0x557ab46d32c0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 79724544 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 79724544 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 79699968 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 79699968 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 79699968 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 79675392 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 79667200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.523567200s of 32.588008881s, submitted: 14
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 79634432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 309 ms_handle_reset con 0x557ab68a1c00 session 0x557ab79505a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e71a3000/0x0/0x4ffc00000, data 0x2d09065/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 79634432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772018 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e71a3000/0x0/0x4ffc00000, data 0x2d09065/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 79585280 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 79544320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 79544320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 79544320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 79519744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 79519744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 79511552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 79511552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 79495168 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 47K writes, 180K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 47K writes, 17K syncs, 2.70 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1324 writes, 4495 keys, 1324 commit groups, 1.0 writes per commit group, ingest: 2.98 MB, 0.00 MB/s
Interval WAL: 1324 writes, 573 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 79495168 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 79495168 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 79486976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 79462400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 79462400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 79462400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 79454208 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 79429632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 79429632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 79405056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 79405056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 79380480 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 79380480 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 79372288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 79372288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 79372288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 79364096 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 79306752 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 79306752 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 79298560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 79298560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 79282176 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 141.775878906s of 141.837173462s, submitted: 27
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 78225408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 78184448 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 78184448 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 78159872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 78159872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 78159872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 78127104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 78127104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 78127104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 78069760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 78069760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 78069760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 78061568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 78061568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 78061568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353263616 unmapped: 78053376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353263616 unmapped: 78053376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 77987840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 77987840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 77987840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 77963264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 77963264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 77963264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 77955072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 77955072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 77955072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 77930496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 77930496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 77930496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 77914112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 77914112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 77897728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 77897728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 77897728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 77889536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 77856768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 77856768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 77856768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 77848576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 77848576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 77840384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 77840384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 77840384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 80060416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 80035840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 ms_handle_reset con 0x557ab4688800 session 0x557ab6bfaf00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc ms_handle_reset ms_handle_reset con 0x557ab6693800
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: mgrc handle_mgr_configure stats_period=5
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 ms_handle_reset con 0x557aba78ac00 session 0x557ab6a92f00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 ms_handle_reset con 0x557abb03c400 session 0x557ab68214a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 79921152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 79921152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 283.834289551s of 284.246185303s, submitted: 90
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 311 ms_handle_reset con 0x557ab6696800 session 0x557ab6df85a0
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 79773696 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e719e000/0x0/0x4ffc00000, data 0x2d0c676/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776980 data_alloc: 218103808 data_used: 7462912
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 79773696 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 79773696 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 312 ms_handle_reset con 0x557ab7f89c00 session 0x557ab4a7c960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 79757312 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 79757312 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e719b000/0x0/0x4ffc00000, data 0x2d0e247/0x2ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 79749120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782928 data_alloc: 218103808 data_used: 7462912
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 79740928 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 313 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df8960
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 79740928 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 314 ms_handle_reset con 0x557ab6dd9800 session 0x557ab685b680
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 78684160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 314 heartbeat osd_stat(store_statfs(0x4e6523000/0x0/0x4ffc00000, data 0x39818a5/0x3b3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 78651392 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 78643200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 78643200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 78643200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 78594048 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 78569472 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 78561280 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 78553088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 78553088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 66.548286438s of 66.841957092s, submitted: 93
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3881037 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 316 ms_handle_reset con 0x557ab6686c00 session 0x557ab727af00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e651e000/0x0/0x4ffc00000, data 0x3984eb6/0x3b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 316 ms_handle_reset con 0x557ab5468c00 session 0x557ab5bfbe00
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 316 ms_handle_reset con 0x557ab65a0c00 session 0x557ab5292000
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e718e000/0x0/0x4ffc00000, data 0x2d14e93/0x2ece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796901 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 78520320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 78520320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 78520320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 78503936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 78503936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 78479360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 78454784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 78454784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 78454784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 78446592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 78446592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 78446592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 78422016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 78405632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 78405632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 78405632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 78397440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 78397440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 78397440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 06:01:08 np0005481065 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 78389248 heap: 431316992 old mem: 2845415832 new mem: 2845415832
